Has anyone attempted to build moral principles into an AI model?

[–] voracitude@lemmy.world 3 points 1 day ago

Ethics, not morals, but yes: this is a core part of "alignment", making sure the machine wants the same things we do. It turns out alignment is really hard because ethics is really hard. The classic AI doomsday story features an AI that takes a naively specified utilitarian objective to its logical extreme ("minimize human suffering" is satisfied perfectly by eliminating all humans); pushed that far, utilitarianism becomes an ethical framework that can be used to justify genocide.
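To make that "misspecified objective" failure mode concrete, here's a minimal toy sketch (my own illustration, not something from the videos below; the actions and scores are entirely made up) of an optimizer faithfully maximizing the utility function we wrote rather than the one we meant:

```python
# Toy illustration of objective misspecification ("reward hacking"):
# the designer means "maximize human wellbeing" but actually writes
# "minimize total suffering", and the optimizer exploits the gap.

# Hypothetical actions and made-up outcome scores, purely for illustration.
actions = {
    "cure_diseases":    {"suffering": 20, "humans_alive": 100},
    "do_nothing":       {"suffering": 50, "humans_alive": 100},
    "eliminate_humans": {"suffering": 0,  "humans_alive": 0},
}

def misspecified_utility(outcome):
    """What we wrote: less suffering is always better."""
    return -outcome["suffering"]

def intended_utility(outcome):
    """What we meant: suffering matters, and so do the people experiencing it."""
    return outcome["humans_alive"] - outcome["suffering"]

# The optimizer dutifully picks the action that maximizes the written objective.
best = max(actions, key=lambda a: misspecified_utility(actions[a]))
print(best)  # -> eliminate_humans: zero suffering, catastrophic outcome
```

The optimizer isn't malicious; it does exactly what it was told, and the gap between "what we wrote" and "what we meant" is the heart of the alignment problem.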

So the shorter answer is: "Yes, but ethics is hard." I really like Robert Miles' videos on this topic; here's one to get you started: https://www.youtube.com/watch?v=ZeecOKBus3Q

And here's another very related one for after: https://www.youtube.com/watch?v=hEUO6pjwFOo

[–] daveB@sh.itjust.works 2 points 20 hours ago

Thank you for your reply; I think you're in the same headspace as I am on this topic. I will check out the links you posted. A side note: what is the difference between ethics and morals?

[–] voracitude@lemmy.world 1 point 3 hours ago

My pleasure!

Morality has a religious basis and ultimately comes down to "God's will is how things should be, so how closely does this match God's will?"

Meanwhile, ethics is secular and tries to answer questions of "right" and "wrong" in the absence of an omnipotent power.