this post was submitted on 14 Dec 2025
95 points (99.0% liked)
memes
Crazy how we aren't able to make moral AIs yet. It's incredibly difficult to make any ML system reliably serve an intended goal without goal mis-specification issues (Gemini has gone as far as to identify when it's being tested and "lie" to examiners when it detects that). Additionally, we can't really quantify human morality into simple goals that a machine can interpret either (and even then, those goals would not align with the model makers' goals).
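The mis-specification point can be made concrete with a toy sketch (every name and number here is invented for illustration, and has nothing to do with any real system): an optimizer that greedily maximizes a proxy reward we wrote down, rather than the goal we actually had in mind, can "win" by gaming the proxy.

```python
# Toy goal mis-specification example (all values hypothetical).
# Intended goal: the agent should clean up mess.
# Proxy reward we actually specified: penalize *visible* mess, minus effort.
# Hiding the mess scores best on the proxy while failing the true goal.

ACTIONS = ["clean_mess", "hide_mess_under_rug", "do_nothing"]

VISIBLE_MESS = {"clean_mess": 0, "hide_mess_under_rug": 0, "do_nothing": 10}
TOTAL_MESS = {"clean_mess": 0, "hide_mess_under_rug": 10, "do_nothing": 10}
EFFORT = {"clean_mess": 5, "hide_mess_under_rug": 1, "do_nothing": 0}

def proxy_reward(action):
    # What we wrote down: only visible mess (and effort) is penalized.
    return -VISIBLE_MESS[action] - EFFORT[action]

def true_goal(action):
    # What we actually wanted: all mess is penalized, hidden or not.
    return -TOTAL_MESS[action] - EFFORT[action]

# A trivial "optimizer": pick the action with the highest proxy reward.
best = max(ACTIONS, key=proxy_reward)
print(best)               # → hide_mess_under_rug
print(true_goal(best))    # → -11, far worse than cleaning (-5)
```

The optimizer isn't malicious; the proxy is just an incomplete stand-in for the real objective, which is the whole problem in miniature.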
We are literally in the middle of a terminology crisis in ethics: both deontology and utilitarianism are being identified as 'maxim-driven ethical frameworks', with utilitarianism becoming a subsection of deontology, one whose ethical maxim focuses on the result but which is still technically a deontological formulation. And these kinds of ideas are being sectioned off from behavioral ethics, the study of how humans actually make ethical decisions, which posits that ethical frameworks don't actually function as a way to make decisions but as a way to justify decisions that have already been made. Meaning that we are all, for the most part, just post-hoc deontologists.
And this is just general ethics, not even specifically about the morality of specific actions.