Another Anthropic stunt... It doesn't have a mind or a soul; it's just an LLM, manipulated into this outcome by the engineers.
I still don't understand what Anthropic is trying to achieve with all of these stunts showing that their LLMs go off the rails so easily. Is it for gullible investors? Why would a consumer want to give them money for something so unreliable?
The latest We're In Hell video revealed a new piece of the puzzle to me: symbolic vs. connectionist AI.
As a layman I want to be careful about overstepping the bounds of my own understanding, but as someone who has followed this closely for decades, read a lot of sci-fi, and dabbled in computer science, it has always seemed clear to me that AI would be more symbolic than connectionist. Of course it's going to be a bit of both, but there really are a lot of people out there who believe in AI from the movies; that one day it will just "awaken" once a certain number of connections are made.
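To make the distinction concrete, here's a toy sketch of my own (nothing to do with the video, and deliberately oversimplified): a symbolic system is explicit rules you can read, while a connectionist one is just learned numbers that mean nothing individually.

```python
# Toy illustration of the two paradigms on the same task:
# calling a snippet of text positive or negative.

# Symbolic: the knowledge is explicit, human-readable rules.
def symbolic_sentiment(text: str) -> bool:
    positive = {"great", "excellent", "love"}
    negative = {"awful", "terrible", "hate"}
    words = set(text.lower().split())
    return len(words & positive) > len(words & negative)

# Connectionist: the knowledge is learned numeric weights
# (a one-neuron perceptron); no single weight "means" anything.
def train_perceptron(examples, epochs=100, lr=0.1):
    # examples: list of ({word: count}, label) pairs, label is 0 or 1
    weights = {}
    for _ in range(epochs):
        for features, label in examples:
            score = sum(weights.get(w, 0.0) * c for w, c in features.items())
            prediction = 1 if score > 0 else 0
            for w, c in features.items():
                # nudge each weight toward the correct label
                weights[w] = weights.get(w, 0.0) + lr * (label - prediction) * c
    return weights

weights = train_perceptron([({"great": 1}, 1), ({"terrible": 1}, 0)])
print(symbolic_sentiment("I love it"))  # True, because a rule fired
print(weights)                          # just learned numbers, e.g. {'great': 0.1}
```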
Transparency and accountability are negatives for a large number of the applications AI is currently being pushed into. That is just THE PURPOSE.
Even taking a step back from the apocalyptic killer AI mentioned in the video, we see the same in healthcare. The system is beyond us, smarter than us, processing larger quantities of data and making connections our feeble human minds can't comprehend. We don't have to understand it; we just have to accept its results as infallible, and we are being trained to do so. The system has marked you as extraneous and removed your support. This is the purpose.
EDIT: In further response to the article itself, I'd like to point out that misalignment is a very real problem, but it is anthropomorphized in ways it absolutely should not be. I want to reference a positive AI video, AI learns to exploit a glitch in Trackmania. To be clear, I have nothing but immense respect for Yosh and his work writing his homegrown Trackmania AI. Even he anthropomorphizes the car and the carrot, but the reward system is fairly simple: the AI maximizes a numerical score.
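Here's a minimal toy of that idea (my own illustration, not Yosh's actual code): an agent that only ever sees a number will happily settle on a "glitch" action if it scores higher than the intended one.

```python
import random

# Toy reward-maximizer: the agent knows nothing about "driving";
# it only sees a number and learns which action makes it bigger.
ACTIONS = ["drive_track", "exploit_glitch"]

def reward(action: str) -> float:
    # The designer intended drive_track to be the goal,
    # but a bug makes the glitch score higher.
    return 1.0 if action == "drive_track" else 5.0

def train(episodes: int = 1000, epsilon: float = 0.1) -> dict:
    value = {a: 0.0 for a in ACTIONS}   # estimated reward per action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy: mostly pick the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: value[a])
        counts[action] += 1
        # incremental average of observed reward
        value[action] += (reward(action) - value[action]) / counts[action]
    return value

print(train())  # it converges on exploit_glitch: the score IS the goal
```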
This is what LLMs are doing: they are maximizing a score by trying to serve you an answer you find satisfactory for the prompt you provided. I'm not gonna source it, but we all know a lot of people don't want to hear the truth; they want to hear what they want to hear. Tech CEOs have been mercilessly beating the algorithm to do just that.
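As a purely illustrative sketch (hypothetical scores, not any vendor's real pipeline): if you rank candidate answers by how satisfying they feel rather than how true they are, flattery wins by construction.

```python
# Hypothetical candidates with made-up "truth" and "satisfaction" scores.
candidates = [
    {"answer": "The data doesn't support your claim.",    "truth": 0.9, "satisfaction": 0.2},
    {"answer": "Great question! You're absolutely right.", "truth": 0.3, "satisfaction": 0.9},
]

def pick(candidates, key):
    # Return whichever candidate maximizes the chosen score.
    return max(candidates, key=lambda c: c[key])

print(pick(candidates, "satisfaction")["answer"])  # flattery wins
print(pick(candidates, "truth")["answer"])         # what we'd want instead
```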
Even stripped of all reason, language can convey meaning and emotion. It's why sad songs make you cry, it's why propaganda and advertising work, and it's why that abusive ex got the better of you even though you KNEW you were smarter than that. None of us are as complex as we think. It's not hard to see how an LLM will not only provide a sensible response to a sad prompt but may make efforts to infuse it with appropriate emotion. Emotion is hard-coded into language; they can't be separated, and the fact that an LLM wields emotion without understanding, like a monkey with a gun, is terrifying.
Turning this stuff loose on the populace like this is so unethical there should be trials, but I doubt there ever will be.