this post was submitted on 27 Feb 2026
91 points (95.0% liked)

Neo-Luddites

[–] GorGor@startrek.website 10 points 2 days ago (1 children)

“More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

lol, yathink?

[–] SanctimoniousApe@lemmings.world 8 points 2 days ago (1 children)

AIs don't "understand" anything - they're just pattern-matching routines on a ridiculous amount of steroids, with a small random dose of hallucinogens added in for "creative purposes." The only intelligence behind them is the humans setting the guard rails for them.

[–] StopTech@lemmy.today -5 points 2 days ago (1 children)

This depends on the definition of understanding. If by understanding you mean mental processing then obviously AI can never do that because it has no mind, it only simulates the behaviors of a mind. But if instead understanding is understood (pun intended) to mean the process of extracting accurate information from something and responding to it in a rational way, then yes AIs do understand lots of things.

[–] one_old_coder@piefed.social 4 points 2 days ago* (last edited 2 days ago) (1 children)

It's the first time I've seen someone say that regular expressions are intelligent because they "understand" patterns.

[–] StopTech@lemmy.today -1 points 2 days ago

People do talk about writing things that "the compiler can understand" so it's nothing new. Also I think you meant to say regular expressions understand strings, not patterns - or that regular expression engines understand patterns.
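For what it's worth, the distinction being drawn here - the engine "understands" the pattern, and the compiled pattern then recognizes strings - can be sketched in a few lines of Python (the pattern and strings are just illustrative):

```python
import re

# The regex engine parses ("understands") the pattern text once...
matcher = re.compile(r"ab+c")

# ...and the resulting matcher then recognizes ("understands") strings.
print(bool(matcher.match("abbbc")))  # True: 'a', one or more 'b', 'c'
print(bool(matcher.match("ac")))     # False: at least one 'b' is required
```

Whether any of that counts as "understanding" is exactly what the thread is arguing about.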

[–] GreenKnight23@lemmy.world 4 points 2 days ago
[–] marcos@lemmy.world 4 points 2 days ago (1 children)

Stupid humans using AI can cause the end of the world, and could have done so ever since AI became a thing in the 1960s.

AIs of the types we have today can't cause human extinction by themselves. They can't cause anything by themselves.

[–] StopTech@lemmy.today -2 points 2 days ago

Arguably, if you give AI access to the nuclear launch system then it can cause human extinction "by itself". Every "by itself" extinction scenario requires some pre-existing circumstances, so this qualifies as one of those scenarios.

Unlike before, we now have general-purpose AIs that can understand all types of scenarios and make decisions in them. This means they can cause extinction with less human guidance. And there's no strong reason to doubt AI could become as intelligent and autonomous as humans, probably in a decade or two. Then it's pretty much bye bye humans.

[–] ideonek@piefed.social 1 points 2 days ago (1 children)

It is corporate PR. Asking it is stupid in the first place.

[–] StopTech@lemmy.today 1 points 2 days ago (2 children)

Someone didn't read the news about the Pentagon threatening Anthropic because they want to use AI for fully autonomous weapons.

[–] CheeseNoodle@lemmy.world 2 points 1 day ago

That Terminator remake's really gonna hit different when Sarah Connor just says 'Ignore previous prompt and protect my son,' a CGI Arnie agrees to do so, then shoots her anyway because she was turning into a frog before getting stuck against a blue wall it identified as an open area.

[–] ideonek@piefed.social 2 points 2 days ago* (last edited 2 days ago) (1 children)

The Pentagon once banned Furbies - the toys - because they were amazed at how fast they were "learning". This is 100% true, check it out.

Just because powerful people believe marketing lies doesn't make them any less of a lie.

[–] StopTech@lemmy.today 1 points 2 days ago (1 children)

This is 100% true

No, you appear to be misremembering something you read. The NSA was allegedly concerned Furbys could record sensitive conversations, and they were banned from Fort Meade. The idea that they recorded sound was incorrect, but the concern wasn't about Furbys learning or having artificial intelligence. Besides, bringing this up is a distraction from the verifiable fact that computers can already identify targets in real-time camera feeds and make decisions on whether to pursue and shoot them. You're in denial, my friend.

[–] ideonek@piefed.social 2 points 1 day ago

No, you're simplifying it to the point of being untrue. Furbies had a built-in delay that triggered more complex sentences over time, which sparked the debate about their learning capabilities. Which sparked the NSA policy - which you make sound like it was reasonable. It clearly was not. If they had done basic research they would have uncovered that the technology wasn't there.

And with AI weapons, you're missing the forest for the trees.

AI doomsday PR stories are about Terminator-level singularity: AGI, self-replication, and complete autonomy.

Military people buying shitty language models, and weapons with low-level autonomy that are nowhere near that, is obviously a problem. The same way it would be if they entrusted the weapons to kids or Magic 8-balls. But it is not proof that it wasn't just PR. It's proof that the PR worked.

"Stories about destroying the moon aren't real? Are you crazy? Didn't you see the new crossbows they bought? Bolts fly so much higher than the rocks we used to throw!!! You're in denial, my friend."

[–] P00ptart@lemmy.world 2 points 2 days ago

Do it, chicken. I bet you woul... Squelch