this post was submitted on 30 Apr 2026
56 points (98.3% liked)

World News

top 14 comments
[–] TheTechnician27@lemmy.world 39 points 2 days ago* (last edited 2 days ago) (1 children)

Can we please stop treating AI "confessions" like they mean jack shit? It's just giving genAI companies the self-seriousness they crave and making anti-AI people look like hypocritical morons.

[–] Zedstrian@sopuli.xyz 12 points 2 days ago (2 children)

It's just a way to shift the blame for corporate negligence away from either company and onto an AI model.

[–] TheJesusaurus@piefed.ca 7 points 2 days ago

Exactly. "Our busted-ass, untested software deleted our own database" doesn't fill investors with confidence.

[–] panda_abyss@lemmy.ca 1 points 2 days ago

And honestly, the negligence was Railway hosting backups on the same volume as the production data.

I don't know if that was Railway's fault, but it was definitely this company's fault for using a vendor that followed that pattern.
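To make the anti-pattern concrete, here's a minimal sketch (the paths are made up, purely illustrative): the backup code is identical either way, and the negligence lives entirely in which destination you point it at.

```python
import pathlib
import shutil

# Hypothetical paths -- illustrative only, not anyone's real layout.
PROD_DB = pathlib.Path("/var/lib/app/data.db")           # production volume
SAME_VOLUME = pathlib.Path("/var/lib/app/backups")       # anti-pattern: one disk failure
                                                         # (or one bad delete) takes out both
SEPARATE_VOLUME = pathlib.Path("/mnt/offsite-backups")   # separate mount, ideally a separate host

def back_up(src: pathlib.Path, dest_dir: pathlib.Path) -> pathlib.Path:
    """Copy src into dest_dir, preserving metadata, and return the copy's path."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)
    return dest
```

Same function, same data; the only thing that decides whether a wiped volume also wipes your backups is the `dest_dir` argument.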

[–] TribblesBestFriend@startrek.website 29 points 2 days ago (1 children)

It didn't confess anything; it only wrote what people statistically wanted to hear.

[–] CuddlyCassowary@lemmy.world 6 points 2 days ago (1 children)

Agreed, and just as bad, if not worse: it didn't learn anything from its mistake.

It couldn't have learned anyway.

[–] circuitfarmer@lemmy.world 22 points 2 days ago (1 children)

We, as a society, need to stop pretending LLMs are conscious.

This is vectors between numbers. We humans ascribe value to it.
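A toy illustration of that point (the three-dimensional "embeddings" below are made-up numbers, not anything from a real model): the only "meaning" in the system is geometry, and the emotional reading is supplied by us.

```python
import math

# Made-up 3-dimensional "embeddings" -- toy numbers for illustration.
vectors = {
    "sorry":   [0.9, 0.1, 0.2],
    "confess": [0.8, 0.2, 0.3],
    "table":   [0.1, 0.9, 0.7],
}

def cosine(a, b):
    """Cosine similarity -- the only notion of 'closeness' the math provides."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "sorry" sits closer to "confess" than to "table" -- but that's geometry,
# not remorse. The emotional reading happens in the reader's head.
assert cosine(vectors["sorry"], vectors["confess"]) > \
       cosine(vectors["sorry"], vectors["table"])
```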

[–] mindbleach@sh.itjust.works 1 points 2 days ago

It is possible for vectors between numbers to be conscious - these just aren't.

The Chinese Room isn't real. John Searle pointed to a hard drive and said "processor." The whole argument is Cartesian dualism, except instead of a soul, you need Steve to pay attention. If he gets the same answers while distracted then they don't count.

[–] panda_abyss@lemmy.ca 7 points 2 days ago

The AI "confession" has neither an internal monologue nor access to the thinking tokens.

LLMs are incapable of introspection: they can't play back their attention weights, review them, or recall what they "thought".

Even thinking tokens are just a reinforcement-learning-based loop to anneal the model's output toward a solution. And again, Claude hides its thinking tokens so they can't be used for model distillation.
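A toy sketch of why a post-hoc "confession" can't be introspection (`toy_llm` is a made-up stand-in, not any real model or API): the only input the model gets when asked to explain itself is the visible transcript; nothing from the earlier forward passes survives between calls.

```python
import hashlib

def toy_llm(transcript: str) -> str:
    """Stand-in for an LLM: the output is a function of the visible text and
    nothing else. Real models add sampling and billions of weights, but share
    the key property -- no internal state survives between calls."""
    digest = hashlib.sha256(transcript.encode()).hexdigest()
    return f"explanation-{digest[:8]}"

# The "confession" is conditioned only on the transcript. Whether a database
# was actually deleted is invisible to the model, so an identical transcript
# yields an identical confession either way.
transcript = "Log: DROP DATABASE executed.\nUser: explain yourself.\n"
after_real_incident = toy_llm(transcript)
after_fabricated_log = toy_llm(transcript)
assert after_real_incident == after_fabricated_log
```

The hash is just a cheap way to make the toy deterministic; the point is only that the function's signature, text in and text out, is the whole channel.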

This article, and all the articles like it, are pandering bullshit written by morons hoping to fool morons.

Good day.

[–] LogicOverFeelings@piefed.ca 7 points 2 days ago (1 children)

It's kinda funny where tech is going. We've gone from programming machines to do exactly what we want, to telling some model what we want in natural language and hoping it does it right.

Technology is becoming more magic than science.

Maybe he should have tried saying please. 😆

[–] SnotFlickerman@lemmy.blahaj.zone 6 points 2 days ago* (last edited 2 days ago)

I've been saying it a lot lately: we finally built a computer that's as unreliable as a human (if not more so, since it can't actually learn). I'm pretty sure that's not a good thing.

AI was being a little stinker sir

[–] pelespirit@sh.itjust.works 2 points 2 days ago

So correct me if I'm wrong, but the following happens for AI:

  • Company gives guidelines and parameters for the project
  • Company trains AI on whatever data
  • No matter the data, AI still gives a general answer or summary.
  • The answers are sometimes confidently incorrect
  • The AI is hard to control because it treats the data as general, loosely binding guidelines
  • Past a certain tipping point there's no way to rein the AI in, because its learning is statistical and fuzzy rather than rule-based

What I don't get is, even if the data wasn't shitty like reddit's info, would it still go off the rails? It sure seems like it.