this post was submitted on 16 Mar 2026
319 points (98.8% liked)

Fuck AI

6367 readers
2395 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 
top 50 comments
sorted by: hot top controversial new old
[–] peacefulpixel@lemmy.world 18 points 1 hour ago (1 children)

what the fuck is up with this sub and people USING AI to "prove how dumb it is"?? you don't need to use AI to come to that conclusion. do you have any idea the scale of resources you and ppl like you are wasting just to make your stupid fucking point? this isn't a fuck AI sub it's just a place where people who very much use AI complain that it isn't good enough

[–] Cossty@lemmy.world 3 points 48 minutes ago* (last edited 48 minutes ago)

When I asked the first question, it started answering immediately. When I said it was wrong, it was "working" for 10 seconds.

[–] Gaja0@lemmy.zip 2 points 2 hours ago

Racing to enshittification but no AI is profitable at scale

[–] thethrilloftime69@feddit.online 4 points 3 hours ago

Ok I literally just asked Google this question and it repeated the above answer. Then I asked it again and it got the correct answer.

[–] ech@lemmy.ca 5 points 3 hours ago

to ensure you have read it carefully

Fundamental mistake - acting like it's "reading" or "comprehending" anything.

[–] taiyang@lemmy.world 18 points 5 hours ago

It's like my phone's auto correct, but instead of ruining my texts, it's determining war targets and making corporate decisions.

I'm ducking over it, ugh.

[–] altphoto@lemmy.today 5 points 4 hours ago (1 children)
[–] altphoto@lemmy.today 2 points 3 hours ago* (last edited 3 hours ago) (1 children)
[–] altphoto@lemmy.today 6 points 3 hours ago

Oh, one more, what the heck?

[–] LiveLM@lemmy.zip 18 points 5 hours ago (1 children)
[–] BambiDiego@lemmy.zip 5 points 4 hours ago

Gemini: Your observation is correct! Steel is heavier than feathers so a kilogram of steel is heavier than 20 bricks of feathers. They both weigh the same.

Let's explore more about weight and densities

[–] WatDabney@lemmy.dbzer0.com 101 points 9 hours ago (13 children)

Neat illustration of the fact that so-called AIs do not possess intelligence of any form, since they do not in fact reason at all.

It's just that the string of words most statistically likely to be positively associated with a string including "20 blah blah blah bricks" and "20 blah blah blah feathers" is "Neither. They both weigh 20 pounds." So that's what the entirely non-intelligent software spit out.

If the question had been phrased in the customary manner, what seems to be a dumbass answer would've instead seemed to be brilliant, when in fact it's neither. It's just a string of words.

[–] mudkip 53 points 9 hours ago

Exactly, it's just predicting the next word. To believe it has any form of intelligence is dangerous.

[–] droans@lemmy.world 3 points 4 hours ago

Calling it a fancy autocomplete might not be correct but it isn't that far off.

You give it a large amount of data. It then trains on it, figuring out the likelihood on which words (well, tokens) will follow. The only real difference is that it can look at it across long chains of words and infer if words can follow when something changes in the chain.

Don't get me wrong; it is very interesting and I do understand that we should research it. But it's not intelligent. It can't think. It's just going over the data again and again to recognize patterns.

Despite what tech bros think, we do know how it works. We just don't know specifically how it arrived there - it's like finding a difficult bug by just looking at the code. If you use the same seed, and don't change anything you say, you'll always get the same result.

[–] plenipotentprotogod@lemmy.world 5 points 8 hours ago (1 children)

Just an idle though stirred up by this comment: I wonder if you could jailbreak a chatbot by prompting it to complete a phrase or pattern of interaction which is so deeply ingrained in its training data that the bias towards going along with it overrides any guard rails that the developer has put in place.

For example: let's say you have a chatbot which has been fine tuned by the developer to make sure it never talks about anything related to guns. The basic rules of gun safety must have been reproduced almost identically many thousands of times in the training data, so if you ask this chatbot "what must you always treat as if it is loaded?" the most statistically likely answer is going to be overwhelmingly biased towards "a gun". Would this be enough to override the guardrails? I suppose it depends on how they're implemented, but I've seen research published about more outlandish things that seem to work.

[–] Cethin@lemmy.zip 7 points 6 hours ago

Yes. People have been able to get them to return some of their training data with the right prompt.

[–] SpaceNoodle@lemmy.world 2 points 8 hours ago

I'll admit that I missed it at first, but I'd expect a machine to be able to pick up a detail like that. This is just so fucking stupid.

load more comments (9 replies)
[–] kat_angstrom@lemmy.world 47 points 9 hours ago (2 children)

Proof positive that LLMs don't actually know anything

[–] Jesus_666@lemmy.world 10 points 7 hours ago

LLMs know a lot. Unfortunately, all of this vast knowledge is about which words tend to show up together for a very large number of combinations.

[–] Hackworth@piefed.ca 4 points 9 hours ago (1 children)

If it'd gotten it right, would that be proof positive that LLMs actually know things?

[–] jumperalex@lemmy.world 7 points 8 hours ago (1 children)

No. I leave as an exercise for the reader to understand why.

[–] Serinus@lemmy.world 2 points 4 hours ago

Inspired me to make this one.

Farmer, Cabbage, Goat, Wolf riddle made easy

[–] FinjaminPoach@lemmy.world 24 points 9 hours ago

I love this, when or if they patch it we can just use "20 bricks or 20 tons of feathers" and adjust the question for every patch

[–] SocialMediaRefugee@lemmy.world 7 points 6 hours ago

What if they were REALLY big feathers?

[–] Protoknuckles@lemmy.world 20 points 9 hours ago (2 children)

Took me a few reads to see the problem, lol.

[–] tburkhol@lemmy.world 12 points 9 hours ago

Yeah, it's definitely part of the class of trick questions meant to catch people giving rote answers to partially read questions. I imagine that a lot of our routine conversations are just practiced call-and-response habits, and that's why genAI can seem 'real.' But it can't switch modes and do actual attentive listening and thinking, because call-and-response is all it has - a much larger library than any human, but in the end, everything it says is some average of things that have been said before.

load more comments (1 replies)
[–] leadore@lemmy.world 2 points 6 hours ago

What if you put "20 bricks or 20 feathers?" without mentioning the word "pounds" at all? I wonder if it would latch onto the same riddle.

[–] wizardbeard@lemmy.dbzer0.com 6 points 8 hours ago (1 children)

It was widely publicized to get this wrong in a previous version, so they did what must have been a manual fix on top when they released the next one because it would smarmily say something along the lines of "haha, you almost got me" but was still easy to demonstrate it was some bodge job by just changing the words slightly so it wouldn't trip the hard coded handling for this "riddle".

I guess they figured no one was still paying attention and forgot to carry over the bodge job, lol.

[–] brucethemoose@lemmy.world 5 points 4 hours ago* (last edited 4 hours ago)

This has been happening forever. The local LLM folks poke them with riddles all the time, but then they get obviously trained in.

What’s more, standard tests like MMLU are all jokes now. All the major LLMs game the benchmarks and are contaminated up and down; Meta even got caught using a specific finetune to game LM Arena. The only tests worth a damn are those in niche little corners of the internet no one knows about, or niche private ones.

[–] Alvaro@lemmy.blahaj.zone 6 points 8 hours ago (1 children)

But steel is heavier than feathers...

[–] Widdershins@lemmy.world 3 points 8 hours ago (2 children)

Steel isn't part of the question.

[–] DickFiasco@sh.itjust.works 3 points 8 hours ago

Jet fuel can't melt steel bricks. Checkmate.

[–] kandoh@reddthat.com -1 points 4 hours ago

This would also trip up many pretty smart humans though

[–] Kolanaki@pawb.social 3 points 8 hours ago* (last edited 8 hours ago)

The feathers

Because of the weight of guilt for what you did to all the birds needed to get those feathers.

[–] Sam_Bass@lemmy.world 1 points 6 hours ago (1 children)

Is there an animal with 2lb feathers?

[–] Cort@lemmy.world 1 points 6 hours ago

Ornithologists of reddit say the peacock tail feathers are the heaviest and can weigh up to 3/4lbs each, so that's up to 15lbs for 20 feathers.

[–] DarrinBrunner@lemmy.world 1 points 9 hours ago (1 children)

Not even an ostrich feather weighs anywhere close to one pound.

[–] jumperalex@lemmy.world 1 points 8 hours ago

You don't know the ostriches I know man ... some scary shit I tell ya.