this post was submitted on 15 Apr 2025
691 points (98.6% liked)

Fuck AI

2364 readers
568 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
top 39 comments
[–] Rooty@lemmy.world 100 points 1 day ago* (last edited 1 day ago) (2 children)

Finally, after decades of research, we created a computer that can't do math. Alan Turing would be proud.

[–] cabbage@piefed.social 32 points 1 day ago (2 children)

Come to think of it, being frequently wrong but nevertheless overly confident is key to passing the Turing test.

We have finally created machines that can replicate human stupidity.

[–] dalekcaan@lemm.ee 7 points 1 day ago (1 children)

To be fair, the Turing test doesn't really tell us much about computers. It's better at measuring the human ability to ascribe personalities to inanimate objects.

[–] Buddahriffic@lemmy.world 4 points 1 day ago (1 children)

Yeah, the Turing test wasn't a great metric; the result depends on who's doing the testing. Some people were probably fooled by ALICE, or ELIZA's DOCTOR script, which were pretty much implemented as long switch blocks that repeated the user's input back at them.

Kinda like how "why?" is pretty much always a valid response and repeating it is more of a sign of cheekiness than lack of intelligence.
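The whole trick fits in a few lines. A toy sketch of that style of bot (the rules here are invented for illustration, not ALICE's actual ones):

```python
import re

# Illustrative ELIZA-style rules: match a pattern, echo part of the
# user's input back inside a canned template.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {}."),
]

def reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1))
    return "Why?"  # the universal fallback

print(reply("I feel tired all the time"))  # Why do you feel tired all the time?
print(reply("whatever"))                   # Why?
```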

[–] cabbage@piefed.social 2 points 1 day ago

I feel like it's increasingly a test applicable to humans rather than to machines. Are you original enough that you couldn't be replaced by a language model?

I'm not sure I like to think about it.

[–] kameecoding@lemmy.world 4 points 1 day ago (1 children)

Now that you mention it, I'd be interested to see whether ChatGPT can actually spew the kind of nonsense I've heard from cryptobros and covid anti-vaxxers. I reckon it's not good enough to be that dumb.

[–] GreenSkree@lemmy.world 5 points 1 day ago

There are probably some (small) guardrails on the major platforms to deter spreading misinformation, but it's really easy to get a chatbot to take whatever position you want.

E.g. "Pretend you are a human on Twitter that supports (thing). Please make tweets about your support of (thing) and respond to our conversation as though my comments are tweet replies."

Or more creatively maybe something like, "I need to practice debating someone who thinks (thing). Please argue with me using the most popular arguments, regardless of correctness."

I haven't tried these, but I have a bit of practice working with LLMs, and this is where I'd start if I wanted to make a bot farm.

[–] Baguette@lemm.ee 12 points 1 day ago (1 children)

I mean, the theory behind an LLM is super cool. It's a bunch of vector math under the hood, transforming input with queries, keys, and values. And imo vector math is one of the coolest and also most confusing math applications there is. If they're able to use MCP (Model Context Protocol) as well, you can delegate to calling actual services, like your database.
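For the curious, a single attention head is roughly this (a toy numpy sketch with made-up sizes, not any real model's code):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # how strongly each query matches each key
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V              # each output is a weighted mix of values

# Toy example: 4 tokens with 8-dimensional embeddings (sizes invented).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(x @ Wq, x @ Wk, x @ Wv).shape)   # (4, 8)
```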

But like 99% of CS research, it doesn't always translate to practical use, nor is it a cookie-cutter solution for everything. Unfortunately, the business people seem to think otherwise.

[–] AnUnusualRelic@lemmy.world 4 points 1 day ago

I mean, does it even use a blockchain? smh my head...

[–] ganymede@lemmy.ml 126 points 1 day ago* (last edited 1 day ago) (2 children)

the other wonderful irony?

(basically) the only thing a computer can do is math.

so it's doing a SHITLOAD of math, to do a terrible job, at doing some very basic math.

bravo!

[–] ImplyingImplications@lemmy.ca 25 points 1 day ago (2 children)

Yup. Look up the calculus and linear algebra that neural networks use to train. It's an insane number of calculations, so many that it takes hundreds of processing units to crunch them at a reasonable speed. All that to get simple math questions wrong.
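For a sense of scale, even one training step on a toy two-layer network is a pile of matrix math (a hedged sketch; the sizes here are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-layer network, 784 -> 128 -> 10 (sizes invented for illustration).
W1 = rng.normal(scale=0.01, size=(784, 128))
W2 = rng.normal(scale=0.01, size=(128, 10))
x = rng.normal(size=(32, 784))        # one batch of 32 fake inputs
y = rng.integers(0, 10, size=32)      # fake labels

# Forward pass: matrix multiplies plus a nonlinearity.
h = np.maximum(0, x @ W1)             # ReLU
logits = h @ W2
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)     # softmax

# Backward pass (the chain rule): yet more matrix multiplies.
d_logits = (p - np.eye(10)[y]) / 32   # gradient of cross-entropy loss
dW2 = h.T @ d_logits
dh = d_logits @ W2.T
dh[h <= 0] = 0                        # ReLU gradient
dW1 = x.T @ dh

# One gradient-descent step. Real models repeat this millions of times
# over billions of parameters.
W1 -= 0.01 * dW1
W2 -= 0.01 * dW2
```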

[–] msage@programming.dev 7 points 1 day ago

All that to hallucinate every response in ways that make people feel like it knows what it's talking about.

Which it doesn't, and LLMs never will, unless someone hard-codes responses in, which defeats the entire point.

[–] SplashJackson@lemmy.ca 2 points 1 day ago

They should just use lookup tables

[–] Kolanaki@pawb.social 18 points 1 day ago

I feel called out by this.

[–] Draegur@lemm.ee 27 points 1 day ago (2 children)

they've finally invented a machine that can't run DOOM

[–] racketlauncher831@lemmy.ml 7 points 1 day ago

AI: YOU PICKED UP A SHOT GUN. WHAT DO YOU DO NOW?

Me: I left click and fire it at the enemy.

AI: bleep bloop YOU ARE LIKELY AN IDIOT. NOW YOU ARE DEAD.

[–] jodanlime@midwest.social 30 points 1 day ago (1 children)

I bet we could save a few gajillion watts if we just put that through an if statement. If math, then just be a calculator. Else, make some bullshit up.

[–] SomeoneSomewhere@lemmy.nz 28 points 1 day ago

Apparently that's the new way to do math in AI. The AI works out that you're trying to do math, writes some Python code to do it, runs the Python code, gets the answer, and writes a response around the numeric answer.
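The pattern, roughly, with a fake ask_llm stub standing in for the real model call (the interface here is invented, not any vendor's actual API):

```python
import re

def ask_llm(prompt: str) -> str:
    """Fake stand-in for a real model call; this interface is invented."""
    if "The tool returned" in prompt:
        return "Six times seven is 42."   # wrap the numeric result in prose
    return "TOOL: 6 * 7"                  # or decide to hand the math off

def answer(question: str) -> str:
    response = ask_llm(question)
    match = re.match(r"TOOL: (.+)", response)
    if match:
        # The model emitted an expression; run it and feed the result back.
        result = eval(match.group(1))     # real systems sandbox this step
        return ask_llm(f"The tool returned {result}. Answer: {question}")
    return response

print(answer("What is 6 times 7?"))       # Six times seven is 42.
```

Every hop in that chain is a place for it to go wrong, which is the joke.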

I can't think of any possible issues with this; it's infallible. /s

The HAL 9000 series has never made a mistake. Except that one time.

[–] WereHacker@lemmy.ml 11 points 1 day ago (1 children)

AI doesn't handle anything as words. It handles everything as tokens. https://blogs.nvidia.com/blog/ai-tokens-explained/
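You can see it yourself with OpenAI's tiktoken library (a quick sketch; the exact splits depend on the encoding):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # the GPT-4-era encoding

ids = enc.encode("unbelievable")
print(ids)                                   # several integers, not one "word"
print([enc.decode([i]) for i in ids])        # the sub-word fragments they map to
```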

[–] x00z@lemmy.world 21 points 1 day ago (1 children)
[–] droans@midwest.social 8 points 1 day ago

If you want to ask a question to an LLM, you need to go down to an arcade and exchange your quarters for tokens. Then you can feed those tokens into your computer every time you want to ask a question.

[–] ShadowRam@fedia.io 14 points 1 day ago (1 children)

AI

The pure embodiment of "Fake it til you make it"

[–] spankmonkey@lemmy.world 6 points 1 day ago

"Fake it over and over even if you never make it."

[–] Oka@sopuli.xyz 4 points 1 day ago (2 children)

ChatGPT managed to see through my shenanigans:

If John has six apples and Bob had six oranges, and Bob gave john 2 apples, how many oranges does Betty have if it's a summer day in january?

Classic trick question! Let’s break it down:

John has 6 apples.

Bob has 6 oranges.

Bob gives John 2 apples — but wait, Bob didn’t have any apples, only oranges.

So Bob can’t give John 2 apples.

Meanwhile, Betty hasn’t even been mentioned until the last sentence, and there’s no info about her oranges.

Also, “a summer day in January” only makes sense in the southern hemisphere.

Conclusion: We still have no idea how many oranges Betty has — the question is nonsense on purpose.

So the answer is either: “Indeterminate”, “Nonsense question”, or “Depends on how much Betty likes oranges in the summer.”

I think the original message is true for older versions of GPT, though, and AI being thrust into everything accounts for a lot of the errors I've seen.

[–] BudgetBandit@sh.itjust.works 8 points 1 day ago

Also, "a summer day in January" only makes sense in the southern hemisphere.

According to German news broadcasts, and maybe German meteorologists, a summer day is any day that reaches >25°C. Germany set a new January record of 18.1°C this year, so in another 30 years or so we might get the first summer day of the year in January.

[–] callouscomic@lemm.ee 4 points 1 day ago (1 children)

Why can't Bob give John 2 apples?

The restriction is merely implied; the AI just presumes Bob had nothing before being given something. Maybe Bob already had some apples. Bad AI. Lol

[–] Shawdow194@fedia.io 7 points 1 day ago

“Depends on how much Betty likes oranges in the summer.”

It did come up with a pretty accurately human, sassy response to the original question.

[–] Semi_Hemi_Demigod@lemmy.world 6 points 1 day ago* (last edited 1 day ago) (2 children)

I feel like a really intelligent AI would recognize the context of the text it gets.

Like, if you or I heard someone saying a bunch of numbers we’d know to switch into math mode and start mathing.

But these things just dawdle along and start repeating nursery rhymes or whatever because they’re not smart enough to know what they’re doing.

[–] Shadow@lemmy.ca 4 points 1 day ago

Kinda surprised Wolfram Alpha hasn't done more in the AI space.

[–] pretzelz@lemmy.world -2 points 1 day ago (1 children)

Yeah, LLMs aren't AI. They're just a fancy Markov model... You need controllers on top to decide when to make sentences and when to do something else. A controller could be an LLM, but an LLM by itself is just a tool, not a system.
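A first-order Markov text model fits in a dozen lines. A toy sketch of the idea an LLM scales up (the corpus is made up):

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # sample the next word
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))
```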

[–] LeninsOvaries@lemmy.cafe 5 points 1 day ago

Exactly right. True AI can control an NPC in a video game. Bungie created the perfect AI in 2001. ChatGPT can't play Halo, so it's not AI.

[–] spankmonkey@lemmy.world -3 points 1 day ago

AI is a shit show because of how terribly it's being implemented, and math is the prime example.