this post was submitted on 03 Sep 2025
242 points (93.2% liked)

Fuck AI

6621 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] fartographer@lemmy.world 47 points 7 months ago (2 children)

Just wanna point out that every time something scares you enough, it also reprograms/rewires your brain. Not trying to discredit the study, but the reprogramming itself really isn't the concern; the concern is whether the reprogramming is beneficial, which this isn't.

[–] ricecake@sh.itjust.works 15 points 7 months ago

The study appears to be saying something different from what the headline implies.

Basically, it might be better to say that using an LLM means you don't have to think as hard, you remember less of the essay, and when you go back to rewrite a previous essay without the LLM, you have more trouble.

They also noted that for some people, using the LLM made them learn much better. Basically the difference between getting it to write for you and using it as a tool to structure information.
One reduced cognitive load from all sources, and the other reduced load relating to integrating different information sources.

Basically it was a proper study by people who knew what they were doing. They never actually said anything about rewiring.

[–] o0evillusion0o@sh.itjust.works 3 points 7 months ago

This comment just reprogrammed my brain after the reprogramming it got from that post 😮

[–] ricecake@sh.itjust.works 29 points 7 months ago (5 children)

The name and presentation of that site has a veneer of legitimacy, but it really doesn't seem credible.

Sample headlines from the same site:

"I warned about this for the past 3 years. The WHO wants universal mental health care and to drug at least a billion of us."

"Do Viruses Exist?"

There's also a lot of general antivax stuff.

Now, sharing a lot of... questionable articles... doesn't make the article in question invalid. It does, however, call into extreme doubt any editorial context the site might be adding.

https://arxiv.org/pdf/2506.08872

This is the actual study being referenced. Its conclusions are significantly less severe than this article presents them as, while still conveying "LLMs are not generally the best tool for facilitating education". For example:

"...trade-off highlights an important educational concern: AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it."

"...from an educational standpoint, these results suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration. The corresponding EEG markers indicate this may be a more neurocognitively optimal sequence than consistent AI tool usage from the outset."

Ultimately, this isn't saying AI tools cause brain damage or make you stupid. It's saying that learning via LLM often causes worse retention of the information being learned. It also says that search engines and LLMs can remove certain types of cognitive load that are not conducive to retention, making learning easier and faster in some cases where engagement can be kept high.

It's important to be clear and honest about what a study is saying, even if it's not as unequivocally negative as the venue might appreciate.

[–] technocrit@lemmy.dbzer0.com 3 points 7 months ago* (last edited 7 months ago)

Well yeah thanks. The headline is an obvious lie so that's kind of a red flag.

[–] zqwzzle@lemmy.ca 21 points 7 months ago (1 children)

Amusingly, it's using what appears to be an AI-generated image.

[–] bizza@lemmy.zip 20 points 7 months ago (3 children)

Clicked the article to give it a read, saw the slop they're using right next to the text, laughed, and closed the damn thing.

[–] paulbg@programming.dev 14 points 7 months ago (1 children)

using AI images in an article about AI use leading to cognitive decline gotta be crazy💀

[–] prole@lemmy.blahaj.zone 4 points 7 months ago (1 children)

Yeah but what do you expect them to do, actually pay a human to make sure they don't do that?

[–] bitjunkie@lemmy.world 6 points 7 months ago (1 children)

Or just survive on the merit of the text content?

[–] hexagonwin@lemmy.sdf.org 1 points 7 months ago

honestly it's hard to tell if the text is also ai slop before reading it fully. and I usually don't have much time to waste on shitty articles, so i just skip those with ai slop images.

[–] dickalan@lemmy.world 1 points 7 months ago* (last edited 7 months ago) (1 children)

Does context escape your brain? The images are not the focus of this article, the fucking article is, you weirdo

[–] bizza@lemmy.zip 3 points 7 months ago (2 children)

You know I’m getting real tired of stupid people being online and thinking they’re allowed to speak to me

[–] bold_atlas@lemmy.world 2 points 7 months ago* (last edited 7 months ago)

Then why do you put that reply button under your post?

[–] dickalan@lemmy.world 1 points 7 months ago
[–] oxysis@lemmy.blahaj.zone 16 points 7 months ago (1 children)

Tech bros stay losing, it is a good day

[–] techt@lemmy.world 6 points 7 months ago

I feel like kids are the primary loss-sufferers here :(

(phrasing there is me trying not to call them losers)

[–] sp3ctr4l@lemmy.dbzer0.com 12 points 7 months ago* (last edited 7 months ago)

Hooray!

We invented cyberpsychosis, for reals!

Isn't it so cool to live in a cyberpunk dystopia!?!

Brb, gonna go OD on some early access/preview alpha braindances!

[–] Zephorah@discuss.online 6 points 7 months ago

There’s a good body of research on cognitive capacity and creativity in regard to enriching environments. Even down to rats. Give rats playgrounds and toys and they perform better at memory tasks and solving puzzles.

I suppose you could train rats to press a button to get a human to come solve problems for them. Take the human away, then what?

What’s insidious here is that the same over-scheduled kids, having their childhoods choreographed for enrichment, are often the ones coming out of childhood with deficits in critical thinking and socialization, plus anxiety, and we think it's because they’re using their phones for every problem they need to solve.

[–] ZDL@lazysoci.al 3 points 7 months ago (10 children)

There are a million legit reasons to avoid and despise LLMs, their makers, and their pushers. I don't think this is one of them.

Literally every piece of technology introduced in the past thousand years has had this kind of hue and cry built up around it, beginning with the printing press and books in Europe. Every form of communication or information technology has had "studies" (or what passed for them in ages past) claiming that the new technology would ruin the minds and morals of people who used it. Remember when television would "rot kids' minds"? Remember when the Internet was going to end civilization as we know it?

This study is just more of the same. You'll find equivalent studies about television back in the '50s to even as late as the '70s.

There are (a myriad of) good arguments for despising LLMs. This (not yet peer-reviewed) MIT study is not one of them. (And I should point out that the actual paper instead of this summary of it has quite a bit more nuance than is reported in the linked article.)

[–] ricecake@sh.itjust.works 7 points 7 months ago

The study itself is entirely benign, and I'd actually accept it as a reason to eschew AI in an educational context. Their conclusion is basically "if you use an LLM to write an essay you tend to not retain the information as well", which is... Downright boring in how reasonable it is. Particularly given the converse observation I wouldn't have expected: if you are already familiar with a subject then using an LLM to write an essay can strengthen your understanding.

The "journal" this summary of the study was shared in is quackery, so I'm not surprised they distorted the findings.

[–] queermunist@lemmy.ml 1 points 7 months ago (1 children)
[–] ZDL@lazysoci.al 2 points 7 months ago (4 children)

Oh, are we playing a game of non sequitur? OK. My move is:

炮二平五 (xiangqi: "cannon two to file five")

Your move.

[–] kibiz0r@midwest.social 2 points 7 months ago

Related post from when this was initially published: https://midwest.social/post/30021473

[–] stevedice@sh.itjust.works 1 points 7 months ago

Oh, baby, did you read the thing? Something tells me you didn't read the thing.
