this post was submitted on 16 Jun 2025
85 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] rook@awful.systems 5 points 1 day ago (1 children)

Some people casting their eyes over this monster of a paper have less than positive thoughts about it. I’m not going to try to summarise the summaries here, but the threads aren’t long (and are vastly shorter than the paper), so reading them won’t take long.

Dr. Cat Hicks on mastodon: https://mastodon.social/@grimalkina/114690973548997443

Ashley Juavinett on bluesky: https://bsky.app/profile/analog-ashley.bsky.social/post/3lru5sua3fk25

[–] froztbyte@awful.systems 4 points 1 day ago (1 children)

also why I was reserved in my wording (I am, at best, "armchair enthusiast" level of clued on detailed neuroscience)

it's so damn messy though. here's some concurrent (and/or semi-sequenced branching) thoughts/opinions:

  • there's enough people getting high on LLMs (et al) that it is morally and ethically worthwhile to investigate the implications thereof
  • it's extremely fucking hard to objectively quantify this
  • we should still try
  • it seems like there's a hell of a need for funding in the applicable research fields of human study, in terms of figuring out the dynamics of this shit
  • ...wow wouldn't it be nice if they got even 3% of the openai grift budget
[–] rook@awful.systems 5 points 1 day ago (1 children)

It isn’t clear to me at this point that such research will ever be funded in English-speaking places without a significant set of regime changes… no politician or administrator can resist outsourcing their own thinking to LLM vendors in exchange for funding. I expect the US educational system will eventually provide a terrible warning to everyone (except the UK, whose government looks at the US and says “oh my god, that’s horrifying. How can we be more like that?”).

I’m probably just feeling unreasonably pessimistic right now, though.

[–] froztbyte@awful.systems 3 points 1 day ago

believe me, I hear ya on that

[–] Architeuthis@awful.systems 11 points 3 days ago

Anecdotally, it took about a week and a half from the C-suite okaying the use of Copilot for people to begin considering googling beneath them and to start escalating to me the literal dumbest shit, just because Copilot was having a hard time with it.

[–] HedyL@awful.systems 35 points 4 days ago (2 children)

Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc. It is true that the use of these tools and technologies has probably contributed to a decline in skills such as memorization, handwriting or mental calculation. However, I believe there is an important difference with chatbots. Typewriters (or computers) usually produce very readable text (much better than most people's handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source isn't any less correct than information that had been memorized (probably more so). The same can't be said about chatbots and LLMs: they aren't known to produce accurate or useful output in a reliable way, so many of the skills being lost by relying on them might not be replaced with something better.

[–] hydroptic@sopuli.xyz 25 points 4 days ago (1 children)

Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc.

Show me a study where they find typewriter users "consistently underperformed at neural, linguistic, and behavioral levels"

[–] HedyL@awful.systems 18 points 4 days ago (1 children)

No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. It's important to note that I was part of Generation X and, due to innate clumsiness (and being left-handed), I didn't have pretty handwriting even before computers became the norm. But I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.

But I admit that this is not comparable to chatbots.

[–] hydroptic@sopuli.xyz 4 points 4 days ago (2 children)

… what.

This article is about a scientific study that shows clear differences in brain activity between people who used LLMs and those who didn't. If you can't tell the difference between that and whatever the hell you're going on about, you might want to cut down on the LLM usage.

[–] Jrockwar@feddit.uk 22 points 4 days ago (1 children)

Maybe do some self-reflection first? You're missing their point, and it's bigger than Saturn.

[–] HedyL@awful.systems 13 points 4 days ago

LOL - you might not want to believe that, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong almost all the time).

Nevertheless, there will probably be some people who claim that thanks to LLMs we no longer need the skills for language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just like calculators can calculate a square root faster). I think that's bullshit, because LLMs just aren't capable of doing any of these things in a meaningful way.

[–] BlueMonday1984@awful.systems 29 points 3 days ago

Posted this on a Discord I'm in - one of the near-immediate responses was "I'm glad they made a non-invasive procedure to lobotomise people".

Nothing more to add, I just think that's hilarious

[–] fullsquare@awful.systems 23 points 3 days ago

chatbots really are leaded gasoline for zoomers

[–] blakestacey@awful.systems 18 points 3 days ago

From p. 137:

The most consistent and significant behavioral divergence between the groups was observed in the ability to quote one's own essay. LLM users significantly underperformed in this domain, with 83% of participants (15/18) reporting difficulty quoting in Session 1, and none providing correct quotes. This impairment persisted albeit attenuated in subsequent sessions, with 6 out of 18 participants still failing to quote correctly by Session 3. [...] Search Engine and Brain-only participants did not display such impairments. By Session 2, both groups achieved near-perfect quoting ability, and by Session 3, 100% of both groups' participants reported the ability to quote their essays, with only minor deviations in quoting accuracy.

[–] vane@lemmy.world 2 points 2 days ago

Stupid people are easier to control.