this post was submitted on 14 Aug 2025
111 points (96.6% liked)

Fuck AI

top 11 comments
[–] paequ2@lemmy.today 20 points 7 months ago
  • Three months before AI was introduced, the adenoma detection rate (ADR) was around 28%.
  • Three months after AI was introduced, the rate dropped to 22% when clinicians were unassisted by AI.
  • The study found that AI did help endoscopists with detection when used, but once the assistance was removed, clinicians were worse at detection.

What a strange place to be. Detection went up with AI, which is good. But now you're at the mercy of the AI companies, hoping they don't double the price, and if you can't pay, you end up worse off than where you started.

Also, this quote stood out to me.

“Often, we expect there to be a human overseeing all AI decision-making but if the human experts are putting less effort into their own decisions as a result of introducing AI systems this could be problematic.”

YES. I see this all the time. My coworkers tend to rubberstamp the AI generated code. Which makes sense. If they were too lazy to think through a problem—why would they suddenly be meticulous in fact-checking AI slop?

[–] takeda@lemmy.dbzer0.com 17 points 7 months ago (1 children)

It will have even more severe effects.

One of the best ways to learn critical thinking is writing essays in class about different subjects. The reason is that you can't just say you support something because you feel like it; you have to back it up with some evidence.

With people using ChatGPT to write their essays, our society becomes dumber. And at a time when we need critical thinking more than ever.

[–] Catoblepas@piefed.blahaj.zone 5 points 7 months ago

Oh, I’m positive they’ve long since stopped teaching that. At least in poor districts. My youngest sister, Gen Z, was an honors student who literally wasn’t taught anything about how to write an essay, not even your basic 5-paragraph ‘in this essay I will’ essay. My Zillennial middle sister also struggled with essays, but she at least had the idea and just wasn’t great at it.

For comparison, I went through the same school system before No Child Left Behind went into effect, and we spent at least a few full weeks of instruction in high school English class, if not more, doing nothing but essays because ‘you’ll need it for college.’ Along with regular essay homework assignments.

[–] skisnow@lemmy.ca 11 points 7 months ago (2 children)

I saw this same article posted over on r/ChatGPT and every single top comment is people saying “so what, why does the doctor need to be skilled if the AI can do it” 🤦‍♂️

[–] Catoblepas@piefed.blahaj.zone 5 points 7 months ago (1 children)

Love to only have access to doctors whose abilities are at the mercy of whether the computer works 😌

[–] tarknassus@lemmy.world 5 points 7 months ago (1 children)

“Is there a doctor in the house?”

“Yes. Let me load up ChatGPT. Ah damn, no signal. Sorry guys.”

[–] Catoblepas@piefed.blahaj.zone 3 points 7 months ago (1 children)

That would never happen! When have you ever been to a doctor’s appointment where the computer and network didn’t work instantly and seamlessly?! 🤪

[–] tarknassus@lemmy.world 2 points 7 months ago

Wait, they’re supposed to work?

cries in UK GP surgeries with broken IT infrastructure

[–] AnarchistArtificer@slrpnk.net 2 points 7 months ago (1 children)

A podcast I listened to recently spoke about failure modes of AI. They used an example of a toll bridge in Denmark that was recently impassable because it only took card payments, and the payment processing system was down. The sensible failure mode in this scenario would be for the toll barrier to default to open and just let cars through when technical problems make it impossible for people to pay. Unfortunately, this wasn't the case, and no one had the ability to manually raise the barrier. Apparently they ended up having to dismantle the barrier while the payment system was down.

This is very silly, and highlights one of the big dangers of how AI systems are currently being used (even though this particular problem doesn't have AI involved, I don't think, just regular tech problems). The point is that tech can be awesome at empowering us, but we need to think about "okay, but what happens when things go wrong?", and we need to be asking that question in a manner that puts humans at the centre.

That was a far more trivial scenario than the situation described in the article. If AI tools help improve detection rates, then that's awesome. But we need to actually address what happens if those technologies cease to be available, whether because the tools rely on proprietary models, because of power outages, or through countless other ways that this could go wrong.

[–] skisnow@lemmy.ca 1 points 7 months ago

I suspect the whole problem could be avoided with some judicious UX to force the doctors to make and log their estimations first.

[–] CrayonDevourer@lemmy.world -1 points 7 months ago

What an odd, super lengthy way to say "AI increased cancer detection rates".