this post was submitted on 13 May 2026
454 points (97.3% liked)

LinkedinLunatics


A place to post ridiculous posts from LinkedIn.com

(Full transparency... a mod for this sub happens to work there... but that doesn't influence his moderation or laughter at a lot of posts.)

founded 2 years ago
[–] 4am@lemmy.zip 24 points 1 day ago* (last edited 1 day ago) (5 children)

Image recognition to help radiologists find tumors is probably fine; especially since you can usually run those models locally.

These morons think ChatGPT is “conscious” and “was trained on humanity’s collective knowledge”. THAT is the problem with ~~AI Derangement Syndrome~~

EDIT: aw fuck let’s not use that acronym

[–] vinceman@lemmy.blahaj.zone 2 points 16 hours ago

That edit is an absolute 10/10, I burst out laughing

[–] Contramuffin@lemmy.world 10 points 1 day ago (1 children)

AI (statistical predictive models) works best when it's designed for a specific purpose and when the model is too challenging to derive by hand. Detecting tumors is a specific purpose, and doing so manually is challenging enough that it requires specialized training. It gets a pass from me.

Predicting protein structures/drug effects: specific purpose, check. Doing it manually, yep, very challenging. Good use of AI.

LLM chatbot: purpose is unclear. Making a non-AI-based chatbot is easy and has been done before. Verdict: useless technology.
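For context on "non-AI-based chatbot is easy": an ELIZA-style rule matcher really is just a few lines. A minimal sketch (the rules and replies here are made-up examples, not from any real system):

```python
import re

# Hardcoded (pattern, response) rules, checked in order -- ELIZA-style.
RULES = [
    (r"\bhello\b|\bhi\b", "Hello! How can I help you?"),
    (r"\bweather\b", "I can't check the weather, but I hope it's nice out."),
    (r"\bbye\b", "Goodbye!"),
]

def reply(message: str) -> str:
    """Return the response for the first matching rule, else a fallback."""
    for pattern, response in RULES:
        if re.search(pattern, message.lower()):
            return response
    return "Sorry, I don't understand."
```

No training data, no model weights, fully deterministic. That's roughly how chatbots worked for decades before LLMs.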

[–] vaultdweller013@sh.itjust.works 3 points 21 hours ago

Or to put it another way: use the right tool for the job; don't use the shitty multi-tool that does every job passably at best. The only exception to this rule of thumb is the humble spork, but that's a piece of engineering genius that couldn't be replicated by AI pushers.

[–] Windex007@lemmy.world 9 points 1 day ago* (last edited 1 day ago)

There are a bunch of studies showing that, despite what people say and think, they inevitably start to offload decision-making to AI inappropriately, and it eventually makes them worse. Interestingly enough, Harvard did a study specifically on radiologists.

The "only use it as an aid" seems to be a myth.

To me it seems very similar to cocaine.

[–] youcantreadthis@quokk.au 5 points 1 day ago

If you aren't conscious, it's easy to think ChatGPT is

[–] Amberskin@europe.pub 1 points 22 hours ago

That’s deep learning, and it’s a well-known and well-understood statistical tool.