this post was submitted on 23 Mar 2025
92 points (88.3% liked)
Futurology
In the medical industry, AI should stick to "look at this; it may be something, and you must confirm it." Any program that says "100% outperforms doctors" is bullshit and dangerous.
Who said that this isn't the planned use case? The article is reporting on the results of a test, not suggesting that AI can replace doctors.
That's nonsense.
A CPU 100% outperforms a mathematician, a crane 100% outperforms the strongest human, and a shovel can dig faster than your hands. Radar, lidar, optics, etc. are all technologies that perform well beyond human capabilities.
Robotic surgery 100% outperforms doctors. Medical imaging 100% outperforms human doctors. Having a model that can interpret the images better than people isn't at all surprising or dangerous.
It's only the fact that you've implied that this will replace doctors that makes it sound scary. But that implication isn't supported by facts.
can you give an example of robotic surgery done independently by a machine and not a doctor???
All the previous examples were things operated by humans: shovel, crane, even the robotic surgery.
I am sure we can teach AI to do some or all of these someday, but demanding an example for one of them as completely autonomous makes it seem like you aren't paying attention, aren't participating in the discussion in good faith, and are just fishing for a "gotcha!" moment.
That's why you are getting downvotes, in case you are curious.
If you do have a good faith argument, clarifying it might get people to listen to you and consider it.
I read it as replacing doctors, but yeah. I mean, even current crappy AI chatbots can increase the productivity of a human. Granted, we have thinned our systems so much in prioritizing efficiency over quality that I'm not sure we will see much of an effect until we have a society where people are relatively satisfied with how it functions.
Why?
Basic safety that should be heavily regulated to prevent medical errors?
I know we live in the age of JavaScript, where we don't give a fuck about quality anymore, but it shouldn't be encouraged.
I'm both amused and mildly offended by the latter part of this comment.
... Well done, 10/10, no notes.
Because, even today, you can't get a 100% reliable answer, and you never will.
You need at least two different validators to reduce the probability of errors. And you can't just run the same AI check twice, as both runs will share the same flaw. You need to check it from a different point of view (be it in terms of technology or resources/people).
This is the principle we have applied in aeronautics for decades, and even with these layers of precaution and security, you still have accidents.
ML is like the aircraft industry a century ago: its safety rules will be written in the blood of this technology's victims.
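As a rough sketch of what "two different validators" could mean in software (purely illustrative; `model_a` and `model_b` stand in for hypothetical classifiers built with different technology or data, nothing from the article):

```python
# Illustrative sketch of dual independent validation, aeronautics-style.
# model_a and model_b stand in for hypothetical classifiers built with
# different technology/data, so they're unlikely to share the same flaw.

def triage(scan, model_a, model_b) -> str:
    a, b = model_a(scan), model_b(scan)
    if a != b:
        # Disagreement between independent checks is itself a warning sign.
        return "escalate: validators disagree"
    if a == "suspicious":
        return "escalate: possible finding"
    return "clear (still subject to routine human review)"

# Toy stand-ins for two independently built models:
pessimist = lambda scan: "suspicious"
optimist = lambda scan: "clear"
print(triage("scan_001", pessimist, optimist))   # escalate: validators disagree
print(triage("scan_001", pessimist, pessimist))  # escalate: possible finding
```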
Let's say we have a group of 10 people: 7 with cancer, 3 without.
If the AI detects cancer in 6 of the 7 sick people, that's a success rate of 86%.
If the AI also flags cancer in 2 of the 3 healthy people, those two get operated on and come out "cancer-free", so on paper that's a 100% success as well.
So operating on the healthy ones always counts as a success, and the AI is trained on success. That's why a human should look at the scans too, for now.
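To spell out why that metric misleads, here's a minimal Python sketch (the numbers come from the 10-person example above; the function names are just illustrative, not from any real medical-AI library): a model graded only on how many real cancers it catches can max out that score by flagging everyone, and only the second number, or a human, notices.

```python
# Minimal sketch of the arithmetic above. Numbers are from the 10-person
# example in this comment; function names are illustrative only.

def sensitivity(true_positives: int, actual_positives: int) -> float:
    """Share of real cancers the model catches."""
    return true_positives / actual_positives

def specificity(true_negatives: int, actual_negatives: int) -> float:
    """Share of healthy people the model correctly leaves alone."""
    return true_negatives / actual_negatives

# 10 people: 7 with cancer, 3 healthy.
print(f"{sensitivity(6, 7):.0%}")  # 86% -- the "success" figure above
print(f"{specificity(1, 3):.0%}")  # 33% -- 2 of 3 healthy people flagged

# A model that flags *everyone* maxes out the first number...
print(f"{sensitivity(7, 7):.0%}")  # 100%
# ...while clearing nobody, which the first metric never notices.
print(f"{specificity(0, 3):.0%}")  # 0%
```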
For now and always. Medicine is something you don't want to entrust to automation.
Well, theoretically, an organism is nothing but a system running fully automatically, so I can see the possibility of having it fixed by another system. In the meantime, AI should support doctors by making the invisible visible.