this post was submitted on 23 Mar 2025
93 points (89.1% liked)

Futurology

top 28 comments
[–] peoplebeproblems@midwest.social 24 points 2 days ago

This is what AI should be used for. Not the generative crap ChatGPT peddles.

AI is perfect for applications that look at tons of different variables for specific patterns, and models can be retrained on new data far more cheaply than retraining every doctor in the country.

A doctor's first and primary goal is keeping a patient alive. Second is to normalize quality of life. Third is to minimize suffering when possible.

There is a HUGE and artificial shortage of doctors and healthcare providers in this country, and largely in the world. They honestly don't have enough time to review every patient's records and symptoms and make a diagnosis and treatment plan, THEN do their continuing education and licensing requirements, AND do any research if mandated to by their employer, AND, if they are at a teaching hospital, teach.

These AI tools can look at an entire medical record, symptoms, laboratory results, and pathology images and make a very accurate diagnosis that is always run by a physician before a determination is made. AI doesn't forget what it has learned, either.

[–] heavydust@sh.itjust.works 38 points 2 days ago (2 children)

In the medical industry, AI should stick to "look at this, it may be and you must confirm it." Any program that says "100% outperforms doctors" is bullshit and dangerous.

[–] FauxLiving@lemmy.world 12 points 2 days ago (1 children)

In the medical industry, AI should stick to "look at this, it may be and you must confirm it."

Who said that this isn't the planned use case? The article is reporting on the results of a test, not suggesting that AI can replace doctors.

Any program that says "100% outperforms doctors" is bullshit and dangerous.

That's nonsense.

A CPU 100% outperforms a mathematician, a crane 100% outperforms the strongest human, and a shovel can dig faster than your hands. Radar, lidar, optics, etc. are all technologies that perform well beyond human capabilities.

Robotic surgery 100% outperforms doctors. Medical imaging 100% outperforms human doctors. Having a model that can interpret the images better than people isn't at all surprising or dangerous.

It's only the fact that you've implied this will replace doctors that makes it sound scary. But that implication isn't supported by the facts.

[–] HubertManne@piefed.social -1 points 2 days ago (1 children)

can you give an example of robotic surgery done independently by a machine and not a doctor???

[–] mrcleanup@lemmy.world 3 points 2 days ago (1 children)

All the previous examples were things operated by humans: shovel, crane, even the robotic surgery.

I am sure we can teach AI to do some or all of these someday, but demanding an example for one of them as completely autonomous makes it seem like you aren't paying attention, aren't participating in the discussion in good faith, and are just fishing for a "gotcha!" moment.

That's why you are getting downvotes, in case you are curious.

If you do have a good faith argument, clarifying it might get people to listen to you and consider it.

[–] HubertManne@piefed.social 1 points 2 days ago

I read it as replacing doctors, but yeah. I mean, even current crappy AI chatbots can increase the productivity of a human. Granted, we have so thinned our systems by prioritizing efficiency over quality that I'm not sure we will see much of an effect until we have a society where people are relatively satisfied with its functioning.

[–] meliante@lemm.ee 6 points 2 days ago (3 children)
[–] heavydust@sh.itjust.works 27 points 2 days ago (2 children)

Basic safety that should be heavily regulated to prevent medical errors?

I know we live in the age of JavaScript where we don't give a fuck about quality anymore but it shouldn't be encouraged.

[–] clif@lemmy.world 6 points 2 days ago

I'm both amused and mildly offended by the latter part of this comment.

... Well done, 10/10, no notes.

[–] iii@mander.xyz 2 points 2 days ago

npm install cancerDiagnosis

[–] Enoril@jlai.lu 13 points 2 days ago

Because, even today, you can't, and never will, have a 100% reliable answer.

You need at least 2 different validators to reduce the probability of errors. And you can't just run the same AI check twice, as both runs will share the same flaws. You need to check it from a different point of view (whether in terms of technology or resources/people).

This is the principle we have applied in aeronautics for decades, and even with these layers of precaution and security, you still have accidents.

ML is like the aircraft industry a century ago, safety rules will be written with the blood of the victims of this technology.
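The two-independent-validators idea above could be sketched as a simple triage policy. This is a hypothetical illustration (the function and enum names are invented, not from any real system): two validators built on different technology must agree before anything is reported automatically, and any disagreement escalates to a human.

```python
from enum import Enum

class Finding(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

def triage(validator_a: Finding, validator_b: Finding) -> str:
    """Combine two independent validators.

    Hypothetical policy: only agreement produces an automatic
    result, and even then a physician must confirm; any
    disagreement escalates straight to human review.
    """
    if validator_a == validator_b:
        return f"auto: {validator_a.value} (confirm with physician)"
    return "escalate: validators disagree, human review required"

# Agreement between independent validators -> automatic (but confirmed) result
print(triage(Finding.POSITIVE, Finding.POSITIVE))
# Disagreement -> human in the loop
print(triage(Finding.POSITIVE, Finding.NEGATIVE))
```

The point of using two *different* technologies (or a model plus a human) is that their failure modes are uncorrelated, unlike running the same model twice.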

[–] zr0@lemmy.dbzer0.com 4 points 2 days ago (1 children)

Let’s say we have a group of 10 people. 7 with cancer, 3 without.

If the AI detects cancer in 6 out of the 7, that’s a success of 86%.

If the AI also detects cancer in 2 of the 3 healthy people, operating on them still looks like a "success": the operation goes fine and the patient is healthy afterwards.

So operating on the healthy ones always looks like a success, and AI is trained on success. That's why a human should look at the scans too, for now.
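Working through the 10-person example above with the standard binary-classification metrics (the metric names are the usual textbook ones, not taken from the article):

```python
# The 10-person example: 7 patients with cancer, 3 healthy.
# The model catches 6 of the 7 cancers and wrongly flags
# 2 of the 3 healthy people.
tp = 6  # true positives: cancers correctly detected
fn = 1  # false negatives: cancers missed
fp = 2  # false positives: healthy people flagged as cancer
tn = 1  # true negatives: healthy people correctly cleared

sensitivity = tp / (tp + fn)                 # 6/7 ~ 85.7%
specificity = tn / (tn + fp)                 # 1/3 ~ 33.3%
accuracy = (tp + tn) / (tp + fn + fp + tn)   # 7/10 = 70%

print(f"sensitivity {sensitivity:.1%}, "
      f"specificity {specificity:.1%}, accuracy {accuracy:.1%}")
```

On these numbers the headline figure hides that two of the three healthy people would be sent toward treatment, which is exactly why a human should stay in the loop.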

[–] reksas@sopuli.xyz 4 points 2 days ago (1 children)

for now and always. medicine is something you dont want to entrust to automation.

[–] zr0@lemmy.dbzer0.com 2 points 2 days ago

Well, theoretically, an organism is nothing but a system running fully automatically, so I can see the possibility of having it fixed by another system. In the meantime, AI should support doctors by making the invisible visible.

[–] toodd@lemmy.blahaj.zone 20 points 2 days ago (1 children)

I really wish "AI" would die as a term; the machine vision and convolutional neural networks used in this application don't have much to do with the large language models most people associate with the modern incarnation of "AI".

[–] iii@mander.xyz 2 points 2 days ago

don’t have much to do with the large language models

On a technical level I disagree: they're only using one convolution layer. The biggest change compared to previous work on the same dataset is the gated MLP, an idea inspired by transformers (1), which in turn underlie the LLMs that are hyped.
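The gating idea behind a gated MLP can be sketched in a few lines. This is a toy illustration of the general mechanism only (the weights and shapes are made up, and this is not the paper's architecture): a gate branch decides, per output, how much of the value branch passes through.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gated_unit(x, w_value, w_gate):
    """One gated linear unit: out_j = (x . w_value_j) * sigmoid(x . w_gate_j).

    The gate branch (squashed to (0, 1)) modulates the value branch,
    letting the network learn which features to pass or suppress.
    """
    out = []
    for wv, wg in zip(w_value, w_gate):
        value = sum(xi * wi for xi, wi in zip(x, wv))
        gate = sigmoid(sum(xi * wi for xi, wi in zip(x, wg)))
        out.append(value * gate)
    return out

# Toy 2-feature input, two output units with made-up weights.
x = [1.0, -0.5]
w_value = [[0.8, 0.2], [-0.3, 0.5]]
w_gate = [[2.0, 0.0], [0.0, 2.0]]
print(gated_unit(x, w_value, w_gate))
```

Real gated MLPs add normalization, projections, and spatial mixing around this core, but the multiplicative gate is the part borrowed into the transformer lineage.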

In general, I agree that AI is a useless marketing term.

[–] iii@mander.xyz 19 points 2 days ago

Here's the paper: https://www.sciencedirect.com/science/article/pii/S2666990025000059?via=ihub

The confusion matrix and ROC curve are in section 5.2.

The image processing pipeline includes techniques ranging from the 00s (preprocessing steps such as Otsu thresholding and watershed segmentation) to quite recent (the gated MLP, a "transformers light").
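For the curious, Otsu's method picks the gray-level threshold that best separates a bimodal histogram by maximizing between-class variance. A dependency-free sketch, assuming 8-bit pixel values (the toy "image" below is invented for illustration):

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold that maximizes between-class variance.

    `pixels` is any iterable of integer gray levels in [0, levels).
    Pixels <= threshold form the background class.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)

    sum_all = sum(i * h for i, h in enumerate(hist))
    weight_bg, sum_bg = 0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        # Between-class variance for a split at t
        var = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Clearly bimodal toy "image": dark background, bright foreground.
img = [10] * 50 + [12] * 30 + [200] * 20 + [210] * 15
print(otsu_threshold(img))
```

The threshold lands between the two intensity clusters, which is what makes it a common first step for separating cells or lesions from background in microscopy images.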

[–] LeFrog@discuss.tchncs.de 11 points 2 days ago* (last edited 2 days ago) (2 children)

I am able to identify 100% of cancer: just say "It is cancer" to each picture.

~~The article does not mention any other metrics than detection rate. What about recall etc.? Without it, this news is basically worthless.~~

I stand corrected, see the comments below. While the article still lacks important context, accuracy is well defined for this topic.

[–] iii@mander.xyz 20 points 2 days ago* (last edited 2 days ago) (1 children)

Accuracy in a classification context is defined as (N correct classifications / total classifications). So classifying everything as cancer would, in a balanced dataset, give you ~50% accuracy.

This article is indeed badly written PR fluff. I linked the paper in a sister comment. Both the confusion matrix and the ROC curve look phenomenal. The train/test/validation split seems fine too, as do the training diagnostics, so I'm optimistic that it isn't a case of overfitting.

Of course, 3rd-party replication would be welcome, and I can't speak to the medical relevance of the dataset. But the computer vision side of things seems well executed.

[–] LeFrog@discuss.tchncs.de 5 points 2 days ago

Thx for the comment! I edited my post accordingly.

[–] stray@pawb.social 7 points 2 days ago (2 children)

with an impressive 99.26% accuracy.

I feel this would be a blatant lie if it included a bunch of false positives.

https://mander.xyz/comment/17810389

While keeping the FPR low, our model keeps the TPR high, showing that it can accurately find real cases while reducing false alarms.

I'm not educated enough to know what recall means in this context, but there are tables with percentages for it on the page. (Would love an explanation; I'm not sure what to search for to get the right definition.)

[–] iii@mander.xyz 3 points 2 days ago* (last edited 2 days ago)

I'm not educated enough to know what recall means in this context

This wiki page on binary classification describes the terminology. I always have to refer to that page too, as it's very confusing :)
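In this context the terminology boils down to a few ratios over the confusion matrix. A quick illustrative sketch (the counts below are invented for the example, not the paper's numbers):

```python
def rates(tp, fp, tn, fn):
    """Standard binary-classification ratios.

    recall (= sensitivity = TPR): share of actual positives caught.
    FPR: share of actual negatives wrongly flagged.
    precision: share of positive calls that are correct.
    """
    return {
        "recall/TPR": tp / (tp + fn),
        "FPR": fp / (fp + tn),
        "precision": tp / (tp + fp),
    }

# Invented counts: 95 cancers caught, 5 missed,
# 2 healthy people flagged, 98 healthy people cleared.
print(rates(tp=95, fp=2, tn=98, fn=5))
# recall 0.95, FPR 0.02, precision ~ 0.979
```

So "keeping the FPR low while keeping the TPR high", as the paper puts it, means catching most real cases while raising few false alarms.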

[–] LeFrog@discuss.tchncs.de 2 points 2 days ago

Thx for the comment! I edited my post accordingly.

[–] nthavoc@lemmy.today 5 points 2 days ago* (last edited 2 days ago)

From the article: Of course, it's not a tool designed to replace medical professionals but to be used in collaboration with cancer specialists to accurately spot the disease and then monitor how successful treatment has been. What's more, this kind of model is a much more rapid, accessible and affordable way to diagnose cancers.

This is the key difference and how AI should be used. It doesn't replace the human but effectively aids them in their research. The whole "outperforming doctors" pitch needs to change to "reducing critical misses for doctors." Otherwise it gets roped in with the ChatGPT-like AIs, which are absolutely garbage for decision making.

[–] match@pawb.social 8 points 2 days ago

one of the particularly good uses for AI! in fact it's so good and cheap that it'd actually be hard to turn a lot of profit on! which... hm....

[–] jimmy90@lemmy.world 4 points 2 days ago

this is not new

https://pmc.ncbi.nlm.nih.gov/articles/PMC10217496/

this AI is new though, but is it better?

[–] Wilco@lemm.ee -3 points 2 days ago (1 children)

If I state "every living creature that ever existed or will ever exist had, has, or will have cancer" I just diagnosed all the cancer in existence ... including cancer thousands of years from now. That is a 100% diagnosis rate.

But what would be the error rate?

The accuracy is provided if you read the article; the paper is also linked.