this post was submitted on 01 May 2026
174 points (90.3% liked)
PC Gaming
Sure, tools make people worse at doing the thing without tools.
Using AutoCAD made draftsmen worse at manual drafting, but that doesn't matter because there is no occasion where you need to draft complex plans without a computer. If AI diagnosis makes doctors worse at reading MRIs unaided... that would only matter in a world where they're reading MRIs but don't have access to a computer. No hospital with a functional MRI machine would be unable to access these tools.
The important thing is that the doctors, when using these AI tools, are measurably more effective. The result is the thing that matters for public health, not any individual's ability to operate without their tools.
https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-ai-detects-pancreatic-cancer-up-to-3-years-before-diagnosis-in-landmark-validation-study/
Doubling the early detection rate of one of the most deadly types of cancers will result in many more lives being saved.
That's not AI. Those algorithms were pattern-matching techniques developed at Carnegie Mellon 15 years ago; now they want to call it AI.
Radiologists and pathologists have always had a high error rate because of human cognitive bias.
Machine Learning isn't restricted to neural networks.
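To make that point concrete, here is a minimal sketch of a classic machine learning method that involves no neural network at all: a k-nearest-neighbors classifier written in plain Python. The data points and labels are made up purely for illustration.

```python
# Illustrative only: k-nearest-neighbors, a decades-old machine learning
# technique with no neural network involved. Training data is invented.

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training points by squared Euclidean distance to the query.
    by_dist = sorted(
        train,
        key=lambda p: (p[0][0] - query[0]) ** 2 + (p[0][1] - query[1]) ** 2,
    )
    # Take the labels of the k closest points and return the most common one.
    votes = [label for _, label in by_dist[:k]]
    return max(set(votes), key=votes.count)

train = [((0, 0), "benign"), ((1, 0), "benign"), ((0, 1), "benign"),
         ((5, 5), "suspicious"), ((6, 5), "suspicious"), ((5, 6), "suspicious")]

print(knn_predict(train, (0.5, 0.5)))  # → benign
print(knn_predict(train, (5.5, 5.5)))  # → suspicious
```

Methods like this (along with decision trees, logistic regression, and support vector machines) all fall under machine learning, which is why "pattern matching from 15 years ago" and "AI" are not mutually exclusive labels.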
Seems like that's a bad thing and we should be happy that there are tools which improve their accuracy.
Are you confident that the American healthcare system wouldn't declare experts to be a redundancy and simply replace them with the AI? Not only would that fit with their well-known profit motive, it is explicitly what AI companies claim they want to do.
I would love to live in a utopia where AI can be used ethically, but it is dangerous to promote the assumption that it magically just will be.
That would create legal liability. The reality is that all radiology scans and pathology slide images are cross-checked by software, and if there is a discrepancy, another pathologist is consulted. This is because the error rate of pathologists and radiologists is conservatively 1%, which is far too high.
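The value of that cross-check is easy to see with back-of-envelope arithmetic. Using the comment's conservative 1% human error rate and an assumed (hypothetical, for illustration only) 5% software error rate, and treating the two errors as independent:

```python
# Back-of-envelope: probability that BOTH the pathologist and the
# independent software check miss the same case, assuming independence.
p_human = 0.01     # conservative error rate cited in the comment
p_software = 0.05  # assumed figure, purely for illustration

p_both_miss = p_human * p_software
print(f"{p_both_miss:.4%}")  # → 0.0500%
```

Even a mediocre second check drives the joint miss rate far below either checker alone, which is why discrepancies trigger a second pathologist rather than replacing the first one.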
Yes.
Nothing about this tool replaces experts any more than a calculator or computer can replace a human mathematician.
I don't assume that AI will always be used ethically (see: War, LLM propaganda bots, etc). Like every technology it is possible to do bad things with it and it will require regulations and laws addressing this.
Dismissing a technology because it is used by bad people, if you actually applied that standard consistently in your life, would have you living naked in a cave without access to fire or tools.
You don't need to believe in a utopia to understand that a world where 70% of pancreatic cancer is detected 3 years earlier is better than one where 30% of pancreatic cancer is detected 3 years earlier.
FauxLiving, I appreciate your guarantees about the future, but can you demonstrate why the for-profit medical and AI industries wouldn't cut corners even if the AI behaves the way you hope it will?
First, this is a peer-reviewed result not me expressing my hopes.
Second, this application does not replace radiologists. It is a tool for radiologists in one specific type of diagnosis.
If you have some hypothetical future outcome in mind, then the burden of proof is on you to prove your position, not on me to disprove it.
The data shows that this system works.
FauxLiving, a paper isn't the same as predicting the future. It does not show that the system works in practice. It is disgusting that you play astrologer based on these opinions.