this post was submitted on 12 Feb 2026
technology
I could see it implemented in research (it helped with the COVID vaccine, for one) and in some radiological early-detection work, but something as sensitive as surgery at this stage?
Humans can handle error in research data; with surgery, error is bad news.
The doctors watching as the janitors mop the blood off the OR floor after I die when the “AI” surgery robot misidentifies my heart as my appendix: just think of all the training data we’re going to get from this!
The academia information economy is basically crypto, but instead of GPU sudoku-solving money it's meaningless torture-data USD.
Free rent, data points, and no more fascism!!!
Whoops
Even the radiology analysis applications have been largely bullshit. They don't actually outperform humans, and most of the claimed results come down to heavily biased datasets and poorly tuned black-box models. At its base, a "modern" model doing this kind of thing is learning patterns, and the pattern learned may not be the thing you're actually trying to recognize. It may instead notice that, say, an image was taken with contrast, which statistically increases the chance that the image contains the issue of interest, since contrast imaging is done when other problems are already suspected. But it didn't "see" the problem in the image; it learned the other thing, so now it thinks that most people getting contrast imaging have cancer or whatever.
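A minimal sketch of that failure mode, with everything made up for illustration (the feature names, probabilities, and effect sizes are invented, not from any real radiology system): the label is driven almost entirely by whether a contrast study was ordered, while the "image finding" feature carries only a weak, noisy signal. A plain logistic regression ends up putting most of its weight on the ordering decision rather than on the image.

```python
# Made-up confounded dataset: disease correlates with the contrast-ordered
# flag (a selection effect), while the actual image signal is weak and noisy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Confound: 1 = a contrast study was ordered (clinician already suspicious)
contrast = rng.integers(0, 2, size=n)

# Disease is far more likely when a contrast study was ordered
disease = (rng.random(n) < np.where(contrast == 1, 0.6, 0.05)).astype(int)

# "Image finding": a weak, noisy proxy for the actual pathology
finding = disease * 0.3 + rng.normal(0, 1.0, size=n)

X = np.column_stack([contrast, finding])
clf = LogisticRegression().fit(X, disease)

print("weight on contrast flag:", clf.coef_[0, 0])
print("weight on image finding:", clf.coef_[0, 1])
# The contrast-flag weight dwarfs the image-finding weight: the model has
# mostly learned "contrast study ordered => disease", not anything about
# the image itself.
```

Swap the contrast flag for scanner model, hospital site, or text burned into the corner of the image and you get the same story.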
A similar example that I like to share with people (spoiler tag because of length):
I remember reading about early models used by the US(?) military to identify camouflaged tanks. They posed their own tanks, took the photos, and trained on the 'photo with tank' and 'photo without tank' sets, rewarding the model when it correctly identified a tank.
They ran the models and found that the 'AI' was able to identify images with a disguised tank essentially 100% of the time. It was astounding, so they ran more tests and then discovered that the 'AI' could identify images where the tank was completely obscured nearly 100% of the time, too.
They celebrated, thinking their model was so advanced it had developed x-ray vision or something. So they ran more tests and discovered that no, the 'AI' wasn't identifying the tank at all. What had happened was that there was a week between the days when they had taken the two photo sets.
For argument's sake, the day they took the 'no tank' photos was sunny, and the day they took the 'camouflaged tank' photos was slightly overcast. The AI was picking up on the weather/lighting differences and identifying overcast days as 'hidden tank' 100% of the time. Basically, 'AI' takes the shortest inference path between the dataset and the rewarded outcome, which results in shortcuts that fool the human testers (a toy version of this is sketched below).
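Here's a toy version of the tank story, with everything invented (fake 16x16 "photos", made-up brightness values, and a nearest-centroid classifier standing in for whatever the original project used): in the "training shoot" every tank photo is overcast and every no-tank photo is sunny, and the tank itself adds only a faint local signal. Evaluated on photos collected the same way, the model looks flawless; break the weather/label pairing and it fails almost every time.

```python
# Toy tank/weather shortcut: the classifier ends up learning "dark photo = tank".
import numpy as np

rng = np.random.default_rng(1)

def shoot(n, tank, overcast):
    """Fake 16x16 'photos': overall brightness set by the weather, plus a
    faint patch where the camouflaged tank sits."""
    base = 0.3 if overcast else 0.7            # overcast days are darker
    imgs = base + rng.normal(0, 0.05, size=(n, 16, 16))
    if tank:
        imgs[:, 6:10, 6:10] += 0.02            # barely-visible tank signal
    return imgs.reshape(n, -1)

# "Training shoot": every tank photo is overcast, every no-tank photo is sunny
train_tank    = shoot(200, tank=True,  overcast=True)
train_no_tank = shoot(200, tank=False, overcast=False)

# Nearest-centroid "model": just the mean image of each class
centroid_tank    = train_tank.mean(axis=0)
centroid_no_tank = train_no_tank.mean(axis=0)

def predict(imgs):
    d_tank    = np.linalg.norm(imgs - centroid_tank, axis=1)
    d_no_tank = np.linalg.norm(imgs - centroid_no_tank, axis=1)
    return (d_tank < d_no_tank).astype(int)   # 1 = "tank"

def accuracy(imgs, label):
    return (predict(imgs) == label).mean()

# Evaluated on photos collected the same way, it looks flawless...
print(accuracy(shoot(100, True,  True),  1))   # ~1.0
print(accuracy(shoot(100, False, False), 0))   # ~1.0

# ...but break the weather/label pairing and it fails almost every time.
print(accuracy(shoot(100, True,  False), 1))   # ~0.0 (tank on a sunny day)
print(accuracy(shoot(100, False, True),  0))   # ~0.0 (empty field, overcast)
```

The lesson in both stories is the same: accuracy on a held-out slice of the same biased collection tells you nothing about what the model actually learned; you have to test on data where the confound is broken (different days, scanners, hospitals).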
It's a bit like how geoguessers like Rainbolt can tell they're in xyz province of Myanmar because of the lens grime on the Google van.