[–] Chana@hexbear.net 23 points 1 day ago

Machine learning etc. could be great for medicine, but the profit motive means that the best bullshitter will get these early contracts. The best bullshitter has an actual product, and it looks great in demos, but they cut corners, so its fuckups are hard to audit and get swept under the rug when the few engineers who understand the problem raise the issue.

With very carefully collected and annotated datasets and heuristic guard rails, this stuff could actually be great. But those things are slow and expensive.
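For the curious, here's a minimal sketch of what one kind of "heuristic guard rail" could look like. Every threshold and name is invented for illustration; the point is just that the model only gets to answer when cheap sanity checks pass, and everything else gets kicked to a human.

```python
import numpy as np

CONFIDENCE_FLOOR = 0.95          # hypothetical: below this, a human decides
INTENSITY_RANGE = (0.05, 0.95)   # hypothetical: expected normalized pixel range

def guarded_prediction(model_probs: np.ndarray, image: np.ndarray) -> str:
    """Return a label only when the heuristic guards pass; otherwise defer."""
    # Guard 1: the image should look like the data the model was trained on.
    if not (INTENSITY_RANGE[0] <= image.mean() <= INTENSITY_RANGE[1]):
        return "DEFER_TO_HUMAN: input outside expected range"
    # Guard 2: the model should be unambiguously confident.
    if model_probs.max() < CONFIDENCE_FLOOR:
        return "DEFER_TO_HUMAN: low confidence"
    return f"label_{model_probs.argmax()}"

# A 3-class model that is only 70% sure gets deferred, not trusted.
print(guarded_prediction(np.array([0.7, 0.2, 0.1]), np.random.rand(64, 64)))
```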

[–] miz@hexbear.net 30 points 1 day ago (1 children)

I uhhhh god damn I am starting to get a real phobia about being operated on

[–] ElChapoDeChapo@hexbear.net 28 points 1 day ago

I for one am starting to look into medical tourism. I hear doctors are so cheap in China that your trip will practically pay for itself, and you get a trip to China.

[–] SchillMenaker@hexbear.net 29 points 1 day ago

Look, nobody could have seen this coming, least of all the AI; it's terrible at figuring things out. So anyway, we're firing more doctors.

[–] Awoo@hexbear.net 19 points 1 day ago (2 children)

Identifying body parts inside the body seems like exactly the kind of thing AI should NOT be doing; they all look the same. It takes a lot of training and experience to identify them correctly.

[–] bloopything@hexbear.net 7 points 1 day ago

As user Chana said, it makes some sense, just not under a for-profit system. It can be difficult to tell one red mass of flesh on a screen from another, so in theory computer vision could provide a "second pair of eyes" for verification. But that's only if it's actually implemented correctly, with proper testing and training, which it won't be until it's costing hospitals more in lawsuits than it gets in kickbacks.
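To be concrete about "second pair of eyes," here's a minimal sketch with all names and numbers hypothetical: the model never acts, it only cross-checks what the surgeon says they're looking at and flags disagreement.

```python
def second_pair_of_eyes(surgeon_target: str,
                        model_label: str,
                        model_confidence: float,
                        min_confidence: float = 0.9) -> str:
    """Verification only: never overrides the human, only flags mismatches."""
    if model_confidence < min_confidence:
        return "NO OPINION: model not confident enough to weigh in"
    if model_label != surgeon_target:
        return f"FLAG: model sees '{model_label}', surgeon says '{surgeon_target}'"
    return "OK: model agrees"

print(second_pair_of_eyes("appendix", "gallbladder", 0.97))
# -> FLAG: model sees 'gallbladder', surgeon says 'appendix'
```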

[–] Owl@hexbear.net 3 points 1 day ago

It's also exactly the kind of thing that, using the methodologies accepted by the "AI" industry, AI will easily beat human professionals at every time! (Clown industry with clown methodologies.)

[–] vegeta1@hexbear.net 21 points 1 day ago* (last edited 1 day ago) (2 children)

I could see it implemented in research (it helped with the covid vaccine, for one) and in some early radiological detection, but something as sensitive as surgery at this stage? thats-why-im-confused Humans can handle error in research data; with surgery, error is bad news.

[–] VILenin@hexbear.net 19 points 1 day ago (2 children)

The doctors watching as the janitors mop the blood off the OR floor after I die when the “AI” surgery robot misidentifies my heart as my appendix: just think of all the training data we’re going to get from this!

[–] WokePalpatine@hexbear.net 11 points 1 day ago

The academia information economy is basically crypto, but instead of GPU-sudoku-solving money it's meaningless torture-data USD.

[–] Commiechameleon@hexbear.net 4 points 1 day ago

Free rent, data points, and no more fascism!!!

( you-died-1 you-died-2 )

Whoops

[–] Chana@hexbear.net 10 points 1 day ago (1 children)

Even the radiology analysis applications have been largely bullshit. They don't actually outperform humans, and most of that is down to heavily biased datasets and poorly tuned black-box models. At its base, a "modern" model doing this kind of thing is learning patterns, and the pattern learned may not be the thing you're actually trying to recognize. It may instead notice that, say, an image was taken with contrast, which statistically increases the chance that the image contains the issue of interest, since contrast imaging is done when other issues are already suspected. But it didn't "see" the problem in the image; it learned the other thing, so now it thinks most people getting contrast imaging have cancer or whatever.
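A toy illustration of that confound, with all the data invented: the label tracks a "contrast used" flag almost perfectly, so a fitted model puts nearly all its weight on the flag instead of on the actual signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
contrast = rng.integers(0, 2, n)         # was contrast imaging used?
pathology_signal = rng.normal(0, 1, n)   # weak stand-in for the real finding

# Disease mostly co-occurs with contrast (it's ordered when disease is
# already suspected) and is only weakly tied to the real signal.
disease = (0.9 * contrast + 0.1 * (pathology_signal > 0)
           + rng.normal(0, 0.05, n)) > 0.5

X = np.column_stack([contrast, pathology_signal])
model = LogisticRegression().fit(X, disease)

# The weight on the contrast flag dwarfs the weight on the actual signal.
print("coef (contrast, signal):", model.coef_[0])
```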

[–] KuroXppi@hexbear.net 9 points 1 day ago

similar example that I like to share with people, spoiler tag cos of length

I remember reading about early models used by the US(?) military to identify camouflaged tanks. They posed their own tanks, took the photos, and trained on the 'photo with tank' and 'photo without tank' sets, rewarding the model when it correctly identified a tank.

They ran the models and found that the 'AI' was able to identify images with a disguised tank essentially 100% of the time. It was astounding, so they ran more tests, and they then discovered that the 'AI' could identify images where the tank was completely obscured nearly 100% of the time, too.

They celebrated, thinking their model was so advanced it had developed x-ray vision or something. So they ran more tests and discovered that no, the 'AI' wasn't identifying the tank at all. What had happened was that there was a week between the days when they took the two photo sets.

For argument's sake, say the day they took the 'no tank' photos it was sunny, and the day they took the 'camouflaged tank' photos it was slightly overcast. The AI was picking up on the weather/lighting differences and identifying overcast days as 'hidden tank' 100% of the time. Basically, 'AI' takes the shortest inferential path from the data set to the reinforced outcome, which results in shortcuts that fool the human testers.

It's a bit like how geoguessers like rainbolt can tell they're in xyz province of Myanmar because of the lens grime on the Google van.
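A toy version of that tank story, with made-up data: 'tank' photos are all slightly darker (overcast day), the classifier aces the training set, then collapses the moment the lighting correlation flips.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def fake_photos(n, brightness):
    # 8x8 "photos" as flat vectors, differing only in overall brightness
    return rng.normal(brightness, 0.1, size=(n, 64))

# Training day: tanks photographed under overcast (darker) light
X_train = np.vstack([fake_photos(200, 0.4), fake_photos(200, 0.6)])
y_train = np.array([1] * 200 + [0] * 200)  # 1 = tank, 0 = no tank

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))    # ~1.0

# Test day: tanks photographed in sunshine -> the correlation flips
X_test = np.vstack([fake_photos(200, 0.6), fake_photos(200, 0.4)])
print("sunny-day accuracy:", model.score(X_test, y_train))  # ~0.0
```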


[–] SorosFootSoldier@hexbear.net 24 points 1 day ago (1 children)

whywhywhywhywhy in god's name are you putting AI into something as sensitive and life-and-death as surgery!!!?

[–] DasRav@hexbear.net 15 points 1 day ago

Ah great, fresh and entirely preventable horrors. I can't wait for having a real doctor even glance at your problems to become a double super premium service only for CEOs and their pedophile friends.

[–] context@hexbear.net 20 points 1 day ago

well at least now i know how i'll die

[–] DwigtRortugal@hexbear.net 9 points 1 day ago (1 children)

was it not bad enough seeing an open webmd tab in the exam room at your gp's office?

[–] fox@hexbear.net 3 points 1 day ago

You know the story about the $1,000 chalk mark on the generator?

[–] Philosoraptor@hexbear.net 7 points 1 day ago

Shoving AI into everything, including the base of your skull.

[–] KuroXppi@hexbear.net 7 points 1 day ago

Noped out of the article after they started describing some of the medical incidents. Terrifying.