398 points · submitted 9 months ago by ElCanut@jlai.lu to c/technology@beehaw.org
[-] Ludrol@szmer.info 3 points 9 months ago* (last edited 9 months ago)

I have read some headline that said some of these models just measure the age of the patient and the quality of the machine taking the photos.

[-] MalReynolds@slrpnk.net 18 points 9 months ago

> I have read some headline

Really.

[-] Daxtron2@startrek.website 8 points 9 months ago

Says all you need to know about their opinion lol

[-] Ludrol@szmer.info 6 points 9 months ago

Still, AI misalignment is a real issue. I just don't remember which model was studied and found to be misaligned.

[-] Daxtron2@startrek.website 4 points 9 months ago

That, and bias, absolutely need improvement. That doesn't mean LLMs can't be extremely effective when given appropriate tasks. The problem is that the people who decide where they're used aren't technical enough to understand their strengths and limitations.

[-] intensely_human@lemm.ee 1 points 9 months ago

I don’t think technical knowledge gives as good a sense as a lot of experience working with one.

It's like saying the guys who designed a particular car would know best how it'll perform on various racetracks. My sense is that a driver would have a better feel for it.

[-] Daxtron2@startrek.website 2 points 9 months ago

I guess what I meant by technical knowledge was less about general tech and more about LLM tech specifically.

[-] Kichae@lemmy.ca 9 points 9 months ago

Eh. Depends on which tech is being used and how. For a lot of things, relatively basic ML models purposefully trained do a pretty good job, and are, in fact, limited by the diagnoses in the training data. But more generalized "AI" tools seem rather... questionable.

Like, you can train an SVM on fMRIs to compare structures in the brain between patients diagnosed with bipolar disorder and those who are not, and it will have an accuracy rate on new patients basically equal to the accuracy rate of the doctors who did the diagnosing in the training set. But you'll have a much harder time creating a model that takes in fMRIs and answers the question "which brain disease or abnormality do I have?"
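Not anyone's actual pipeline, just a minimal sketch of that kind of narrow, purpose-built classifier, assuming scikit-learn and fMRI scans already reduced to per-patient feature vectors (the data and sizes here are synthetic placeholders):

```python
# Sketch: binary classifier for "diagnosed bipolar vs. not", trained on
# fMRI-derived features. Everything below is illustrative, not real data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patients, n_features = 200, 50                  # hypothetical cohort and feature count
X = rng.normal(size=(n_patients, n_features))     # stand-in for per-region fMRI features
y = rng.integers(0, 2, size=n_patients)           # labels = the doctors' diagnoses

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The model can only ever be as good as the diagnostic labels it was trained on.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point being: this answers exactly one yes/no question it was trained for, nothing broader.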

This stuff works much closer to advertised when it's narrowly defined and purpose-built, but the people making and funding this work want catch-all doctor replacements, because of course they do: there's way more money in charging hospitals and patients 10% less than a doctor's salary than there is in providing tools that make it easier for doctors to diagnose specific illnesses.

Or, at least there is if you can pull it off.

[-] rho50@lemmy.nz 1 points 9 months ago

Precisely. Many of the narrowly scoped solutions work really well, too, for what they're advertised to do.

As of today, though, they're nowhere near reliable enough to replace doctors, and any breakthrough on that front is very unlikely to come from a language model, IMO.

[-] Kichae@lemmy.ca 2 points 9 months ago

And they should no more replace doctors in the future than x-ray machines did in the past. We should never want them to.
