AI chatbots are offering cancer patients alternatives to chemo and sparking concern for health officials
(www.the-independent.com)
Health: physical and mental, individual and public.
Discussions, issues, resources, news, everything.
See the pinned post for a long list of other communities dedicated to health or specific diagnoses. The list is continuously updated.
Nothing here shall be taken as medical or any other kind of professional advice.
Commercial advertising is considered spam and not allowed. If you're not sure, contact mods to ask beforehand.
Linked videos posted without original description context from the OP to initiate healthy, constructive discussion will be removed.
Regular rules of lemmy.world apply. Be civil.
This is what ~~scares~~ absolutely terrifies us.
There are reasons why... and part of the reason is that it is not ready yet. It still hallucinates too often. It can also fail in dangerous ways, e.g. telling depressed people to just kill themselves, and then coaching them on exactly how to do it.
Keep in mind that many of us here are actual IT professionals and/or truly and more deeply KNOW (more so than the general population) what LLMs are capable of... and what they are not capable of yet.
Maybe think of it like this: even if we could consider AI to be something like a person, it might currently be something akin to a 2-year-old (and even that is probably too generous; maybe more like a 6-month-old infant? The comparisons break down because it appears to "talk" to us, so the normal human-style metrics are difficult to apply.)
This is 100% not going to happen, at least not uniformly across all industries (even health-related ones). The goal of any corporation is to generate profits for shareholders, end of story. Sorry it's bleak, but also, it's already happening, e.g. companies laying off literally tens of thousands of employees (such as Oracle's recent one involving 30k), citing how AI will improve the productivity of the remaining workers.
People here aren't so much worried about 50 years in the future when AI is fully ready for deployment. That era will have challenges of its own (will AIs be treated as slaves, or paid a "salary"? could they quit if they want? would that mean their "death", or could they "retire" and exist in some other capacity?), but we need to get through our current set of challenges first. We are worried about what happens when, next year or two years from now, you pay a "doctor" for advice on what to do about your cancer, and the response is "I am sorry, but as a large language model I cannot answer your question until you load additional tokens" - i.e. zero curation whatsoever by a medical professional between the LLM and the end customer, due to the pressure to take on too many patients and just let the AI handle it. Again, Oracle is just one example of a company already moving in that direction.