
This fucking stupid quantum computer can't even solve math problems my classical von Neumann architecture computer can solve! Hahahah, this PROVES computers will never be smart. Only I am smart! The computer doesn't even possess a fraction of my knowledge of anime!!

[-] plinky@hexbear.net 20 points 1 week ago

in a rapidly deteriorating ecology, throwing 300 billion per year at this tech for turning electricity into heat does seem ill-advised, yes.

then I guess it's a good thing that in addition to producing humorous output when prompted with problems it's ill-suited to solve, it can also pass graduate-level examinations and diagnose disease better than a doctor.

[-] plinky@hexbear.net 24 points 1 week ago

Amazing, it can pass tests which it churned through 1000 times but cannot produce a simple answer a child might stumble through. It's not cognition, it's regurgitation. You go get diagnosed at the LLM-shop, mate, have fun

Yeah, you're right! What use is having the entirety of medical knowledge in every language REGURGITATED at you in a context-aware fashion, to someone who can't afford a doctor? After all, it's not cognition in the same way that I do it.

How many shitty doctors getting nudged towards a better outcome for real people does this tech need to demonstrate to offset its OCEAN-BOILING costs, do you think?

[-] plinky@hexbear.net 15 points 1 week ago* (last edited 1 week ago)

at least 3 million.

Cite your sources, mate; AI-driven image recognition of lung issues is kind of a semi-joke in the field.

The majority of shit health outcomes is not missing an esoteric cancer on an image. It's an overworked nurse missing a bone fracture; it's urea/blood analysis not getting done in time; it's a doctor prescribing antibiotics without probiotics afterwards; it's a drug being locked up by IP in a poor country, or a drug costing too much because the Johnson & Johnson acquisition spent that much money on the patent, or the nuts pricing of clinical trials. Developing a new working drug costs like 40 mil; trialing it through the FDA costs 2 billion. Now you tell me how AI cutting that 40 mil to 20 mil will make it cheaper.
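The arithmetic being implied here can be made explicit. A minimal sketch using the comment's own rough figures (the "AI halves development cost" assumption is the comment's hypothetical, not a real estimate):

```python
# Rough figures from the comment above: development ~$40M, FDA trials ~$2B.
dev_cost = 40_000_000
trial_cost = 2_000_000_000
total = dev_cost + trial_cost

# Suppose AI halves the development cost, as the comment posits.
ai_dev_cost = 20_000_000
savings = dev_cost - ai_dev_cost

print(f"total: ${total:,}")
print(f"savings: ${savings:,} ({savings / total:.1%} of total)")
```

The $20M saved is about 1% of the total cost, which is the commenter's point: the trial phase dominates, so cheapening development barely moves the price.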

The majority of healthcare work is, you know, work: patient care, surgery, not fucking Dr. House, MD finding the right drug. 95% of cases could be solved by an honest WebMD, congrats. Who will set your broken arm? Will AI do an MRI scan of your ACL? Maybe an X-ray? A dipshit can look at an image and say something's wrong; an AI can tell you to put it in a cast and avoid lateral movements for a month, so what then?

This is so off the mark it's not worth my time.

[-] MoreAmphibians@hexbear.net 13 points 1 week ago

Can't wait to pick up my prescription for hyperactivated antibiotics.

https://www.cio.com/article/3593403/patients-may-suffer-from-hallucinations-of-ai-medical-transcription-tools.html

How often do you think the use of AI improves medical outcomes versus makes them worse? It's always super-effective in the advertising, but when used in real life it seems to be below 50%. So we're boiling the oceans to make medical outcomes worse.

To answer your question, AI would need to demonstrate improved medical outcomes at least 50% of the time (in actual use) for me to even consider it useful.

50% is the number, yeah? I wish y'all took "no investigation, no right to speak" more seriously.

[-] ferristriangle@hexbear.net 9 points 1 week ago

They've provided a source, indicating that they have done investigation into the issue.

The quote isn't "If you don't do the specific investigation that I want you to do and come to the same conclusion that I have, then no right to speak."

If you believe their investigation led them to an erroneous position, it is now incumbent on you to make that case and provide your supporting evidence.

[-] Cysioland@lemmygrad.ml 8 points 1 week ago* (last edited 1 week ago)

Y'all are suffering because of the lack of downvotes, so you need to actually dunk on someone instead of downvoting and moving on

[-] Assian_Candor@hexbear.net 6 points 1 week ago

We need to make a ChatGPT-powered dunking bot

[-] Cysioland@lemmygrad.ml 2 points 1 week ago

ChatGPT is censored; this calls for some more advanced LLMing, perhaps even a finetune based on the Hexbear comment-section argument corpus. It's only ethical if we do it for the purpose of dunking on chuds/libs

[-] KobaCumTribute@hexbear.net 12 points 1 week ago

LLMs are categorically not AI; they're overgrown text parsers based on predicting text. They do not store knowledge and they do not acquire knowledge. They're basically that little bit of speech processing your brain does to help you read and parse text better, but massively overgrown and bloated in an attempt to make it also function as a mimicry of general knowledge. That's why they hallucinate and are constantly wrong about anything that's not a rote answer from their training data: they do not actually have any sort of thinking bits, mental model, or memory; they're just predicting text based on a big text log and their prompts.

They're vaguely interesting toys, though not worth how ludicrously expensive they are to actually operate, and they represent a fundamentally wrong approach that's receiving an obscene amount of resources to try to make it not suck, without any real results to show for it. The sorts of math and processing involved in how they work internally have broader potential, but these narrowly focused chatbots suck and are a dead end.

These models absolutely encode knowledge in their weights. One would really be showing their lack of understanding about how these systems work to suggest otherwise.

[-] KobaCumTribute@hexbear.net 5 points 1 week ago

Except they don't, definitionally. Some facts get tangled up in them and can consistently be regurgitated, but they fundamentally do not learn or model them. They no more have "knowledge" than image-generating models do, even if the image generators can correctly produce specific anime characters with semi-accurate details.

[-] AtmosphericRiversCuomo@hexbear.net 1 points 1 week ago* (last edited 1 week ago)

"Facts get tangled up in them". lol Thanks for conceding my point.

[-] PaX@hexbear.net 4 points 1 week ago

I am begging you to raise your standard of what cognition or knowledge is above your phone's text prediction lmao

Don't be fatuous. See my other comment here: https://hexbear.net/comment/5726976

[-] Hexboare@hexbear.net 12 points 1 week ago
[-] PaX@hexbear.net 9 points 1 week ago* (last edited 1 week ago)

Quantum computers can decide anything that a classical computer can, and vice versa; that's what makes them computers lmao

LLMs are not computers and they're not even good "AI"*; they have the same basis as Markov chains. Everything is just a sequence of tokens to them; there is ZERO computation or reasoning happening. The only thing they're good at is tricking people into thinking they are good at reasoning or computing, and even that illusion falls apart the moment you ask something obviously, immediately true or false which can't be faked by portioning out some of the input sludge (training data)
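For what it's worth, the Markov-chain idea being invoked here is easy to show concretely. A minimal order-1 (bigram) chain in Python, with a toy corpus of my own invention; real LLMs are vastly larger and use learned token probabilities rather than raw successor lists, but the "next token from previous context" shape is the analogy the comment is drawing:

```python
import random
from collections import defaultdict

def train_bigram_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, n_words, seed=0):
    """Emit words by repeatedly sampling a successor of the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the model predicts the next word the model predicts nothing else"
chain = train_bigram_chain(corpus)
print(generate(chain, "the", 6))
```

Every emitted word is just a successor seen in the training text; nothing is checked for truth, which is the regurgitation-not-reasoning point in miniature.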

It's the perfect system for late capitalism lol, everything else is fake too

*We used to reserve this term for knowledge systems based on actually provable and defeasible reasoning done by computers, which..... IS POSSIBLE. It's not very popular rn and often not useful beyond trivial things with current systems, but if a Prolog system tells me something is true or false, I know it's true or false because the system proved it ("backwards", usually, in practice) through a series of logical inferences from facts that me and the system hold as true, and I can actually look at how the system came to that conclusion, no vibes involved.

There is not a lot of development of this type of AI going on these days..... but if you're curious, I'd rec looking into automated theorem proving, cuz that's where most development of computable logic is going on rn, and it is kinda incredible sometimes how much these systems can make doing abstract math easier and more automatic.

Even outside of that, as someone who has only done imperative programming before, it is surreal to watch a Prolog program give you answers to problems both backwards and forwards, regardless of what you were trying to accomplish when you wrote the program. Like if you wrote a program to solve a math puzzle, you can also give the solution and watch the program give possible problems that could result in that solution :3 and that's barely even the beginning of what real computer reasoning systems can do
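The "same relation answers questions in both directions" property can be sketched even without Prolog. A toy Python query engine over ground facts, in the spirit of Prolog's relational queries (the `parent` facts and names are invented for illustration; real Prolog adds full unification, rules, and backtracking on top of this):

```python
# Ground facts: each tuple is (relation, arg1, arg2).
FACTS = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
    ("parent", "alice", "dana"),
}

def is_var(term):
    return term.startswith("?")

def query(goal, facts=FACTS):
    """Yield a binding of each ?variable in goal for every matching fact."""
    for fact in facts:
        if len(fact) != len(goal):
            continue
        binding = {}
        for g, f in zip(goal, fact):
            if is_var(g):
                if binding.get(g, f) != f:  # same var bound to two values
                    break
                binding[g] = f
            elif g != f:
                break
        else:
            yield binding

# "Forwards": who are alice's children?
children = sorted(b["?x"] for b in query(("parent", "alice", "?x")))
# "Backwards": who is carol's parent, using the exact same relation?
parents = sorted(b["?x"] for b in query(("parent", "?x", "carol")))
print(children, parents)  # → ['bob', 'dana'] ['bob']
```

Every answer is traceable to a specific fact that matched, which is the "no vibes involved" property: the system can show you exactly why it said what it said.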

this post was submitted on 10 Dec 2024
179 points (99.4% liked)

chapotraphouse
