submitted 1 week ago by misk@sopuli.xyz to c/technology@lemmy.world
[-] anon_8675309@lemmy.world 90 points 6 days ago

Did anyone believe they had the ability to reason?

[-] LodeMike@lemmy.today 37 points 6 days ago
[-] Aeri@lemmy.world 30 points 6 days ago

People are stupid, OK? I've had people insist that it can, in fact, do math "better than a calculator".

[-] Furbag@lemmy.world 11 points 5 days ago

Like 90% of the consumers using this tech are totally fine handing over tasks that require reasoning to LLMs and not checking the answers for accuracy.

[-] Halcyon@discuss.tchncs.de 44 points 6 days ago

They are large LANGUAGE models. It's no surprise that they can't solve the mathematical problems in the study. They are trained for text production. We already knew that they were no good at counting things.

[-] Flocklesscrow@lemm.ee 25 points 6 days ago

"You see this fish? Well, it SUCKS at climbing trees."

[-] zbyte64@awful.systems 5 points 5 days ago

That's not how you sell fish though. You gotta emphasize how at one time we were all basically fish and if you buy my fish for long enough, those fish will eventually evolve hands to climb!

[-] CombatWombat1212@lemmy.ml 56 points 6 days ago

So do I every time I ask it a slightly complicated programming question

[-] Saik0Shinigami@lemmy.saik0.com 19 points 6 days ago

And sometimes even really simple ones.

[-] werefreeatlast@lemmy.world 8 points 6 days ago

How many w's are in "Howard likes strawberries"? It would be awesome to know!

[-] Saik0Shinigami@lemmy.saik0.com 9 points 6 days ago* (last edited 6 days ago)

So I keep seeing people reference this... and I found it a curious concept that LLMs have problems with it. So I asked them... several of them...

Outside of this image... Codestral (my default) actually got it correct and didn't talk itself out of being correct... But that's no fun, so I asked 5 others at once.

What's sad is that Dolphin Mixtral is a 26.44 GB model...
Gemma 2 is the 5.44 GB variant
Gemma 2B is the 1.63 GB variant
LLaVA Llama3 is the 5.55 GB variant
Mistral is the 4.11 GB variant

So I asked Codestral again because why not! And this time it talked itself out of being correct...

Edit: fixed newline formatting.
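
For what it's worth, plain code answers this without any back-and-forth; a minimal Python sketch, using the example string from the question above:

```python
# Counting a letter is a deterministic one-liner in ordinary code.
# An LLM sees tokens rather than individual characters, which is one
# reason it so often fumbles exactly this kind of question.
text = "Howard likes strawberries"
print(text.lower().count("w"))  # -> 2 (one in "Howard", one in "strawberries")
```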

[-] N0body@lemmy.dbzer0.com 56 points 6 days ago

The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding "seemingly relevant but ultimately inconsequential statements" to the questions

Good thing they're being trained on random posts and comments on the internet, which are known for being succinct and accurate.
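
To make the quoted modification concrete, here's a rough sketch of the kind of "seemingly relevant but ultimately inconsequential statement" the researchers describe. The wording and numbers below are made up, not taken from the paper; the point is only the structure: the distractor changes nothing about the arithmetic, yet models tend to fold it into the calculation.

```python
# Hypothetical GSM-style word problem with an appended distractor clause.
# Everything here is illustrative; it is not quoted from the Apple study.
fri, sat = 44, 58
question = (
    f"Oliver picks {fri} kiwis on Friday and {sat} on Saturday. "
    f"On Sunday he picks twice as many as on Friday. "
    "Five of Sunday's kiwis were a bit smaller than average. "  # inconsequential
    "How many kiwis does Oliver have?"
)
answer = fri + sat + 2 * fri  # kiwi size is irrelevant -> 190
print(question)
print(answer)
```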

[-] blind3rdeye@lemm.ee 23 points 6 days ago

Yeah, especially given that so many popular vegetables are members of the brassica genus

[-] MoogleMaestro@lemmy.zip 7 points 6 days ago

Absolutely. It would be a shame if AI didn't know that the common maple tree is actually placed in the family Cannabaceae.

[-] VantaBrandon@lemmy.world 4 points 5 days ago

Definitely true! And ordering pizza without rocks as a topping should be outlawed; it literally has no texture without them. Any human would know that very obvious fact.

[-] nutsack@lemmy.world 32 points 6 days ago

cracks? it doesn't even exist. we figured this out a long time ago.

[-] emerald@lemmy.blahaj.zone 45 points 6 days ago

statistical engine suggesting words that sound like they'd probably be correct is bad at reasoning

How can this be??
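
A toy sketch of what "statistical engine suggesting words" amounts to, nothing like any real model's code, just the general idea of sampling the next token from a probability distribution, with no check on whether the result is true:

```python
import numpy as np

# Toy next-token sampling: score candidate tokens, softmax into probabilities,
# and sample. "Probable-sounding" is the only criterion; truth never enters.
rng = np.random.default_rng(0)
vocab = ["2", "3", "two", "banana"]              # made-up candidate tokens
logits = np.array([2.0, 1.5, 1.0, -2.0])         # made-up scores
probs = np.exp(logits) / np.exp(logits).sum()    # softmax
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```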

[-] Siegfried@lemmy.world 19 points 6 days ago

I would say that if anything, LLMs are showing cracks in our way of reasoning.

[-] MoogleMaestro@lemmy.zip 13 points 6 days ago

Or the problem with tech billionaires selling "magic solutions" to problems that don't actually exist. Or how people on the modern internet are too gullible to recognize when they're being sold snake oil in the form of "technological advancement" that's actually just repackaged plagiarized material.

[-] KingThrillgore@lemmy.ml 27 points 6 days ago* (last edited 6 days ago)

I feel like a draft landed on Tim's desk a few weeks ago, which explains why they suddenly pulled back on OpenAI funding.

People on the removed superfund birdsite are already saying Apple is missing out on the next revolution.

[-] BreadstickNinja@lemmy.world 16 points 6 days ago

"Superfund birdsite" I am shamelessly going to steal from you

[-] whotookkarl@lemmy.world 17 points 6 days ago* (last edited 6 days ago)

Here's the cycle we've gone through multiple times and are currently in:

AI winter (low research funding) -> incremental scientific advancement -> breakthrough in new capabilities as incremental advancements to the scientific models build on each other over time (expert systems, LLMs, neural networks, etc.) -> engineering creates new tech products/frameworks/services based on the new science -> hype for the new tech creates sales, economic activity, research funding, subsidies, etc. -> (for LLMs we're here) people become familiar with the new tech's capabilities and limitations through use -> hype/spending bubble bursts when the overspend isn't matched by infinite "line goes up" growth or new research breakthroughs -> AI winter -> etc...

[-] RaoulDook@lemmy.world 20 points 6 days ago

I hope this gets circulated enough to reduce the ridiculous amount of investment and energy waste that the ramping-up of "AI" services has brought. All the companies have just gone way too far off the deep end with this shit that most people don't even want.

[-] thanks_shakey_snake@lemmy.ca 18 points 6 days ago

People working with these technologies have known this for quite a while. It's nice of Apple's researchers to formalize it, but nobody is really surprised -- least of all the companies funnelling traincars of money into the LLM furnace.

[-] sircac@lemmy.world 15 points 6 days ago

They predict, not reason....

[-] FlyingSquid@lemmy.world 4 points 5 days ago* (last edited 5 days ago)

The part of the study where they talk about how they determined the flawed mathematical formula it used to calculate the glue-on-pizza response was mindblowing.

^(I did not read the study.)^

[-] WrenFeathers@lemmy.world 9 points 6 days ago

Someone needs to pull the plug on all of that stuff.

this post was submitted on 15 Oct 2024
487 points (96.6% liked)
