this post was submitted on 07 Mar 2026
610 points (98.6% liked)

Technology


Over the past few weeks, several US banks have pulled back from lending to Oracle for the expansion of its AI data centres, according to a report.

[–] CileTheSane@lemmy.ca 1 points 1 day ago (1 children)

they have good declarative knowledge

No. They don't. They are good at making declarative statements.
That's not the same thing.

Every day you also probably see a new post of humans being blatantly wrong; does that mean humans can't know things?

I fully agree that asking a random human for help with something is just as effective as asking an LLM to help with something.

If I need to know something (like who was the first president of the United States) I will not go outside and ask a random human, I will ask a trustworthy source.
If I need some code written I won't have a random human do it, I will interview people to find someone capable.
If I need someone to interact with customers I won't let some random human come in and do it.

[–] Not_mikey@lemmy.dbzer0.com 0 points 1 day ago (1 children)

They are good at making declarative statements.
That's not the same thing.

What's the difference between making correct declarative statements and having declarative knowledge? If I am able to accurately state every president of the US, wouldn't you say I have knowledge of the list of US presidents? The only way you can judge my declarative knowledge of something is by my ability to make accurate declarative statements; that's what a test is. If making accurate declarative statements is not the measure of declarative knowledge, then what is?

An LLM will give more accurate declarative statements on more questions than any human can. Wouldn't that mean an LLM has more declarative knowledge than any human? So is it not more trustworthy for giving declarative statements than any random human? Would you not trust an LLM's answer on who the 4th president is over a random human's?

[–] CileTheSane@lemmy.ca 1 points 1 day ago (1 children)

An LLM will give more accurate declarative statements on more questions than any human can

Not if you include "I don't know" as an accurate statement or penalize the score for incorrect declarative statements.

So is it not more trustworthy for giving declarative statements than any random human? Would you not trust an LLM's answer on who the 4th president is over a random human's?

I would absolutely trust the random human more, because they're not going to make shit up if they don't know. It will either be "I don't know" or "I would guess," to make it clear they aren't confident. The LLM will give me a declarative answer, but I have no fucking clue if it's accurate or a "hallucination" (lie). I'll need to do what I should have done in the first place and ask a search engine to make sure.

[–] Not_mikey@lemmy.dbzer0.com -1 points 1 day ago (1 children)

I think you are underestimating how accurate LLMs are because you probably don't use them much, and only see their mistakes posted for memes. No one's going to post the 99 times an LLM gives the correct answer, but the one time it says to put glue on pizza it's going to go viral. So if your only view of LLM output is from posts, you're going to think it's way worse than it is.

Even if you mark it down for incorrect answers, it's still going to beat most people. An LLM can score in the 90th percentile on the SAT, and around the 80th percentile on the LSAT. If you take into account that people taking those tests are more prepared for them than the general population, it's probably in the 99th percentile. It doesn't matter if you mark wrong answers negative if it's getting 95% of the answers correct while the average person is getting 50% correct.

People guess things too, and will also state things confidently that they don't completely know. If a person has a little bit of knowledge on a subject, they are likely to give confidently wrong answers due to the Dunning-Kruger effect. If you pick a random person, you're probably just as likely to get one of these people as you are to get a wrong answer from the LLM. So is it more useful to ask something that has a 95% chance of being correct and a 5% chance of being confidently wrong, or to ask a person who has a 50% chance of being correct (including those who guessed correctly), a 5% chance of being confidently wrong, and a 45% chance of saying "I don't know"?
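Read literally, the comparison above is just an expected-value calculation. A minimal sketch, using the commenter's assumed percentages (95/5/0 for the LLM, 50/5/45 for the random person) rather than any measured rates, and a simple scoring rule where "I don't know" costs nothing:

```python
# Expected-value sketch of the comparison in this comment.
# The percentages are the commenter's illustrative figures, not measurements.

def expected_score(p_correct, p_confidently_wrong, p_says_idk, wrong_penalty=1.0):
    """Score: correct = +1, confidently wrong = -wrong_penalty,
    'I don't know' = 0 (harmless; you just look elsewhere)."""
    assert abs(p_correct + p_confidently_wrong + p_says_idk - 1.0) < 1e-9
    return p_correct - wrong_penalty * p_confidently_wrong

llm = expected_score(0.95, 0.05, 0.00)    # assumed LLM rates
human = expected_score(0.50, 0.05, 0.45)  # assumed random-person rates
```

Under these assumptions the LLM scores 0.90 against the person's 0.45, though the gap narrows if confidently wrong answers are penalized more heavily than merely admitting ignorance, which is the other commenter's point.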

If you're doubting my percentages on the accuracy of LLMs, I'd encourage you to test them yourself. See if you can stump one on declarative knowledge; it's harder than the posts make it seem.

[–] CileTheSane@lemmy.ca 1 points 11 hours ago (1 children)

I think you are underestimating how accurate LLMs are because you probably don't use them much, and only see their mistakes posted for memes. No one's going to post the 99 times an LLM gives the correct answer, but the one time it says to put glue on pizza it's going to go viral. So if your only view of LLM output is from posts, you're going to think it's way worse than it is.

And look at what is on my feed just this morning: https://lemmy.world/post/44099386

It's not just that LLMs are shit. It's that people trust them way too much and are shocked when the predictable happens.

Even if you mark it down for incorrect answers, it's still going to beat most people. An LLM can score in the 90th percentile on the SAT, and around the 80th percentile on the LSAT.

And of course the AI bro goes for the "vibes" argument. You can't just state that as true without providing a source. Or did AI tell you it was true?

For example: fewer than 10% of tested AIs consistently properly answered that you need to drive to a car wash in order to wash your car: https://opper.ai/blog/car-wash-test

That's a question so far below anything on the SAT or LSAT and 90% of LLMs can't even get that right.

If you're doubting my percentages on the accuracy of LLMs I'd encourage you to test them yourself.

I've tried using LLMs. I don't use them for research, because why the fuck would I? Better, more efficient tools already exist for that. When I had something that a search engine can't help me with and LLMs are apparently "good at" it immediately proved itself to be worthless.

[–] Not_mikey@lemmy.dbzer0.com 0 points 8 hours ago (1 children)

Here's the source. It's from OpenAI, but it is peer reviewed. Here's another source that uses it as a baseline to compare the relative scores; according to the tables, in 2023 it got a 610, putting it around the 75th percentile, and that's just for math, which the OpenAI study showed it did about 5% worse on than its average, so ~80th percentile for a total score. Again, this is relative to students who are usually more prepared for the SAT than the general population, so it's still probably in the 90th percentile overall.

Again, the car wash example is not declarative knowledge. Like the pizza glue, that is knowledge derived from experience and reason, which I've said LLMs aren't the best at. The fact that they had to make a riddle to trip the AI up, if anything, shows how good it is. If it was as bad as you say it is, then anyone could easily trip it up and get it to give a wrong answer, and a study like that wouldn't be relevant. Seriously, if you think the LLM is so inaccurate, come up with your own test to stump it; it should be easy, by the way you talk about them.

[–] CileTheSane@lemmy.ca 1 points 8 hours ago

The fact that they had to make a riddle for the AI to trip it up

"I want to take my car to the car wash, should I walk or drive" is not a riddle. It requests basic understanding of what is being asked.