this post was submitted on 07 Mar 2026
610 points (98.6% liked)

Technology


Over the past few weeks, several US banks have pulled back from lending to Oracle for the expansion of its AI data centres, according to a report.

[–] CileTheSane@lemmy.ca 1 points 3 hours ago

Whether an LLM can determine truth depends on your definition of truth

Of course someone who doesn't believe "truth" exists thinks LLMs are just fine. You have to not believe things can be true in order to find their output acceptable.

An LLM can derive this sort of truth by determining the consensus of its training data, assuming that the training data comes from trustworthy sources, or that the more trustworthy sources are weighted more heavily.
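The "consensus weighted by trustworthiness" idea can be illustrated with a toy sketch. This is not how an LLM actually resolves conflicting training data (it has no explicit claim table); the claims, sources, and trust weights below are invented for illustration only:

```python
# Toy sketch: answering a question by taking a trust-weighted consensus
# over conflicting claims found in hypothetical training data.
from collections import defaultdict

def weighted_consensus(claims):
    """claims: list of (answer, trust_weight) pairs.
    Returns the answer with the highest total trust weight."""
    totals = defaultdict(float)
    for answer, weight in claims:
        totals[answer] += weight
    return max(totals, key=totals.get)

# Invented example: three sources agree, one low-trust source disagrees.
claims = [
    ("water boils at 100 C", 0.9),  # textbook
    ("water boils at 100 C", 0.8),  # encyclopedia
    ("water boils at 50 C", 0.2),   # random forum post
]
print(weighted_consensus(claims))  # -> water boils at 100 C
```

The point of the sketch is the assumption it bakes in: if the low-trust sources outnumber or outweigh the trustworthy ones, the "consensus" answer is simply wrong, which is the failure mode the rest of the comment describes.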

Every week I see a new post of an LLM being blatantly wrong. An LLM told people to add glue to pizza to make the cheese stick together.

"They have improved the models since then..." Last week the American military used "AI" and it targeted a school as a military structure. The models are full of shit; the makers just manually patch out the blatantly incorrect answers whenever they make the rounds, and there's always more blatantly incorrect shit to be found.