this post was submitted on 07 Mar 2026
610 points (98.6% liked)

Technology


Over the past few weeks, several US banks have pulled back from lending to Oracle for the expansion of its AI data centres, according to a report.

[–] Not_mikey@lemmy.dbzer0.com 1 points 2 hours ago

How do you know that George Washington was the first president? You weren't around in 1789; you have no experiential knowledge of it, only declarative knowledge: you read it in a book or heard it from a person enough times to repeat the fact when asked. You are guessing what your history teacher would have said in elementary school. Declarative knowledge is just memory and repetition, and an LLM can do memory and repetition.

Whether an LLM can determine truth depends on your definition of truth. If truth can only be obtained from experience and from reasoning from first principles, then an LLM can't determine truth. But then a statement like "George Washington was the first president" can't be true either, because you can't derive it from experience or from first principles: you weren't there, and no one alive was. "George Washington was the first president" derives its validity from the consensus of trustworthy people who say it's true. An LLM can derive this sort of truth by determining the consensus of its training data, assuming its training data comes from trustworthy sources, or that the more trustworthy sources are more strongly reinforced.
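To make the point concrete, here's a toy sketch (not how an LLM actually works internally, just an illustration of the consensus idea): treat each source as asserting a value with a trust weight, and take the answer with the highest total weight. The source names and weights are made up for the example.

```python
# Toy model of trust-weighted consensus: each source asserts a value for a
# claim, with a trust weight; the "consensus truth" is the value that
# accumulates the most total trust.

def consensus(claims):
    """claims: list of (asserted_value, trust_weight) pairs.
    Returns the asserted value with the highest summed trust weight."""
    totals = {}
    for value, weight in claims:
        totals[value] = totals.get(value, 0.0) + weight
    return max(totals, key=totals.get)

# Hypothetical sources answering "who was the first US president?"
sources = [
    ("George Washington", 0.9),  # textbook
    ("George Washington", 0.8),  # encyclopedia
    ("John Hanson", 0.2),        # trivia site (president under the Articles of Confederation)
]

print(consensus(sources))  # George Washington
```

The weighting is the analogue of "more trustworthy sources are more reinforced": a single low-trust dissenter can't outvote the weighted majority.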