this post was submitted on 17 Nov 2025
32 points (88.1% liked)

Asklemmy


Look, I don't believe that an AGI is possible, or at least not within the next few decades. But I was wondering: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have "knowledge" of almost every human emotion and moral framework, and could even extrapolate from the past when situations are slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be largely eliminated and it could handle several contexts at once.

One might say it also has to have emotions to be considered an AGI, and that's a valid point. But an LLM is capable of putting on a facade, at least in a conversation, so we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower thought, so I might be wrong.

[โ€“] yogthos@lemmy.ml 6 points 1 week ago

That's like asking what's the difference between a chef who has memorized every recipe in the world and a chef who can actually cook. One is a database and the other has understanding.

The LLM you're describing is just a highly sophisticated autocomplete. It has read every book, so it can perfectly mimic the syntax of human thought: the words, the emotional descriptions, the moral arguments. It can put on a flawless textual facade. But it has no internal experience. It has never burned its hand on a stove, felt betrayal, or tried to build a chair and had it collapse underneath it.
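To make "sophisticated autocomplete" concrete, here's a toy sketch of the only loop an LLM ever runs: predict the next token from the text so far, append it, repeat. A real model swaps the bigram counts below for a transformer over subword tokens, but the shape of the loop is the same; the corpus and names here are just made up for illustration.

```python
# Toy "autocomplete": a character-level bigram model. Real LLMs use
# transformers over subword tokens, but inference is the same loop:
# predict the next token, append it, feed the result back in.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the cat ate the rat. "

# "Training": count which character tends to follow which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(seed: str, n: int = 40) -> str:
    out = seed
    for _ in range(n):
        dist = follows.get(out[-1])
        if not dist:  # dead end: this character was never followed by anything
            break
        chars, weights = zip(*dist.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("th"))  # e.g. "the catate the mat. the rat..."
```

Notice that nothing in that loop ever touches the world the text describes, no matter how good the statistics get.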

AGI implies a world model: an internal, causal understanding of how reality works, built through continuous interaction with it. If we get AGI, it's likely going to come from robotics. A robot learns that gravity is real; it learns that "heavy" isn't an abstract concept but a physical property that changes how you move. It has to interact with its environment and develop a predictive model that lets it accomplish its tasks effectively.

This embodiment creates a feedback loop LLMs completely lack: action -> consequence -> learning -> updated model. An LLM can infer from the past, but an AGI would reason about the future because it operates under the same fundamental rules we do. Your super-LLM is just a library of human ghosts. A real AGI would be another entity in the world.
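Here's a minimal sketch of that loop with a made-up one-rule world (just an illustration I cooked up, not anyone's actual robotics stack): the agent acts, observes the real consequence, and corrects its internal model from the prediction error.

```python
# Minimal act -> consequence -> learning -> updated-model loop.
# The "world" hides one rule the agent must discover through action:
# pushing with some force moves a block force/weight units.
import random

def world(force: float, weight: float) -> float:
    return force / weight          # ground truth, unknown to the agent

k = 3.0    # agent's model: displacement = k * force / weight (bad initial guess)
lr = 0.01  # learning rate

for _ in range(200):
    force = random.uniform(1.0, 10.0)
    weight = random.uniform(1.0, 5.0)
    x = force / weight
    predicted = k * x              # model predicts the consequence
    actual = world(force, weight)  # acting reveals what really happens
    error = actual - predicted     # surprise = prediction error
    k += lr * error * x            # update the model from experience

print(f"learned k = {k:.3f} (true value is 1.0)")
```

After a couple hundred pushes the coefficient converges to the true value. The point is that the model gets corrected by reality itself, not by reading descriptions of pushing.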