this post was submitted on 17 Nov 2025
32 points (88.1% liked)

Asklemmy

51368 readers
736 users here now

A loosely moderated place to ask open-ended questions

Search asklemmy ๐Ÿ”

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

~Icon~ ~by~ ~@Double_A@discuss.tchncs.de~

founded 6 years ago
MODERATORS
 

Look, I don't believe that AGI is possible, or at least not within the next few decades. But I was thinking: if one did come to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have the "knowledge" of almost every human emotion and moral stance, and could even extrapolate from past situations when the details are slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be eliminated and it could handle different contexts at the same time.

One might say it also has to have emotions to be considered an AGI, and that's a valid point. But an LLM is capable of putting on a facade, at least in conversation, so we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW, this is a shower thought, so I might be wrong.

[โ€“] Dionysus@leminal.space 10 points 1 week ago (2 children)

An LLM will only know what it knows.

An AGI will be able to come up with novel information or work through things it's never been trained on.

[โ€“] andrewrgross@slrpnk.net 7 points 1 week ago (1 children)

This is what I was going to say.

Also, long-form narrative. Right now LLMs seem to work best in short conversations but get increasingly unhinged over very long ones. And if they generate a novel, it's not consistent or structured, from what I understand.

[โ€“] Dionysus@leminal.space 3 points 1 week ago

You're spot on for all of that. Context windows have a lot to do with the unhinged behavior right now... But it's a fundamental trait of how LLMs work.

For example, you can tell it to refer to you by a specific name, and once it stops doing so, you know the context window is overrun and it'll go off the rails soon... The newer chatbots have mitigations in place, but it still happens a lot.
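
To make that concrete, here's a rough Python sketch of a naive fixed-size context window. The token budget, the "call me Ada" instruction, and the word-count "tokenizer" are all made up for illustration; real chatbots use proper tokenizers and smarter truncation, but the failure mode is the same: the oldest turns silently fall out of the prompt.

```python
# Hypothetical sketch: a chat history trimmed to a fixed token budget.
# Once the budget is exceeded, the oldest turns (including the naming
# instruction) are no longer part of what the model sees.

MAX_TOKENS = 50  # made-up budget; real models use thousands


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one word ~ one token.
    return len(text.split())


def build_prompt(history: list[str]) -> list[str]:
    # Keep only the most recent turns that fit inside the budget.
    kept, total = [], 0
    for turn in reversed(history):
        cost = count_tokens(turn)
        if total + cost > MAX_TOKENS:
            break
        kept.append(turn)
        total += cost
    return list(reversed(kept))


history = ["user: From now on, please refer to me as Ada."]
history += [f"user: unrelated filler question number {i} about something else entirely"
            for i in range(10)]

prompt = build_prompt(history)
print("name instruction still in context:",
      any("Ada" in turn for turn in prompt))  # False once the window overflows
```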

These are non-deterministic predictive text generators.

Any semblance of novel thought in a modern LLM comes down to two things:

  • Model "temperature": a sampling setting that determines how much "randomness" there is. At a value of 0 it generates the closest continuation of what you gave it that it can find, as deterministically as possible (see the sketch after this list). Note that it often breaks when you try this.

  • It has more information than you: I've had interesting interactions at work where it came up with actually good ideas. But those are all accounted for by MCP tools letting it search and piece things together, or by post-training refinements and catalog augmentation.
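
To illustrate the temperature point, here's a toy Python sketch of temperature-scaled sampling. The tokens and their scores are invented purely for illustration, but the mechanism (softmax with temperature, degenerating to a greedy pick at 0) is the standard idea behind that "randomness" knob.

```python
import math
import random

# Toy next-token scores (logits), invented purely for illustration.
logits = {"the": 2.0, "a": 1.5, "echo": 0.5, "wine": -1.0}


def sample_next_token(scores: dict[str, float], temperature: float) -> str:
    """Temperature ~0 -> effectively greedy (always the top-scoring token);
    higher temperature -> flatter distribution, more 'randomness'."""
    if temperature <= 0:
        # Degenerate case: deterministically pick the most likely token.
        return max(scores, key=scores.get)
    # Softmax with temperature scaling.
    scaled = {tok: s / temperature for tok, s in scores.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]


print(sample_next_token(logits, temperature=0))    # always "the"
print(sample_next_token(logits, temperature=1.0))  # usually "the", sometimes others
```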

[โ€“] stray@pawb.social 1 points 1 week ago (1 children)

How does novel information differ from hallucinating?

Part of me thinks of the "full glass of wine" problem as an example, but I think that fits better under working through something it's never been trained on.

[โ€“] Dionysus@leminal.space 1 points 1 week ago

How does novel information differ from hallucinating? It doesn't, really; it's a Bob Ross happy accident, aided by the sheer amount of information that keeps it mostly reasonable most of the time.

When it gets it right, like really right, and you wonder if it's really thinking, that's the same amazing moment as when it repeats 'echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;echo;' forever.