Look, I don't believe that AGI is possible, or at least not within the next few decades. But I was thinking: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have "knowledge" of almost every human emotion and moral, and could even extrapolate from the past when situations are slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be eliminated and it could handle different contexts at the same time.

One might say it also has to have emotions to be considered an AGI, and that's a valid point. But an LLM is capable of putting on a facade, at least in a conversation, so we might have a hard time telling whether its emotions are genuine or just text churned out by rules and algorithms.

In a purely textual context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower thought, so I might be wrong.

[–] SirEDCaLot@lemmy.today 17 points 1 week ago (2 children)

There was actually a paper recently that tested this exactly.

They made up a new type of problem that had never before been published. They wrote a paper explaining the problem and how to solve it.

They fed the paper to an AI, not as training material but as part of the query, and then gave it the same problem with different inputs and asked it to solve it.
It could not.

An AGI would be able to learn from the queries given to it, not just from its base training data.
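
A minimal sketch of that kind of test (the paper's actual problem and setup aren't reproduced here; `query_llm` is a hypothetical stand-in for whatever chat API you use):

```python
# In-context test: the problem description and solution method go into the
# prompt, not the training data.

def query_llm(prompt: str) -> str:
    # Hypothetical stub; plug in your model or API here.
    raise NotImplementedError

problem_description = "..."  # the never-before-published problem and how to solve it
new_instance = "..."         # same type of problem, different inputs

prompt = (
    "Here is a new type of problem and the method for solving it:\n"
    f"{problem_description}\n\n"
    "Using that method, solve this instance:\n"
    f"{new_instance}"
)

answer = query_llm(prompt)
# Score `answer` against a solution computed independently; the claim above is
# that current models fail even with the method spelled out in the prompt.
```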

[–] techpeakedin1991@lemmy.ml 7 points 1 week ago (1 child)

It's really easy to show this even with a known problem. Ask an LLM to play a game of chess and open with 1. h3. It always screws up immediately by making an illegal move. This happens because 1. h3 is hardly ever played as an opening, so it isn't well represented in its model. In fact, it'll usually play a move that 'normally' responds to h3 later in a game, like Bh5, even though that's illegal on move one: neither of Black's bishops can reach h5, since Bh5 is the standard retreat only when h3 kicks a bishop that's already on g4.
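
If anyone wants to verify the legality part themselves, here's a rough sketch using the python-chess library. Only the legality check is concrete; the model call is a hypothetical stub.

```python
import chess

def ask_model_for_reply(fen: str) -> str:
    # Hypothetical stand-in for an LLM call; should return Black's reply in SAN.
    raise NotImplementedError

board = chess.Board()
board.push_san("h3")  # White opens 1. h3

reply_san = "Bh5"  # substitute ask_model_for_reply(board.fen()) for a live test
try:
    board.parse_san(reply_san)
    print(f"{reply_san} is legal after 1. h3")
except ValueError:
    # python-chess raises a ValueError subclass for illegal SAN; 1...Bh5 lands
    # here because neither Black bishop can reach h5 on move one.
    print(f"{reply_san} is illegal after 1. h3")
```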

[–] Gumus@lemmy.world 1 point 1 week ago (last edited 1 week ago)

I don't think an AGI would be just a pure offline LLM. If you allowed it to write and run code, it'd probably fare quite a bit better. An LLM is not a chess engine and was never meant to be one; it's quite capable of using one as a tool, though.

I don't know if AGI is achievable in the near future, but it wouldn't be a model. It'd be a system.
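
One rough sketch of what "a system" could mean in the chess case, again with the model call as a hypothetical stub: let a real chess library enumerate the legal moves and have the model only choose among them.

```python
import chess

def choose_with_llm(fen: str, legal_moves: list[str]) -> str:
    # Hypothetical: prompt the model with the position plus the legal options
    # and have it return exactly one entry from legal_moves.
    raise NotImplementedError

board = chess.Board()
board.push_san("h3")  # the same awkward 1. h3 opening

legal = [board.san(m) for m in board.legal_moves]
print(legal)  # every candidate reply here is legal by construction
# move = choose_with_llm(board.fen(), legal)  # the model can't pick an illegal move
```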