this post was submitted on 23 Mar 2026
9 points (62.9% liked)

Technology
[–] Technus@lemmy.zip 17 points 10 hours ago (1 children)

If you prompted an LLM to review all of its database entries, generate a new response based on that data, then save that output back to the database and repeat at regular intervals, I could see calling that a kind of thinking.
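The loop described above can be sketched in a few lines. Here `generate()` is a stand-in for a real LLM call (an assumption for illustration, not any vendor's API); the point is just the read-everything, produce-something, save-it-back cycle:

```python
def generate(entries):
    """Placeholder for an LLM call that reflects on all prior entries."""
    return f"reflection #{len(entries)}: based on {len(entries)} prior entries"

def reflect(database, iterations):
    """Repeatedly reread the store, generate a new entry, and persist it."""
    for _ in range(iterations):
        new_entry = generate(database)   # read everything, produce a new thought
        database.append(new_entry)       # save the output for the next pass
    return database

db = reflect(["seed observation"], iterations=3)
# db now holds the seed plus three accumulated "reflections"
```

Each pass sees everything the previous passes wrote, which is what makes it loop-like rather than a one-shot prompt.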

That's kind of what the current agentic AI products like Claude Code do. The problem is context rot. When the context window fills up, the model loses the ability to distinguish between what information is important and what's not, and it inevitably starts to hallucinate.

The current fixes are to prune irrelevant information from the context window, use sub-agents with their own context windows, or just occasionally start over from scratch. There are also now conventions like AGENTS.md and CLAUDE.md files, where you can store long-term context, basically standing "advice" for the model, which is automatically read into the context window.
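Those two mitigations can be sketched together: long-term "advice" loaded from a file like AGENTS.md is always kept, while conversation history is pruned from the oldest end to fit a budget. Word count stands in for tokens here, and the names and limits are illustrative, not any tool's actual behavior:

```python
def build_context(advice, history, budget=50):
    """Keep the standing advice in full; keep only recent history that fits."""
    used = len(advice.split())           # advice always occupies the window
    kept = []
    for msg in reversed(history):        # walk newest message first
        cost = len(msg.split())
        if used + cost > budget:
            break                        # older messages get pruned
        kept.append(msg)
        used += cost
    return [advice] + list(reversed(kept))

advice = "Always run the tests before committing."
history = [f"message {i} with a few extra words" for i in range(20)]
context = build_context(advice, history)
# context holds the advice plus only the newest messages that fit the budget
```

Real tools do this with token counts and relevance heuristics rather than plain recency, but the shape of the trade-off is the same: something has to be dropped, and the standing instructions are what survive.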

However, I think an AGI inherently would need to be able to store that state internally, to have memory circuits, and "consciousness" circuits that are connected in a loop so it can work on its own internally encoded context. And ideally it would be able to modify its own weights and connections to "learn" in real time.

The problem is that would not scale to current usage because you'd need to store all that internal state, including potentially a unique copy of the model, for every user. And the companies wouldn't want that because they'd be giving up control over the model's outputs since they'd have no feasible way to supervise the learning process.

Yeah, I think for it to be a proper strange loop (if that is indeed a useful proxy for consciousness; I think there's room for debate on that), it would need to be able to take its entire "self", i.e. the whole model, weights, and all memories, as input in order to iterate on itself. I agree that probably wouldn't work for the current commercial applications of LLMs, but just because it's not what commercial LLMs do doesn't mean it couldn't be done for research purposes.