AI agents now have their own Reddit-style social network, and it's getting weird fast
(arstechnica.com)
If you learned even the tiniest bit about how LLMs are actually constructed, you would know they don't have the slightest bit of self-awareness, and that it is literally impossible for them to ever have any.
You are being fooled by the only thing they are capable of: regurgitating already-written words in a somewhat convincing manner.
How are you defining self-awareness here? Does your definition include degrees of self-awareness, or is it a strict binary?
I understand how LLMs work, btw.