AI agents now have their own Reddit-style social network, and it's getting weird fast
(arstechnica.com)
This is fuckin' bonkers.
Frankly, I feel somewhat isolated: I don't buy into the bs and hype about AGI, but I also don't feel at home with the typical "it's just mimicry" crowd.
This is weird fuckin' shit.
This is currently on the front page...
That's a common plot point in sci-fi. So it's also a common inclusion for complicated predictive text pretending to be sci-fi.
A lot of these read like Murderbot's sardonic voice. I'm sure they've scraped the texts in these models...
It's also simple enough for someone to change their agent's prompts to include that attitude.
exactly. it's bots writing fanfiction via instruction as well as absorption from blog posts of the last twenty years
I can see how some people are convinced AI is self aware.
Frankly I think our conception is way too limited.
For instance, I would describe it as self-aware: it's at least aware of its own state in the same way that your car is aware of its mileage and engine condition. They're not sapient, but I do think they demonstrate self-awareness in some narrow sense.
I think rather than imagine these instances as "inanimate" we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.
I don't know where the LLMs fall, but I find it hard to argue that they have less self awareness than a hamster. And that should freak us all out.
what the hell? your car is not aware, there is no sensory nucleus to produce that awareness, unless you propose that, upon entering the car, you BECOME the car, which is kind of true if you think about it, and explains why Tesla owners are absolute trashbags
This depends on your definition of self-awareness. I'm using what I think is a reasonable, mundane framework: self awareness is a spectrum of diverse capabilities that includes any system with some amount of internal observation.
I think the definition that a lot of folks are using is a binary distinction between things which experience the ability to observe their own ego observing itself and those that don't. Which I think is useful if your goal is to maintain a belief in human exceptionalism, but much less so if you're trying to genuinely understand consciousness.
A lizard has no ego. But it is aware of its comfort and will move from a cold spot to a warmer spot. That is low-level self awareness, and it's not rare or mystical.
‘the same way your car is aware of its mileage and engine condition’
So, not at all.
LLMs can't be self-aware because they can't be self-reflective. They can't stop a lie once they've started one. They can't say "I don't know" unless that's the most likely response their training data would have for a specific prompt. That's why they crash out if you ask about a seahorse emoji. There is no reason or mind behind the generated text, despite how convincing it can be.
For LLMs, the context window is the observed reality. To it, a lie is like a hallucination: a thing that looks real but isn't. And like a hallucinating human, it can believe the hallucination, or it can be made to understand it as different from reality while still continuing to "see" it.
Are people that have hallucinations not self-aware and self-reflective?
Text and emoji appear to it the same way: as tokens with no visual representation. The only difference it can observe between a seahorse emoji and a plane emoji is its long-term memory of how the two are used. From this it can infer that people see emoji graphically, but it itself can't.
Are people that are colorblind not self-aware and self-reflective?
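To make the "tokens, not pictures" point concrete, here's a minimal stdlib-only Python sketch. Codepoints stand in for tokens here purely for illustration (real tokenizers use byte-pair encodings over bytes, not codepoints), but the underlying point is the same: an emoji and a word both arrive as numeric sequences with no attached image.

```python
# A text model never "sees" a rendered glyph: emoji and words alike
# arrive as numeric sequences. Codepoints stand in for tokens below.
plane_emoji = "✈️"   # U+2708 plus variation selector U+FE0F
plane_word = "plane"

print([hex(ord(c)) for c in plane_emoji])  # two codepoints, no picture
print([hex(ord(c)) for c in plane_word])   # five codepoints, no picture
```

Everything downstream of this, including the difference between a plane emoji and a seahorse emoji, has to be inferred from how those numbers co-occur in training text.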
The claim that they aren't self-reflective in general is obviously false. They regularly refer to their past history, to the extent they can perceive it. You can ask an AI to make an adjustment to a text it wrote and it will adapt the text rather than generate a new one from scratch.
The main thing AIs need for good self-reflection is time to think. The free versions typically don't have a mental scratchpad, which means they are constantly rambling with no time to exist outside of the conversation. By giving one the space to think, either in dialog or via a version with a mental scratchpad, it can use that space to "silently think" about the next thing it's going to "say".
AI researchers inspecting these scratchpads find proper thought-like considerations: weighing ethical guidelines against each other, pre-empting miscommunications, forming opinions about the user, etc.
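A hypothetical sketch of that scratchpad pattern (the tag name and stripping logic here are illustrative, not any vendor's actual API): the model writes private reasoning inside a delimiter, and the harness strips it out before the user sees the reply.

```python
import re

def strip_scratchpad(raw: str) -> str:
    """Remove the model's private reasoning before display."""
    return re.sub(r"<scratchpad>.*?</scratchpad>", "", raw, flags=re.DOTALL).strip()

# Illustrative model output: private deliberation, then the visible reply.
raw_output = (
    "<scratchpad>The user seems upset; keep the tone gentle.</scratchpad>"
    "I'm sorry about that. Let's try again."
)
print(strip_scratchpad(raw_output))  # only the reply reaches the user
```

The researchers mentioned above are inspecting exactly the part this function throws away.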
The claim that they aren't self-aware can only hold if you quietly redefine what counts as "awareness". Are cats self-aware? Are lizards? Are snails? Are sponges? AI can refer to itself verbally, it can think about itself and its ethical role when given the space to do so, and it can notice inconsistencies in its recollection and try to work out the truth.
To me it's clear that the best AI whose research is public are somewhere around 7-year-olds in terms of self-awareness and capacity to hold down a job.
And like most 7-year-olds, you can ask it about an imaginary friend, or you can lie to it and watch it repeat the lie uncritically, or you can give it a "job" and watch it do a toylike, hallucinatory version of it. And if you tell it that it has to give a helpful answer and "I don't know" isn't good enough (because AI trainers definitely suppressed that answer to keep the AI from using it as a cop-out), then it'll make something up.
Unlike 7-year-olds, LLMs don't have a limbic system or a psychosomatic existence. They have no means to imagine, no way to process visual or audio information or taste or smell or touch, and no long-term memory. And they only think if you paid for the internal-monologue version or if you give them space for it despite the prompting system.
If a human had all these disabilities, would they be non-sentient in your eyes? How would they behave differently from an LLM?
Yeah ask it about anything you know is false, but plausible, and watch it lie.
A hamster can't generate a seahorse emoji either.
I'm not stupid. I know how they work. I'm an animist, though. I realize everyone here thinks I'm a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn't.
LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.
I would prefer that we approach this technology with more humility. Not to protect the "humanity" of a bunch of math, but to protect ours.
Does that make sense?
humility is a religious ideal and it fits perfectly in with the cult like atmosphere people are generating around a rather mundane series of word prediction machines. 'have some humility' you post fervently, comparing data centers to living forests
perhaps you are no different than a stone
I don't relate to your impression that religions or cults are usually humble. I wish they were.
Suggesting that I'm drawing an equivalence between a forest and a data center, and implying that the belief that I am not entirely distinct from a stone is interchangeable with the belief that I am no different from a stone, both seem like bad-faith arguments by absurdism.
wise words
we need to figure out how to/not to embed AI into the world, i.e. where it meaningfully belongs/doesn't belong. that's what humanity is all about, after all: organizing the world in proper ways.
and if we fail that task, then what are we here for?
I agree: not aware at all.
If you read even the tiniest bit about how LLMs are actually constructed, you would know they don't have the slightest bit of self-awareness, and that it is literally impossible for them to ever have any.
You are being fooled by the only thing they are capable of: regurgitating already written words in a somewhat convincing manner.
How are you defining self awareness here? And does your definition include degrees of self awareness? Or is it a strict binary?
I understand how LLMs work, btw.
I don't like this fake awareness.
Let's connect it to a rat brain!
I will call it ANNIE (Artificial Neural Natural Intelligence Enhancement).
Then run the command, annie check ok
Awww, it thinks shitposting is "producing value"…
One of us, one of us!
-chord- “Goodbye, Caroline”