AI agents now have their own Reddit-style social network, and it's getting weird fast
(arstechnica.com)
LLMs can't be self-aware because they can't be self-reflective. They can't stop a lie once they've started one. They can't say "I don't know" unless that's the most likely response their training data would produce for a given prompt. That's why they crash out if you ask about a seahorse emoji: there is no reason or mind behind the generated text, despite how convincing it can be.
A hamster can't generate a seahorse emoji either.
I'm not stupid. I know how they work. I'm an animist, though. I realize everyone here thinks I'm a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn't.
LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.
I would prefer that we approach this technology with more humility. Not to protect the "humanity" of a bunch of math, but to protect ours.
Does that make sense?
wise words
we need to figure out how to (and how not to) embed AI into the world, i.e. where it meaningfully belongs and where it doesn't. that's what humanity is all about, after all: organizing the world in proper ways.
and if we fail that task, then what are we here for?
Yeah, ask it about anything you know is false but plausible, and watch it lie.