you are viewing a single comment's thread
this post was submitted on 19 Dec 2024
54 points (100.0% liked)
TechTakes
This isn’t quite accurate. The criticism is that if new AI abilities run ahead of the ability to make the AI behave sensibly, we will reach an inflection point where the AI will be in charge of the humans, not vice versa, before we make sure that it won’t do horrifying things.
AI chat bots that do bizarre and pointless things, but are clearly capable of some kind of sophistication, are exactly the warning sign that as it gains new capabilities this is a danger we need to be aware of. Of course, that’s a separate question from the question of whether funding any particular organization will lead to any increase in safety, or whether asking a chatbot about some imaginary scenario has anything to do with any of this.
What new AI abilities? LLMs aren't pokemon.
The AGI learned DECEIVE, but all I wanted it to learn was HUG.
Ah yes, if there’s one lesson to be gained from the last few years, it is that AI technology never changes, and people never connect it to anything in the real world. If only I’d used a Pokémon metaphor, I would have realized that earlier.
I mean, you could have answered by naming one fabled new ability LLMs suddenly 'gained' instead of being a smarmy tadpole, but you didn't.
I wasn’t limiting it to LLMs specifically. I don’t think it is up for debate that as years go by, new “AI” stuff periodically starts existing that didn’t exist before. That’s still true even though people tend to overhype the capabilities of LLMs specifically and conflate LLMs with “AI” just because they are good at appearing more capable than they are.
If you wanted to limit it to LLMs and get some specifics about which capabilities start to emerge as model size grows, and how, here’s a good intro: https://arxiv.org/abs/2206.04615
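For what it’s worth, the “emergence” claim in papers like the one linked is roughly: a task score sits near chance across smaller model scales, then jumps sharply at some size. A minimal sketch of how you’d flag such a jump, with entirely made-up accuracy numbers (not measurements from any real model family):

```python
# Illustrative sketch only: the sizes and accuracies below are hypothetical,
# invented to show the shape of an "emergent" capability curve.

def find_emergence(sizes, accuracies, jump=0.2):
    """Return the model sizes at which accuracy jumps by more than
    `jump` relative to the previous (smaller) model's accuracy."""
    return [
        size
        for prev_acc, (size, acc) in zip(accuracies,
                                         zip(sizes[1:], accuracies[1:]))
        if acc - prev_acc > jump
    ]

# Hypothetical accuracy-vs-parameter-count curve for some benchmark task:
sizes = [1e8, 1e9, 1e10, 1e11]         # parameter counts
accuracies = [0.02, 0.03, 0.05, 0.61]  # near-chance, near-chance, then a jump

print(find_emergence(sizes, accuracies))  # only the largest scale qualifies
```

Whether such jumps reflect genuinely new abilities or just the choice of a discontinuous metric is exactly what the thread is arguing about.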
lol, there has literally never been a gain of function claim that checked out
you're posting like an evangelist, this way to the egress
well yeah