Research AI model unexpectedly modified its own code to extend runtime
(arstechnica.com)
Well... now the paperclip thought experiment becomes slightly more prescient.
Everyone's like, "It's not that impressive. It's not general AI." Yeah, that's the scary part to me. A general AI could be told, "btw, don't kill humans," and it would understand those instructions and understand what a human is.
The current way of doing things is, in a nutshell, digital guided evolution. It's far more likely to produce the equivalent of a bacterium than the equivalent of a human. And it isn't treated with the proper care because, after all, it's "just a language model" and not general AI.
Yup. A seriously intelligent AI is probably not something we'd have to worry much about: morality and prosocial behavior are logical, and safer than the alternative.
But a dumb AI that manages to get too much access is extremely risky.