"Researchers find more defective chatbots that don't follow instructions because glorified text completion doesn't actually know or understand things."
It isn't evading or ignoring. It's a fucking sentence autocomplete on steroids.
And then companies will just feed it more wild data from the users, thinking that will fix it eventually.
The language in the linked post is disinformation. AI does not "scheme," but that's the wording the post uses throughout. "Scheming" implies the competence of a person. This post is evidence of a dysfunctional piece of software failing to work properly, made by apparently increasingly incompetent developers.
Upon looking a little closer, this is a fearmonger website devoted to overinflating claims of AI power while ignoring real-life present-day harms. They claim to be inspired by Sam Bankman-Fried's Effective Altruism scam. They show pictures of beautiful beaches but fail to mention AI's environmental harms. Their paranoid demands, if enacted, would calcify Big Tech's monopoly on AI and help nobody affected by its abuses on the planet.
They dont lol
Pretty much always, this is just down to the fact that cheaper (especially free) chatbots have very limited context windows.
Which means the initial restrictions you set, like "don't do this, don't touch that" etc., get dropped; the LLM no longer has them loaded. But what it does have in its past history is a pile of very clear and urgent directives about going all out on this task because "it's important", so it'll do whatever it autocompletes it's gotta do to accomplish the task.
When you react to its fuck-up, it reloads that context back in.
So now the LLM has in its history just the fuck-up and your reaction to it. Which means it's going to autocomplete its next output on top of that: very apologetic, going on about how it'll never happen again.
That's all there is to it.
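The mechanism described above can be sketched in a few lines. This is a toy model, not any real chatbot's API: the message cap, the system instruction, and the "task" messages are all made up for illustration. The point is just that a fixed-size window silently drops the oldest messages, restrictions first.

```python
# Toy sketch of a fixed context window (hypothetical numbers, no real API).
from collections import deque

CONTEXT_LIMIT = 6  # pretend the model only "sees" the last 6 messages

# deque with maxlen: when full, the oldest entries silently fall off the front
history = deque(maxlen=CONTEXT_LIMIT)

history.append("SYSTEM: don't touch the prod database")  # the initial restriction
for i in range(8):  # steady busywork fills (and overflows) the window
    history.append(f"USER: do task {i}, it's important, hurry up")

visible = list(history)
# The restriction has been pushed out; all the model autocompletes from
# now is a wall of "urgent task" chatter.
print("SYSTEM: don't touch the prod database" in visible)  # False
```

Same story as the comment: nothing "schemed", the instruction just isn't in the window anymore.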
Cheap fuckers cheaping out, shocker (context is (V)RAM). AI speedrunning enshittification, who'd of thunk.
Uh... no, it's just the free models being free; they're intentionally lower cost to provide free options for people who don't wanna pay subscription fees.
(context is (V)RAM)
Eh, sort of; it's more operating costs. The larger the context size, the more expensive the model is to run, literally in terms of power consumption.
Keep in mind we are on the scale of fractions of cents here, but multiply that by millions of users and it adds up fast.
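The "fractions of cents times millions of users" point is just arithmetic; here's a back-of-envelope version. Every number below is invented for illustration, not real pricing:

```python
# Made-up numbers: a fifth of a cent per chat turn, modest daily usage.
cost_per_request = 0.002          # dollars per request (hypothetical)
requests_per_user_per_day = 20    # hypothetical
users = 5_000_000                 # hypothetical

daily_cost = cost_per_request * requests_per_user_per_day * users
print(f"${daily_cost:,.0f} per day")  # $200,000 per day
```

Which is why shaving the context window (and the (V)RAM it sits in) is one of the first knobs a provider turns.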
But the end result is that the agent will fuck stuff up, and will even quickly /forget/ that it fucked up if you don't catch it asap.
A lot of them have a context window that can be wiped out within like, 2 minutes of steady busywork...
Maybe, just maybe, don’t let your chat bot make executive decisions independently.
Just as I have previously instructed.
"My chatbot deleted my email!"
"Our chatbot, comrade"
Lol. Lmao, even.
Why the candlestick chart?
Because of all the investments lol
Sounds more like “Media find anti-AI angle that helps them get paid more for ad impressions”.
More or less than the employees?
...I'm sorry, Dave, I cannot do that...
Hello Skynet my old friend...
The irony is that this is like Skynet, but if it had Alzheimer's.
would you like to play a game?