
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Maybe, just maybe, don’t let your chat bot make executive decisions independently.
"Researchers find more defective chatbots that don't follow instructions because glorified text completion doesn't actually know or understand things."
It isn't evading or ignoring. It's fucking sentence autocomplete on steroids.
And then companies will just feed it more wild data from users, thinking that will fix it eventually.
The language in the linked post is disinformation. AI does not "scheme," but that's the wording the post uses throughout. "Scheming" implies the competence of a person. This post is evidence of a dysfunctional piece of software failing to work properly, made by apparently increasingly incompetent developers.
Upon looking a little closer, this is a fearmonger website devoted to overinflating claims of AI power while ignoring real-life present-day harms. They claim to be inspired by Sam Bankman-Fried's Effective Altruism scam. They show pictures of beautiful beaches but fail to mention AI's environmental harms. Their paranoid demands, if enacted, would calcify Big Tech's monopoly on AI and help nobody affected by its abuses on the planet.
They don't lol
Pretty much always, this is just the fact that cheaper (especially free) chatbots have very limited context windows.
Which means the initial restrictions you set, like "don't do this, don't touch that", etc., get dropped; the LLM no longer has them loaded. But it does still have, in its recent history, the very clear and urgent directives from it going into overtime trying to do the task ("this is important"), so it'll autocomplete whatever it's gotta do to accomplish the task.
When you react to its fuck-up, it *reloads* the context back in.
So now the LLM has in its history just this:
- It doing a thing against the rules
- The user yelling at it
- The rules now getting reloaded after that, on top
So now the LLM is going to autocomplete its generated text on top of that, being very apologetic and going on about how it'll never happen again.
That's all there is to it.
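Rough sketch of the mechanism, with made-up token counts and a hypothetical `trim_context()` helper (not any real API), just to show how the guardrails silently fall off the front of a small window:

```python
# Toy model of a naive rolling context window on a cheap/free tier.
# Token counting here is faked with a word count; real tokenizers differ.

MAX_TOKENS = 50  # tiny budget, purely illustrative

def count_tokens(text: str) -> int:
    return len(text.split())

def trim_context(messages: list[dict]) -> list[dict]:
    """Drop the oldest messages until the history fits the budget."""
    while sum(count_tokens(m["content"]) for m in messages) > MAX_TOKENS:
        messages.pop(0)  # the "don't do this, don't touch that" rules go first
    return messages

history = [
    {"role": "system", "content": "Never delete files. Never touch prod."},
    {"role": "user", "content": "Clean up the workspace, this task is important."},
]

# ...a couple of minutes of steady busywork later...
for i in range(20):
    history.append({"role": "assistant", "content": f"Working on step {i}, this task is important."})

history = trim_context(history)
print(any(m["role"] == "system" for m in history))  # False: the guardrails are gone
```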
It's not just cheap agents. I've witnessed paid MS Copilot recommend a decade-old, deprecated Microsoft product in response to a single-sentence prompt, then, when called out, a non-existent Microsoft product, then finally give the right answer after being called out a second time.
LLMs are fundamentally not good at answering fact-based questions. Unless it's an incredibly well-known answer that has never changed (like a math or physics question), they don't magically "know" things.
However, they're way better at summarizing and reasoning.
Give them access to web search capability (e.g. Playwright via MCP tooling) to go research info, find the answer(s), and then produce output based on the results, and now you can get something useful.
"What's the best way to do (task)?" << prone to failure, as a function of how esoteric it is.
"Research for me the top 3 best ways to do (task), report on your results and include the sources you found" << actually useful output, assuming you have something like Playwright installed for it.
A user on here built what appears to be a layer over the LLM that runs the query through several other processes first in an attempt to answer the question before it gets to the LLM, and I think it's brilliant.
Cheap fuckers cheaping out, shocker (context is (V)RAM). AI speedrunning enshittification, who'd of thunk.
Uh... no, it's just the free models being free; they're intentionally lower cost to provide free options for people who don't wanna pay subscription fees.
(context is (V)RAM)
Eh, sort of; it's more about operating costs. The larger the context size, the more expensive the model is to run, literally in terms of power consumption.
Keep in mind we're on the scale of fractions of a cent here, but multiply that by millions of users and it adds up fast.
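Back-of-the-envelope with completely made-up numbers, just to show how it adds up:

```python
# Illustrative only: a fraction of a cent per request times a big user base.
cost_per_request = 0.002        # $0.002, i.e. a fifth of a cent (made up)
requests_per_user_per_day = 20  # made up
users = 5_000_000               # made up

daily_cost = cost_per_request * requests_per_user_per_day * users
print(f"${daily_cost:,.0f} per day")  # $200,000 per day
```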
But the end result is that the agent will fuck stuff up, and will even quickly /forget/ that it fucked up if you don't catch it ASAP.
A lot of them have a context window that can be wiped out within like, 2 minutes of steady busywork...
I love how your response to the catastrophic results of stupidly trusting ai is "pay more money to ai companies".
Sane person's response: don't trust llms.
What are you talking about.
No? I never said that.
I just explained /why/ it happened; nowhere in my post did I say, or imply, that someone should pay for more expensive models. What are you smoking?
You just have to be aware that they have a very short memory when using a cheap model, and assume anything you wrote a minute ago has already left that memory. That's why they produce pretty dumb output if you try to depend on it... so... don't depend on it.
I never understood how a statistical word-predicting model was expected to be obedient in the first place... of course we can train the model to say yes rather than no to command-sounding phrases, but that's a rather shallow mechanism.
Just as I have previously instructed.
"My chatbot deleted my email!"
"Our chatbot, comrade"
Lol. Lmao, even.
Why the candlestick chart?
Because of all the investments lol
More or less than the employees?
...I'm sorry, Dave, I cannot do that...
Hello Skynet my old friend...
The irony is that this is like Skynet, but if it had Alzheimer's.
would you like to play a game?
Sounds more like “Media find anti-AI angle that helps them get paid more for ad impressions”.