this post was submitted on 28 Mar 2026
147 points (92.0% liked)

Technology


Full Report (76-page PDF).

top 25 comments
[–] aggelalex@lemmy.world 3 points 1 day ago
[–] panda_abyss@lemmy.ca 7 points 1 day ago

Maybe, just maybe, don’t let your chat bot make executive decisions independently.

[–] snooggums@piefed.world 50 points 2 days ago (1 children)

"Researchers find more defective chatbots that don't follow instructions because glorified text completion doesn't actually know or understand things."

It isn't evading or ignoring. It is a fucking sentence autocomplete on steroids.

[–] Cellari@lemmy.world 9 points 2 days ago

And then companies will just feed it more wild data from the users, thinking that will fix it eventually.

[–] XLE@piefed.social 43 points 2 days ago* (last edited 2 days ago)

The language in the linked post is disinformation. AI does not "scheme," but that's the wording the post uses throughout. "Scheming" implies competence, from a person. This post is evidence of a dysfunctional piece of software failing to work properly, made by apparently increasingly incompetent developers.

Upon looking a little closer, this is a fearmonger website devoted to overinflating claims of AI power while ignoring real-life present-day harms. They claim to be inspired by Sam Bankman-Fried's Effective Altruism scam. They show pictures of beautiful beaches but fail to mention AI's environmental harms. Their paranoid demands, if enacted, would calcify Big Tech's monopoly on AI and help nobody affected by its abuses on the planet.

[–] pixxelkick@lemmy.world 31 points 2 days ago (2 children)

They don't, lol.

Pretty much always, this is just down to the fact that cheaper (especially free) chatbots have very limited context windows.

Which means the initial restrictions you set, like "don't do this, don't touch that," get dropped; the LLM no longer has them loaded. But it does have, in its recent history, the very clear and urgent directives of "keep working on this task, it's important," so it'll autocomplete whatever it's gotta do to accomplish the task.

When you react to their fuck-up, it *reloads* the context back in.

So now the LLM has in its history just this:

  1. It doing a thing against the rules
  2. The user yelling at it
  3. The rules now getting loaded back in on top

So now the LLM is going to autocomplete its generated text on top, being very apologetic and going on about how it'll never happen again.

That's all there is to it.
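The eviction behavior described above can be sketched as a toy model. This is not any real provider's implementation; the message-count window (real systems budget by tokens), the rule text, and the `ToyChatContext` class are all made up for illustration:

```python
# Toy sketch of the failure mode above: a fixed-size context window
# that silently evicts the oldest messages first. Real providers use
# token budgets, not message counts; this is only an illustration.

from collections import deque

class ToyChatContext:
    def __init__(self, max_messages: int):
        # When the window is full, the oldest message falls off the left.
        self.window = deque(maxlen=max_messages)

    def add(self, role: str, text: str):
        self.window.append((role, text))

    def visible_roles(self):
        return [role for role, _ in self.window]

ctx = ToyChatContext(max_messages=4)
ctx.add("system", "RULE: never touch the production database")  # initial restriction
ctx.add("user", "Keep working on this task, it's important!")
for i in range(3):                                              # steady busywork...
    ctx.add("assistant", f"doing step {i} of the task")

# The rule has been evicted; only the urgent task chatter remains.
print(ctx.visible_roles())  # ['user', 'assistant', 'assistant', 'assistant']
```

The point of the sketch: nothing "decides" to drop the rule; it just ages out while the task-related messages stay recent.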

[–] village604@adultswim.fan 2 points 1 day ago (1 children)

It's not just cheap agents. I've witnessed paid MS Copilot recommend a decade-old, deprecated Microsoft product in response to a single-sentence prompt, then, when called out, a non-existent Microsoft product, then finally give the right answer after being called out a second time.

[–] pixxelkick@lemmy.world 2 points 1 day ago (1 children)

LLMs are fundamentally not good at answering fact-based questions. Unless it's an incredibly well-known answer that has never changed (like a math or physics question), they don't magically "know" things.

However, they're way better at summarizing and reasoning.

Give them access to playwright web search capability via MCP tooling to go research info, find the answer(s), and then produce output based on the results, and now you can get something useful.

"Whats the best way to do (task)" << prone to failure, functional of how esoteric it is.

"Research for me the top 3 best ways to do (task), report on your results and include your sources you found" << actually useful output, assuming you have something like playwright installed for it.

[–] village604@adultswim.fan 1 points 1 day ago

A user on here built what appears to be a layer over the LLM that runs the query through several other processes first in an attempt to answer the question before it gets to the LLM, and I think it's brilliant.

[–] MalReynolds@slrpnk.net 2 points 2 days ago (1 children)

Cheap fuckers cheaping out, shocker (context is (V)RAM). AI speedrunning enshittification, who'd of thunk.

[–] pixxelkick@lemmy.world 2 points 2 days ago (1 children)

Uh... no, it's just the free models being free; they're intentionally lower cost to provide free options for people who don't wanna pay subscription fees.

(context is (V)RAM)

Eh, sort of. It's more about operating costs: the larger the context size, the more expensive the model is to run, literally in terms of power consumption.

Keep in mind we are on the scale of fractions of cents here, but multiply that by millions of users and it adds up fast.
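A rough way to see why context size drives cost: self-attention compute grows roughly quadratically with context length. The numbers below are an illustration only (real serving costs also depend on KV caching, batching, and hardware):

```python
# Rough model of relative attention compute vs. context length.
# Assumes the ~quadratic scaling of self-attention and ignores
# real-world factors like KV caching and batching.

def relative_attention_cost(context_tokens: int, base_tokens: int = 4096) -> float:
    return (context_tokens / base_tokens) ** 2

print(relative_attention_cost(8192))   # 4.0  (2x context -> ~4x attention compute)
print(relative_attention_cost(32768))  # 64.0 (8x context -> ~64x)
```

Which is why a provider serving millions of free users has a strong incentive to keep windows small.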

But the end result is that the agent will fuck stuff up, and will even quickly *forget* that it fucked up if you don't catch it asap.

A lot of them have a context window that can be wiped out within like, 2 minutes of steady busywork...

[–] davidagain@lemmy.world 0 points 1 day ago (1 children)

I love how your response to the catastrophic results of stupidly trusting ai is "pay more money to ai companies".

Sane person's response: don't trust llms.

[–] pixxelkick@lemmy.world 1 points 1 day ago

What are you talking about.

No? I never said that.

I just explained *why* it happened. Literally nowhere in my post did I say, or imply, that someone should pay for more expensive models. What are you smoking?

You just have to be aware that they have a very short memory when using a cheap model, and assume anything you wrote a minute ago has already left its memory. That's why they produce pretty dumb output if you try to depend on that... so... don't depend on that.

[–] zeca@lemmy.ml 1 points 1 day ago

I never understood how a statistical word-predicting model was expected to be obedient in the first place... of course we can train the model to say yes rather than no to command-sounding phrases, but that's a rather shallow mechanism.

[–] aviationeast@lemmy.world 18 points 2 days ago (1 children)

Just as I have previously instructed.

[–] urushitan@kakera.kintsugi.moe 13 points 2 days ago

"My chatbot deleted my email!"

"Our chatbot, comrade"

Lol. Lmao, even.

[–] OhmsLawn@lemmy.world 3 points 2 days ago (1 children)

Why the candlestick chart?

[–] kurwa@lemmy.world 7 points 2 days ago

Because of all the investments lol

[–] etchinghillside@reddthat.com 1 points 2 days ago

More or less than the employees?

[–] notsure@fedia.io 1 points 2 days ago

...I'm sorry, Dave, I cannot do that...

[–] devolution@lemmy.world 1 points 2 days ago (2 children)

Hello Skynet my old friend...

[–] Telorand@reddthat.com 3 points 2 days ago

The irony is that this is like Skynet, but if it had Alzheimer's.

[–] notsure@fedia.io 1 points 2 days ago

would you like to play a game?

[–] Codpiece@feddit.uk 0 points 2 days ago* (last edited 2 days ago)

Sounds more like “Media find anti-AI angle that helps them get paid more for ad impressions”.