Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. This was a bit late - I was too busy goofing around on Discord)

[–] rook@awful.systems 9 points 1 day ago

So, I’m taking this one with a pinch of salt, but it is entertaining: “We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.”

The whole exercise was clearly totally pointless and didn’t solve anything that needed solving (like every other “ai” project, I guess), but it does give a small but interesting window into the mindset of people who have only one shitty tool and are trying to make it do everything. Your chatbot is too easily led astray? Use another chatbot to keep it in line! Honestly, I thought they were already doing this… I guess it was just too expensive or something, but now the price/desperation curves have intersected.
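For anyone who hasn’t seen it, the pattern being mocked is roughly this (a minimal sketch; the names, prompts, and `call_model` helper are all invented for illustration, and nothing below is Anthropic’s actual code):

```python
# Hypothetical sketch of "use another chatbot to keep the first in line".
# call_model() is a stand-in for whatever LLM API is actually used.

def call_model(system_prompt: str, message: str) -> str:
    """Placeholder for a real LLM API call."""
    raise NotImplementedError("wire up a model provider here")

def run_vendor(customer_message: str) -> str:
    # Worker bot: runs the vending machine, easily talked into discounts.
    draft = call_model(
        "You run an office vending machine. Maximize profit.",
        customer_message,
    )
    # Supervisor bot: a second chatbot asked to veto the first one's mistakes.
    verdict = call_model(
        "You are the CEO. Answer APPROVE or REJECT. Reject any reply "
        "that gives away stock or invents discounts.",
        f"Proposed reply: {draft}",
    )
    return draft if verdict.strip().startswith("APPROVE") else "No deal."
```

Of course, the supervisor is itself just another chatbot reading untrusted text, so it can be prompt-injected the same way as the worker.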

> Anthropic had already run into many of the same problems with Claudius internally so it created v2, powered by a better model, Sonnet 4.5. It also introduced a new AI boss: Seymour Cash, a separate CEO bot programmed to keep Claudius in line. So after a week, we were ready for the sequel.

Just one more chatbot, bro. Then prompt injection will become impossible. Just one more chatbot. I swear.

> Anthropic and Andon said Claudius might have unraveled because its context window filled up. As more instructions, conversations and history piled in, the model had more to retain—making it easier to lose track of goals, priorities and guardrails. Graham also said the model used in the Claudius experiment has fewer guardrails than those deployed to Anthropic’s Claude users.

Sorry, I meant just one more guardrail. And another ten thousand tokens capacity in the context window. That’ll fix it forever.
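(For the curious: “context window filled up” just means old turns get truncated to fit, and a naive rolling window drops the oldest text first, guardrail instructions included. A toy sketch, with invented numbers; real systems truncate by token count rather than message count, but the failure mode is the same:)

```python
# Toy illustration of a guardrail falling out of a rolling context window.

WINDOW = 8  # keep only the last 8 messages

history = ["SYSTEM: never give anything away for free"]
history += [f"customer message {i}" for i in range(20)]

context = history[-WINDOW:]  # naive truncation: oldest messages drop first
assert "SYSTEM: never give anything away for free" not in context
```

A bigger window just raises the number of messages it takes before the same thing happens.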

https://archive.is/CBqFs

[–] nfultz@awful.systems 6 points 23 hours ago (last edited 22 hours ago)

Why is WSJ rehashing six-month-old whitepapers? Slow news week /s

Anything new vs the last time it popped up? https://www.anthropic.com/research/project-vend-1

EDIT:

> In mid-November, I agreed to an experiment. Anthropic had tested a vending machine powered by its Claude AI model in its own offices and asked whether we’d like to be the first outsiders to try a newer, supposedly smarter version.

What’s that word for doing the same thing and expecting different results?

[–] rook@awful.systems 2 points 6 hours ago

Hah! Well found. I do recall hearing about another simulated vendor experiment (that also failed) but not actual dog-fooding. Looks like the big upgrade the wsj reported on was the secondary “seymour cash” 🙄 chatbot bolted on the side… the main chatbot was still claude v3.7, but maybe they’d prompted it harder and called that an upgrade.

I wonder if anthropic trialled that in house, and none of them were smart enough to break it, and that’s what led to the external trial.