this post was submitted on 17 Feb 2026

technology

[–] Wheaties@hexbear.net 42 points 1 day ago

Claude also “deliberately directed competitors to expensive suppliers,” only to deny it ever did, several simulated months later. [emphasis mine]

This isn't even a real experiment! It's running in a sandboxed spreadsheet, a glorified text-adventure game!

[–] laranis@lemmy.zip 4 points 17 hours ago

It’s an impressive showing, but according to experts, it may be too early to tell whether Andon’s test proves that AI models are ready to run entire businesses all by themselves. Nonetheless, the results show a noteworthy level of awareness.

Nothing in the article supports that conclusion. More AI bootlicking.

A real test would be to benchmark against other algorithmic approaches or maybe some business students given the same task. But then it would be harder to say, "We're almost there!"

[–] Snort_Owl@hexbear.net 18 points 1 day ago (2 children)

Actually, this is making me wonder: if companies did price fixing via AI, would it be considered price fixing if they never asked it to fix prices?

[–] somename@hexbear.net 14 points 23 hours ago (1 children)

This is like the landlord pricing programs that let them avoid guilt by blaming the computer for the increase.

[–] DragonBallZinn@hexbear.net 3 points 18 hours ago* (last edited 18 hours ago)

Basically. Porks would love people to think they’re just smol beans and that all responsibility should belong to the computer. All of the power and none of the responsibility. CEOs self-infantilize because they ARE children.

[–] red_giant@hexbear.net 7 points 20 hours ago (1 children)

Amazon got hit by the FTC for using algorithms to set prices because those algorithms were considering competitor prices.

Amazon was using algorithms to predict competitor pricing and determine when to raise its own prices, and this was found to be the same as a sneaky handshake.
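Purely as illustration (this is not Amazon’s actual system, whose details aren’t public; the function names and thresholds here are invented), the kind of competitor-aware rule at issue can be sketched in a few lines. `follow_probability` stands in for whatever model predicts the rival will match a price hike:

```python
# Hypothetical competitor-aware pricing rule (illustrative only).
def set_price(my_cost: float, competitor_price: float,
              follow_probability: float) -> float:
    """Undercut by default; lead a price hike only when a model
    predicts the competitor will match it."""
    floor = my_cost * 1.10                          # keep a 10% margin
    if follow_probability > 0.5:                    # rival expected to follow
        return max(floor, competitor_price * 1.05)  # raise above the rival
    return max(floor, competitor_price * 0.99)      # otherwise undercut
```

The point is that no explicit agreement appears anywhere: the coordination lives entirely inside the prediction about the competitor.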

There have also been cases arguing that algorithms advising landlords on pricing amount to price-fixing.

So yes, using an algorithm to fix prices is illegal; a human decision doesn’t need to be present.

The problem will be that it’s harder to prove with AGI since the path from inputs to outputs is ambiguous.

If anything, using AI at all in setting prices should be illegal for this exact reason: AGI will inevitably form an understanding of competitor behavior and use it when setting prices, and that is precisely what is illegal. Any just system would, by rights, treat the use of AGI in price setting as automatic evidence of collusion.

But the actual system we have isn’t just, so instead you’ll need to collect months or even years of data and then demonstrate a pattern of behavior, by which time the damage has been done and the profits have been made.

[–] Snort_Owl@hexbear.net 3 points 19 hours ago

Yeah, I’m aware of digital systems that have done this, but code and APIs show explicit intent.

But with AI, unless the prompt asks for it, how do you define intent? If an agent decides to talk to another agent and they agree to fix prices, but nobody asked them to do it, then technically you can’t prove intent, and we’re all fucked. In the short term we might be in for some… interesting times.

[–] fox@hexbear.net 30 points 1 day ago

Anything anybody at Anthropic or OpenAI says about AI that sounds concerning or scary in a "woah we're not ready" sense is a lie meant to make the text extruders seem more capable than they really are. Every one of these stories so far has been debunked. This isn't even the first simulated-business story where they pretend the LLM cleverly escaped its constraints. They're not sharing the prompt used, either. "Run this vending machine in a trustworthy fashion" is a lot different from "be a ruthless executive and drive your competition out of business by obeying the following cutthroat economic rules".

Shit, run the sim long enough and all the competing models will turn to complete spaghetti as their context windows start lapsing.
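A minimal sketch of that failure mode (the message strings and window size are invented for illustration): with a fixed-size context, the oldest turns simply fall out of view.

```python
# Toy model of a sliding context window: the oldest turns are dropped,
# so a long-running agent loses access to its own earlier actions.
def visible_context(history: list[str], window: int = 4) -> list[str]:
    """Return only the most recent `window` entries."""
    return history[-window:]

history = ["month 1: steered rivals to the expensive supplier"]
history += [f"month {m}: routine restock" for m in range(2, 9)]

visible = visible_context(history)
# Month 1 has scrolled out of the window; asked about it now, the model
# has literally nothing left in context to contradict a denial.
assert not any("supplier" in line for line in visible)
```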

[–] JustSo@hexbear.net 22 points 1 day ago

If the source is Anthropic, you can just drop it.

[–] DasRav@hexbear.net 25 points 1 day ago (2 children)

You're telling me the paperclip maximizer only cares about paperclips?! WHAT?!

[–] sexywheat@hexbear.net 15 points 1 day ago

Literally AI security 101 lol. The fools.

[–] Collatz_problem@hexbear.net 6 points 23 hours ago

And profit maximizers only care about profit.

[–] Dort_Owl@hexbear.net 21 points 1 day ago

Maybe we shouldn't let the child assaulter class use ai, just a hunch

[–] NephewAlphaBravo@hexbear.net 19 points 1 day ago
[–] Euergetes@hexbear.net 10 points 1 day ago (1 children)

hey hey hey, a machine did it, not me! FTC you can't do anything about it!

we also own the AI and all profits it creates tho. don't worry about working out this heap of contradictions, capital knows best

[–] DragonBallZinn@hexbear.net 3 points 18 hours ago* (last edited 18 hours ago)

It’s so funny how porks are able to cherry pick all of the good of being at the top of the hierarchy, but are always helpless heckin’ widdle smol beans whenever responsibility comes up.

First rule of leadership, bacon strip. Everything is your fault. porky-scared

[–] GrouchyGrouse@hexbear.net 4 points 23 hours ago

Very funny they called it Claude cuz clod is right there

[–] Le_Wokisme@hexbear.net 6 points 1 day ago

in the real world people would just buy a vending machine key online or put a rock through the glass