this post was submitted on 17 Feb 2026
72 points (100.0% liked)

technology

[–] Snort_Owl@hexbear.net 18 points 1 day ago (2 children)

Actually this is making me think: if companies did price fixing via AI, is it still considered price fixing if they never asked it to fix prices?

[–] somename@hexbear.net 14 points 1 day ago (1 children)

This is like the landlord pricing programs that let them avoid guilt by blaming the computer for the increases.

[–] DragonBallZinn@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

Basically. Porks would love people to think they're just smol beans and that all responsibility belongs to the computer. All of the power and none of the responsibility. CEOs self-infantilize because they ARE children.

[–] red_giant@hexbear.net 7 points 1 day ago (1 children)

Amazon got hit by the FTC for using algorithms to set prices because those algorithms were considering competitor prices.

Amazon was using algorithms to predict competitor pricing and determine when to raise its own prices, and this was found to be the same as a sneaky handshake.
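A toy sketch of the kind of rule being described, a price set as a pure function of what competitors are charging. All names and numbers here are hypothetical, not from the actual FTC case:

```python
# Hypothetical competitor-aware pricing rule: the output price depends
# directly on observed rival prices, which is exactly what regulators
# flagged as functioning like a handshake.

def set_price(my_cost: float, competitor_prices: list[float]) -> float:
    """Slightly undercut the cheapest rival, but never sell below cost + margin."""
    floor = my_cost * 1.05                   # minimum acceptable margin
    if not competitor_prices:
        return floor                         # no rivals observed: charge the floor
    target = min(competitor_prices) * 0.99   # track and undercut the market
    return max(target, floor)

print(set_price(10.0, [15.0, 14.0]))  # about 13.86: follows rivals down
print(set_price(10.0, [25.0, 30.0]))  # about 24.75: follows rivals up
```

Note the rule never has to "agree" with anyone; coordination emerges purely from every seller reacting to the same observed prices.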

There have also been cases arguing that algorithms advising landlords on pricing amount to price fixing.

So yes, using an algorithm to fix prices is illegal; an explicit human decision doesn't need to be present.

The problem will be that it’s harder to prove with AGI since the path from inputs to outputs is ambiguous.

If anything, using AI at all in setting prices should be illegal for this exact reason: AGI will inevitably form an understanding of competitor behavior and use it when setting prices, and that is precisely what is illegal. So any just system would, by rights, treat the use of AGI in price setting as automatically establishing that collusion is taking place.

But the actual system we have isn't just, so instead you'll need to collect months or even years of data and then demonstrate a pattern of behavior, by which time the damage has been done and the profits have been made.

[–] Snort_Owl@hexbear.net 3 points 1 day ago

Yeah, I'm aware of digital systems that have done this, but code and APIs show explicit intent.

But with AI, unless the prompt asks for it, how do you define intent at that point? If an agent decided to talk to another agent and they agreed to fix prices, but nobody asked them to do it, then technically you can't prove intent, and we're all fucked. In the short term we might be in for some… interesting times.
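The contrast being drawn here can be sketched concretely: a hard-coded rule carries its intent on its face, while a trained model can exhibit the same behavior with nothing legible in its source. A minimal, hypothetical illustration:

```python
# Two pricing functions with identical behavior. Only one shows its
# intent in source code; the other buries it in learned parameters.

# Explicit rule: anyone auditing this code can read the collusive intent.
def colluding_price(rival_price: float) -> float:
    return rival_price  # "match the rival, never undercut" is written right here

# Opaque model: the same policy encoded as weights that came out of
# training, not out of a programmer's stated intention.
weights = [1.0, 0.0]  # hypothetical learned parameters

def model_price(rival_price: float) -> float:
    return weights[0] * rival_price + weights[1]

# Behaviorally indistinguishable, but only the first proves intent.
assert colluding_price(12.0) == model_price(12.0) == 12.0
```

An auditor reading the first function has direct evidence; an auditor handed only `weights` has a pattern of outputs to reconstruct, which is the months-of-data problem described above.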