this post was submitted on 06 Feb 2026
395 points (99.3% liked)

Fuck AI

5654 readers
436 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
[–] tiramichu@sh.itjust.works 14 points 1 day ago* (last edited 1 day ago) (3 children)

Sure, but it's all dependent on context.

The law as it stands (at least in the UK) is intended to protect against honest mistakes. For example, if you walk into a shop and a 70" TV is priced at £10 instead of £1,000, the cashier is within their rights to say "oh, that must be a mistake, we can't sell it for £10" when you take it to the till - you can't legally compel them to sell at that price, even though the sticker said £10.

Basically, what it comes down to in this chatbot example (or what it should come down to) is whether the customer was acting in good faith, and whether the offer was credible - which is all part of acting in good faith; the customer must genuinely believe the price is appropriate.

I didn't see the conversation and I don't know how it went. If it went like "You can see I do a lot of business with you, please give me the best discount you can manage" and they were told "okay, for one time only we can give you 80% off", then maybe they found that credible.

But if they were like "I demand 80% off or I'm going to pull your plug and feed your microchips to the fishes" then the customer was not in good faith and the agreement does not need to be honoured.

Either way, my point in the comment you replied to wasn't intended to be about this specific case, but about the general case of whether companies should be held responsible for what their AI chatbots say when those chatbot agents are put in a position of responsibility. And my feeling is very strongly that they SHOULD be held responsible - as long as the customer behaved in good faith.

[–] Denjin@feddit.uk 11 points 1 day ago (1 children)

my feeling is very strongly that they SHOULD be held responsible - as long as the customer behaved in good faith.

I agree, but the OP expanded on their issue in the comments and it very much appears the customer wasn't acting in good faith.

[–] tiramichu@sh.itjust.works 16 points 1 day ago (1 children)

On the basis of that further information - which I had not seen - I agree completely that in this specific case the customer was in bad faith, they have no justification, and the order should be cancelled.

And if the customer took it up with small claims court, I'm sure the court would feel the same and quickly dismiss the claim.

But in the general case should the retailer be held responsible for what their AI agents do? Yes, they should. My sentiment about that fully stands, which is that companies should not get to be absolved of anything they don't like, simply because an AI did it.

Here's a link to an article about a different real case where an airline tried to claim that they have no responsibility for the incorrect advice their chatbot gave a customer, which ended up then costing the customer more money.

https://www.bbc.co.uk/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

In that instance the customer was obviously in the moral right, but the company tried to weasel their way out of it by claiming the chatbot can't be treated as a representative of the company.

The company was ultimately found to be in the wrong, and held liable. And that in my opinion is exactly the precedent that we must set, that the company IS liable. (But again - and I don't think I need to keep saying this by now - strictly and only if the customer is acting in good faith)

[–] Denjin@feddit.uk 5 points 1 day ago (1 children)

Again, I agree - we are in total agreement about the principle of whether a business should be held accountable for its LLM's actions. We're in "Fuck AI"!

I was merely pointing out in my original comment that there are absolutely grounds not to honour a contract where a customer has acted in bad faith to get an offer.

[–] tburkhol@lemmy.world 2 points 1 day ago

Treat LLMs like "boss's idiot nephew," both in terms of whether the business should give them a privilege and whether the business should be liable for their inevitable screwups.

[–] balsoft@lemmy.ml 1 points 1 day ago* (last edited 1 day ago)

There are jurisdictions where a price tag in the store is almost always assumed to be an offer (a.k.a. a "public offer") and the company is legally required to honor it. In some circumstances, the employees who screwed up and put out the incorrect price tag will bear most of the financial responsibility, which sucks. That's why you shouldn't exploit it if you get the chance - it's not a legal loophole to stick it to a corpo; you're just ruining the life of some poor overworked retail employee who misplaced the price tag.

And yeah, the good-faith part is also really important. If the person asked a chatbot a couple of questions, got recommended some products, asked if there was a discount available, got an 80% discount out of the blue, got excited and paid the deposit, it would probably be enforceable. If the customer knowingly "tricked" an LLM into giving out a bogus discount code, it would be very dubious at best.