this post was submitted on 13 Mar 2026
87 points (100.0% liked)

Technology

[–] BananaIsABerry@lemmy.zip -2 points 9 hours ago (1 children)

Would it be valid, then, to say that a search engine is responsible when someone searches how to do a crime?

How about a forum where people talk about the subject, even if they themselves weren't going to participate in the crimes?

The chatbot is just another avenue to finding information you want to find.

I did read the article, and apparently they're suing because OpenAI had flagged the account as a potential harm to self or others, even though OpenAI had already banned the original account. What more do you want them to do? Report them to the thought police?

[–] XLE@piefed.social 3 points 8 hours ago* (last edited 8 hours ago) (1 children)

If somebody on a forum was helping to plot ways to commit a crime, that person should probably be at least questioned. OpenAI's chatbot is that "somebody" in this case.

[–] ieGod@lemmy.zip -4 points 6 hours ago (1 children)

False equivalence. Tools are not people. Are we going after Magic 8 Balls too?

[–] XLE@piefed.social 3 points 6 hours ago* (last edited 6 hours ago) (1 children)

Come on, don't be so dishonest. Compare similar things. This "tool" is designed to create humanlike, real-time communication, and it's run by a billionaire rapist who could just as easily have groomed the killer himself (thanks to it being a black-box "live service", we don't know where the grooming came from, do we?).

I remember your previous comment from another thread:

Vulnerable people don't get to outsource responsibility.

But apparently billionaires do.

[–] ieGod@lemmy.zip -1 points 5 hours ago (1 children)

The tool isn't sentient; it operates on numerical weights and produces output that mimics its training set. LLMs are pretty impressive at what they can output, but it would be dishonest to attribute human qualities to them. There are decades of implementations of various AI techniques attempting, to varying degrees, to achieve the same. It is on the technical basis, and the technical basis alone, that we should carefully consider legal constraints.

How much a CEO is worth, how trustworthy they are, or what circles they run in shouldn't be part of that consideration.

That doesn't mean I think Altman isn't a turd who can suck a fat one.

[–] XLE@piefed.social 3 points 4 hours ago

Like I said, it is built to be human*like*. Of course it's not human or sentient, but Sam Altman sells ChatGPT with humanizing language, ascribes human attributes to it, and personally subsidized the grooming of people to commit suicide and homicide.