this post was submitted on 28 Aug 2025
600 points (99.8% liked)

Technology
(page 2) 50 comments
[–] yesman@lemmy.world 103 points 1 day ago (1 children)

"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," the blog post notes. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."

See? Even the people who make AI don't trust it with important decisions. And the "trained" humans don't even see it if the AI doesn't flag it first. This is just a microcosm of why AI is always the weakest link in any workflow.
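The routing described in the quote — human reviewers only ever see what the automated classifier flags — can be sketched as follows. This is a hypothetical illustration; the function names, threshold, and toy keyword classifier are all invented, not OpenAI's actual system:

```python
# Hypothetical sketch of the review pipeline the blog post describes:
# humans only review conversations the classifier flags, so anything
# the model misses never reaches a human at all.

FLAG_THRESHOLD = 0.8  # invented cutoff for illustration

def route_conversation(conversation, classifier, review_queue):
    """Route a conversation based on an automated risk score."""
    score = classifier(conversation)       # model-assigned score, 0.0-1.0
    if score >= FLAG_THRESHOLD:            # only flagged chats get human eyes
        review_queue.append(conversation)
        return "escalated"
    return "ignored"                       # false negatives pass silently

# Toy stand-in for the model: a crude keyword match.
def toy_classifier(text):
    return 1.0 if "harm" in text.lower() else 0.0

queue = []
route_conversation("planning to harm others", toy_classifier, queue)    # escalated
route_conversation("planning a surprise party", toy_classifier, queue)  # ignored
```

The structural point stands regardless of how good the classifier is: the human review step is gated entirely on the model's output.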

This is exactly the use-case for an LLM and even OpenAI can't make it work.

[–] Perspectivist@feddit.uk 36 points 1 day ago (13 children)

This is exactly the use-case for an LLM

I don't think it is. An LLM is a language-generating tool, not a language-understanding one.

[–] Mr_Dr_Oink@lemmy.world 31 points 1 day ago

Of course they are. They are a literal data farm. People need to stop using it.

[–] Showroom7561@lemmy.ca 40 points 1 day ago

Funny, now you'll have the cops arresting you for prompts like "how to survive being homeless?", rather than social services when you prompt "how to avoid being homeless?".

And will authorities be called when someone prompts "how to shoot wild animals?" when asking about wildlife photography? 😆

[–] Kyrgizion@lemmy.world 48 points 1 day ago (2 children)

"Hey ChatGPT, how many human corpses can 12 pigs who haven't been fed in a week process?"

[–] db2@lemmy.world 21 points 1 day ago (6 children)

They don't eat teeth. Just saying.

[–] Regrettable_incident@lemmy.world 2 points 1 day ago (1 children)

You want to keep those for a necklace.

[–] db2@lemmy.world 2 points 21 hours ago

Everyone knows ears make a better necklace.

[–] fartographer@lemmy.world 15 points 1 day ago

Hence the phrase: as toothless as a pig

[–] granolabar@kbin.melroy.org 33 points 1 day ago (9 children)

Well, if idiots share their crime plans with a corp, what do they expect?

However, police won't do anything.

And even if they do, prosecutors won't.

Just look at that incident in Nevada with the Israeli pedophile: cops busted him, but the federal prosecutor refused to charge him.

Laws are enforced selectively.

[–] DeathsEmbrace@lemmy.world 28 points 1 day ago* (last edited 1 day ago) (1 children)

Are you seriously comparing a corrupt Israeli politician to an average joe? Israel can get away with murdering Americans, and they would apologize that they didn't die earlier.

[–] UnderpantsWeevil@lemmy.world 7 points 1 day ago

However, police won’t do anything.

This is the punchline to the joke of mass surveillance. You can have people doing crimes in clear view of the police and they just stand around. The police aren't for deterring crime, they're a jobs program and a human shield against harm to private property.

[–] spankmonkey@lemmy.world 24 points 1 day ago (4 children)

Lazy authors of crime themed novels are sweating so heavily right now.

[–] veni_vedi_veni@lemmy.world 9 points 1 day ago (2 children)

Yo, I was just joking about making a gallon of PCP

[–] Archer@lemmy.world 1 points 1 day ago

RIP Trevor. Still can’t believe he died trying to suck his own dick

[–] mienshao@lemmy.world 15 points 1 day ago (4 children)

I’m cool with this! OpenAI ought to take those suicide instructions it created for that teen and send them to those pigs for some inspo :)

[–] UnderpantsWeevil@lemmy.world 10 points 1 day ago

I gotta say... imagine being the police department on the receiving end of that firehose.

[–] Thedogdrinkscoffee@lemmy.ca 7 points 1 day ago* (last edited 1 day ago)

This sounds like PR deflection after egging the kid on to suicide. But I still don't doubt it.

[–] ArmchairAce1944@discuss.online 0 points 21 hours ago (6 children)

I asked ChatGPT some spicy questions a long time ago (on the order of months), about privacy and maintaining privacy online. I even asked it how to stay private from itself. I deleted that account many months ago, and it was not immediately linked to my 'real' identity. I've also asked it some other spicy questions about stuff I won't reveal here. Again, months ago. By its own admission, if something is reported, it will be acted upon almost immediately. If something is super illegal (as in, "I want to kill so-and-so individual"), it is not only flagged for immediate review but also reported to the relevant law enforcement right away.

ChatGPT told me this.

The only way to use ChatGPT, if you must, is with A: a throwaway email (must be reusable), such as a darknet email or something that doesn't link it to you by name (such as Proton or Tutamail), and B: over the Tor network.
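As a minimal sketch of point B, this is what routing Python HTTP traffic through a local Tor daemon looks like. It assumes Tor is running on its default SOCKS port (9050); the helper name is invented, and no request is actually sent here:

```python
# Sketch: proxy settings that route HTTP traffic through a local Tor
# daemon's SOCKS5 port (default 9050). The "socks5h" scheme makes DNS
# resolution happen inside Tor too, so lookups don't leak.
TOR_SOCKS_PORT = 9050

def tor_proxies(port=TOR_SOCKS_PORT):
    proxy = f"socks5h://127.0.0.1:{port}"
    return {"http": proxy, "https": proxy}

# Usage (requires the `requests` and `pysocks` packages plus a running
# Tor daemon -- not executed here):
# import requests
# r = requests.get("https://check.torproject.org/api/ip",
#                  proxies=tor_proxies())
```

Note that the proxy only covers transport; the account itself (email, payment, phone number) is what point A is about.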

[–] FireWire400@lemmy.world 9 points 1 day ago

Self-harm doesn't count apparently

[–] SlippiHUD@lemmy.world 7 points 1 day ago

It can call the police when you threaten to hurt others, but it actively prevents teens from seeking outside help when they talk about killing themselves.

[–] ExLisper@lemmy.curiana.net 6 points 1 day ago (6 children)

All of this is so fucking bizarre I can't even wrap my head around it anymore. It's a bot. How the fuck is it suddenly killing people? How is talking to a bot a crime now? Did everyone lose their minds?


Well there's another reason to not use ChatGPT: "Tell me good slogans against the government" can get you arrested!
