this post was submitted on 23 Nov 2025
37 points (82.5% liked)

In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?

[–] Aatube@lemmy.dbzer0.com 1 points 12 hours ago* (last edited 12 hours ago)

> Citation needed.

This is a New York Times article. The New York Times itself is the citation by default, just as with any other mainstream outlet. And even then, this specific article does attribute its claims:

> To understand how this happened, The New York Times interviewed more than 40 current and former OpenAI employees — executives, safety engineers, researchers. Some of these people spoke with the company’s approval, and have been working to make ChatGPT safer. Others spoke on the condition of anonymity because they feared losing their jobs.

> Claude is trying to lick my ass clean every time I ask it a simple question

The article only said they made a test, not that the models weren't failing it; that they were failing is in fact what the linked paper says. This is nothing new: LLMs likewise kept failing a certain intelligence test, devised around that same period, until about 2024.

> As soon as they found experts who were willing to say something else than "don't make a chatbot".

That's 55%: https://humanfactors.jmir.org/2025/1/e71065