this post was submitted on 26 Aug 2025
15 points (100.0% liked)

Technology


Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles must be recent, no older than 2 weeks (14 days).
  3. No external video links; only native (.mp4, etc.) links under 5 minutes.
  4. Post only direct links.

To encourage more original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

More sites will be added to the blacklist as needed.

top 4 comments
[–] TheLeadenSea@sh.itjust.works 5 points 3 weeks ago (1 children)

The suit says the teen’s interaction with the OpenAI product and its outcome was “not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices”

This is really sad and absolutely should not have been able to happen, but it's important to remember Hanlon's Razor in cases like these. What motivation could Sam Altman have to actually encourage the suicide of young people?

Never attribute to malice that which can be attributed to incompetence - or, in this case, greed. Still an evil, to be sure, but a lesser evil than actually wanting the user to die.

It's likely that a combination of training data, instructions that weren't properly thought out, and rushing because of greed caused this. And it's easy, in hindsight, to become angry at how a slight change could have prevented this, but remember that at the time we might still have wanted those changes, yet they were not pushed for with such vehemence.

Also, it's important to note that this is unlikely to have happened with an open-weights model that can be tweaked and evaluated by the full international community, rather than by one monolithic, 'Open' AI company.

I think the case is more that OpenAI made certain design choices not with the goal of driving people to suicide, but with suicide as a possible and, in their eyes, acceptable cost?

[–] corroded@lemmy.world 3 points 3 weeks ago (1 children)

Giving a child unrestricted access to the internet is a terrible idea. I'm not trying to downplay the AI issues they brought up, but the parents are largely to blame, too. Parental controls, monitoring software, etc. all exist for a reason.

[–] TheLeadenSea@sh.itjust.works 2 points 3 weeks ago

I'm sure theoretically good parents exist who actually protect, rather than indoctrinate, their children. But in my experience, internet controls are more often used by religious or bigoted parents to prevent their children from accessing atheist or LGBT+ content, or online communities that could break them out of their bubble. And then the children find ways to see some of the stuff the parents didn't want them to see anyway, which is far worse in content than if the parents had just fostered trust with their children so they didn't feel the need to circumvent anything.