[–] mindbleach@sh.itjust.works 1 points 3 days ago (1 children)

That's a lot of "could" and "will" from a year-old article, primarily about concerns from two years ago, while image models today keep getting smaller and better. Nobody found a second internet's worth of JPEGs; better training on the same data, or even better labels on less data, beat a simple obsession with scale.

Yes, photocopying a photocopy degrades it, but diffusion is a denoising algorithm: un-degrading an image is its central function. And 'make it look less AI' is literally the recipe for a generative adversarial network - a generator trained against a detector.
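
To spell out what 'denoising' means mechanically, here's a minimal sketch of one DDPM-style reverse step (the standard update from Ho et al. 2020); `eps_model` and the schedule tensors are hypothetical stand-ins, not any particular library's API:

```python
import torch

def denoise_step(x_t, t, eps_model, alphas, alphas_cumprod, betas):
    """One reverse-diffusion step (standard DDPM update, Ho et al. 2020).

    eps_model stands in for a trained network that predicts the noise
    present in x_t at timestep t; the schedule tensors are precomputed.
    """
    # The model's whole job: estimate the degradation in the input.
    eps_hat = eps_model(x_t, t)
    mean = (x_t - betas[t] / torch.sqrt(1.0 - alphas_cumprod[t]) * eps_hat) \
           / torch.sqrt(alphas[t])
    if t == 0:
        return mean  # final step: return the fully denoised estimate
    # Earlier steps re-add a controlled amount of sampling noise.
    return mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
```

Every sampling step is literally "estimate the degradation, then remove it," which is why the photocopier analogy doesn't transfer cleanly.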

Anyway, the grim truth is that the central concern is mistaken: training data for cancer screening does not require that the patient lived.

[–] ell1e@leminal.space 1 points 3 days ago* (last edited 3 days ago) (1 children)

The article links a study. Where's your study showing that collapse isn't a concern?

For what it's worth, my worry was never specifically about cancer; those doctors were just one example where the likely-universal unlearning effect was actually measured.
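
To make concrete what the collapse studies measure, here's a deliberately tiny sketch: the simplest possible "model" (a Gaussian) refit each generation only to samples drawn from the previous generation's model. The sample size, generation count, and seed are arbitrary choices for illustration, not anything from the linked study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from the "real" data distribution: a standard normal.
mu, sigma = 0.0, 1.0

for gen in range(201):
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    # Each generation "trains" only on output of the previous model:
    # draw a small synthetic sample, then refit the two parameters.
    samples = rng.normal(mu, sigma, size=10)
    mu, sigma = samples.mean(), samples.std()
```

Because each refit is a noisy, slightly biased estimate, sigma performs a downward-drifting random walk: run it long enough and the distribution's tails are forgotten. Pipelines that keep mixing in real data behave differently, which is exactly the point of contention here.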

[–] mindbleach@sh.itjust.works 1 points 3 days ago* (last edited 3 days ago) (1 children)

I again submit the last two years, in which model collapse did not happen. The doom-and-gloom predictions - some rather gleeful - plainly missed the mark. The proliferation of generated content has not ruined the content generators, and it's certainly not because we're any good at marking generated content. Early symptoms went away entirely, and in practice the problem has been addressed.

As for "unlearning": its universality is exactly why it's a made-up problem. Nobody loudly complains that X-rays make doctors worse at feeling around for lumps.