Of course they do; that's not really a shocking statement. I do like this research, though: it actually looks at how the underlying data distribution is forgotten.
So more or less AI inbreeding?
More like AI prion disease.
(There's also a practice where people merge AI models together, and some models are merges of merges with largely unknown lineage; in that space there's a growing inbreeding problem.)
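For anyone unfamiliar, the simplest form of the merging mentioned above is plain parameter-wise weight averaging (sometimes called a "model soup"). A minimal sketch, where the helper name and the fixed `alpha` are illustrative assumptions rather than anyone's actual pipeline:

```python
# Hedged sketch: blend two same-architecture checkpoints by linear
# interpolation of their parameters. Real merges use fancier schemes.
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Blend two state dicts that share keys and tensor shapes."""
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Usage with toy "checkpoints":
sd_a = {"w": torch.tensor([1.0, 2.0])}
sd_b = {"w": torch.tensor([3.0, 4.0])}
merged = merge_state_dicts(sd_a, sd_b)  # {"w": tensor([2., 3.])}
```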
lol best description of what's happening
HAIbsburg
AI Kessler syndrome
So do I…
So basically the only way to have good "AI" ~~LLMs~~ is for it to be a Mechanical Turk?
What if my robot was just a guy?
Not really AI if they can't learn from one another.
You mean LLMs aren't really AI????
> indiscriminate use of model-generated content
Indiscriminate is the key word in this paper. No one trains this way. Synthetic data and filtering out bad data are already essential steps in training, and they will remain so; with proper filtering and evaluation, models trained on synthetic data do better than their predecessors (see the sketch below).
This is not the end of AI, like so many wish it would be.
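A minimal sketch of that filtering step (purely illustrative: `filter_synthetic`, `score_fn`, and the threshold are assumptions, not anything from the paper):

```python
# Hedged sketch: keep model-generated samples only when a separate
# quality scorer rates them above a threshold. Any scorer fits here:
# a classifier, a reward model, a perplexity check, etc.
def filter_synthetic(samples, score_fn, threshold=0.8):
    """Return the generated samples that pass the quality check."""
    return [s for s in samples if score_fn(s) >= threshold]

# Usage with a hypothetical scorer:
# curated = filter_synthetic(drafts, quality_model.score)
```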
I don't think this is going to be the end of AI either, and the corpus of data from before AI-generated content became prevalent is also huge, so I don't think there's really a lack of training data. I personally think this is more interesting from the perspective of how these algorithms work in general. The fact that they end up collapsing when consuming their own content seems to indicate that the content they generate is fundamentally different in character from content generated by humans.
Yea, that's completely fair. I think AI models in general have lots of interesting characteristics that are very different from humans. I just see a lot of people drawing conclusions from papers like this that aren't justified.
Very much agree, and I find the widespread hatred of generative AI largely misguided to begin with. It's interesting technology that has useful applications. Most of the problems associated with it ultimately trace back to capitalism, as opposed to any inherent problem with LLMs themselves.
It's obvious enough that this will happen if you cycle one model's output through itself, but they looked at different types of models (LLMs, VAEs, and GMMs) and found the same collapse in all of them. I think that's a big finding.
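The GMM case is easy to reproduce in a few lines. A minimal sketch (not the paper's actual setup; the sample sizes and generation count are arbitrary assumptions) that refits a mixture to its own samples each generation:

```python
# Hedged sketch of self-consuming training: fit a Gaussian mixture,
# sample from the fit, refit on those samples, and repeat. Over
# generations the learned variances typically shrink and the tails
# vanish, i.e. the original distribution is gradually forgotten.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Generation 0: "real" data from two well-separated modes
data = np.concatenate([rng.normal(-3, 1, 5000),
                       rng.normal(3, 1, 5000)]).reshape(-1, 1)

for gen in range(10):
    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
    # The next generation trains only on this generation's output
    data = gmm.sample(10000)[0]
    print(f"gen {gen}: means={gmm.means_.ravel().round(2)}, "
          f"variances={gmm.covariances_.ravel().round(2)}")
```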
Rampancy
Rampancy, except instead of becoming hyperintelligent beyond moral constraints, you get dementia.
Look, Mack, I should be going; my wife, Dr. Jill Cortana, has to use the email.
> Recursively Generated Data
Aggression to the Mean
= The answer generated by the most sophisticated generative LLMs mankind will ever produce.