The author claims that the issue is that pieces in which — appears are more likely to be "high brow" or "literary" writing, which is a fact the model has access to. When they trained it, they didn't just feed it all the text as one undifferentiated mass: there was a bunch of manual human curation on the back end that told the model which sources were "high quality" literary writing, which were argumentative, which were academic, which were casual, and so on. That creates a set of biases in the data set, which is good! You don't want your model to weight professional writing and 4chan comments the same, because weighting them equally would bias the data set the other way: there's a lot more "casual" low-quality internet text than professional-level writing in the training data, just on a word-by-word basis.

But given that manual curation and hierarchy, the model extracted patterns we weren't expecting it to, and applied them to its output in ways that don't quite hit the mark of the intended task. It noticed that — is overrepresented in high-quality writing compared to -- or -, so when you ask it to produce high-quality writing, it just uses — a lot. It doesn't know anything about why — might appear more often in its high-quality samples; it just reproduces the statistical features of the text, because that's what it does.
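To make that concrete, here's a toy sketch of the kind of conditional statistic being described (the mini-corpus, labels, and numbers are invented for illustration, not taken from any real training pipeline): if the hand-curated "high quality" sources happen to use — more often, the frequency of that character given the quality label is already skewed before the model has learned anything else.

```python
# Toy illustration: hand-labelled snippets, then the em-dash rate per label.
# If "high_quality" sources use '—' more often, that association is sitting
# right there in the conditional statistics for a model to pick up.
from collections import Counter

# Hypothetical mini-corpus with curation labels attached by hand.
corpus = [
    ("high_quality", "The argument — though elegant — rests on a single assumption."),
    ("high_quality", "History repeats itself — first as tragedy, then as farce."),
    ("casual",       "lol yeah that's fair - i didn't think of it that way"),
    ("casual",       "idk, seems fine to me tbh"),
]

chars = Counter()
dashes = Counter()
for label, text in corpus:
    chars[label] += len(text)
    dashes[label] += text.count("—")

for label in chars:
    rate = dashes[label] / chars[label]
    print(f"{label}: {rate:.4f} em dashes per character")
```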
This is a pretty classic overfitting problem. We ran into the same issue with early image classification models. There's one famous case in which a model that looked like it had gotten very good at discriminating between cancerous and benign skin moles from a photograph fell flat once it left the training set. Closer investigation showed that it was basing most of its determination on the quality of the light in the photo: most of the "cancerous" training images were shot in a clinical setting with harsh, cool-temperature lighting, while most of the benign images were shot in more naturalistic settings with warmer light. So it decided that bright, cool, institutional lighting was a feature worth looking for, because (again) it doesn't know anything. All it can do is pull out statistical features of the data it's been fed: when it does that in the way we want, we call it a success, and when it does it in a way we don't want, we call it overfitting.
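For what it's worth, here's a minimal, self-contained sketch of that failure mode on synthetic data (the feature names and numbers are made up, not drawn from the actual study): a nuisance feature standing in for colour temperature is correlated with the label during training but not in deployment, so the classifier looks great on its own data and falls apart afterwards.

```python
# Synthetic "lighting shortcut" demo: train with a confounded nuisance feature,
# then evaluate on data where that correlation no longer holds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(confounded: bool):
    # y = 1 stands for "cancerous" in this toy setup.
    y = rng.integers(0, 2, size=n)
    # A weakly informative feature of the lesion itself.
    lesion_feature = y + rng.normal(0, 2.0, size=n)
    if confounded:
        # Clinical vs. naturalistic lighting tracks the label in training.
        colour_temp = y + rng.normal(0, 0.2, size=n)
    else:
        # In deployment, lighting is unrelated to the label.
        colour_temp = rng.normal(0.5, 0.5, size=n)
    X = np.column_stack([lesion_feature, colour_temp])
    return X, y

X_train, y_train = make_data(confounded=True)
X_test, y_test = make_data(confounded=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # looks impressive
print("test accuracy:", clf.score(X_test, y_test))     # drops toward chance
```

The model leans on the lighting stand-in because it's the strongest statistical signal available, exactly the "it doesn't know anything" point above.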
Thanks for taking the time to explain. Having read your comment and thought about it some more, I guess I can see how the thing not learning what you would ideally want it to learn (i.e. writing "good"), and instead just superficially mimicking what good-quality writing looks like, fits the definition of overfitting.
I guess I'm not expecting it to even be able to do this, though; what I expect from the thing is exactly to produce some shallow mimicry. I'm actually impressed it managed to figure out these superficial things. Learning that the em dash is associated with quality isn't wrong, you would want it to learn that.
Also, if the training data is tagged, would it stop using the em dash if the correct tags ("email", "reddit", ...) were used in the prompt?