this post was submitted on 07 Dec 2025

technology

Archive link

Actually decent article from the New York Crimes on AI-generated text.

Philosoraptor@hexbear.net · 11 points · 1 month ago

The author's claim is that pieces in which — appears are more likely to be "high brow" or "literary" writing, which is a fact the model has access to. When they trained it, they didn't just feed it all the text as one undifferentiated mass: there was a bunch of manual human curation on the back end that told the model which sources were "high quality" pieces of literary writing, which were argumentative, which were academic, which were casual, and so on. That builds a set of biases into the data set, which is good! You don't want your model to weight professional writing and 4Chan comments the same, because leaving the data unweighted is itself a bias: on a word-by-word basis, there's a lot more "casual" low-quality internet text in the training data than professional-level writing.

But given that manual curation and hierarchy, the model extracted patterns we weren't expecting it to and applied them to its output in ways that don't quite hit the mark of the intended task. It noticed that — is overrepresented in high-quality writing compared to -- or -, so when you ask it to produce high-quality writing, it just uses — a lot. It doesn't know anything about why — might appear more often in its high-quality samples; it just reproduces the statistical features of the text, because that's what it does.
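
A minimal sketch of that last point, purely for illustration (the tiny "corpus" and its register labels below are invented): if you count em dashes per register in labelled text, the "literary" slice shows a much higher rate, and a generator that only reproduces those per-register statistics will overshoot the character whenever it's asked for literary-sounding output.

```python
# Made-up miniature "corpus" with register labels, standing in for the
# curated training data described above. Real corpora are vastly larger;
# the texts and the resulting rates are invented purely to illustrate.
corpus = {
    "literary": [
        "The sea was calm — almost too calm — and she waited.",
        "He paused — not for effect, but because there was nothing left to say.",
    ],
    "casual": [
        "lol the sea was like... really calm i guess",
        "he just stopped talking - kinda awkward tbh",
    ],
}

def em_dash_rate(texts):
    """Em dashes per 100 words: a crude statistical feature of a register."""
    words = sum(len(t.split()) for t in texts)
    dashes = sum(t.count("—") for t in texts)
    return 100 * dashes / words

for register, texts in corpus.items():
    print(register, round(em_dash_rate(texts), 1))

# A generator that only matches these per-register statistics will, when
# asked for "literary" output, emit em dashes at the literary rate without
# any notion of why literary writers use them.
```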

This is a pretty classic overfitting problem. We ran into the same issue with early image classification models. There's one famous case in which a model that looked like it had gotten very good at discriminating between cancerous and benign skin moles from a photograph fell flat after training. Closer investigation showed that it was basing most of its determination on the quality of the light in the photo: most of the "cancerous" training images were shot in a clinical setting with harsh, cool-temperature lighting, while most of the benign images were shot in more naturalistic settings with warmer light. So it decided that bright, cool, institutional lighting was a feature worth looking for, because (again) it doesn't know anything. All it can do is pull out statistical features of the data it's been fed: when it does that in the way we want, we call it a success, and when it does it in a way we don't want, we call it overfitting.
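
A toy version of that shortcut-learning failure, with synthetic data (the features, numbers, and correlation strengths are all made up): the "lighting" feature is almost perfectly confounded with the label in the training set, so a simple classifier leans on it and falls apart once the confound is broken.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical setup mirroring the mole anecdote: "irregularity" is the
# genuinely informative feature, "lighting" is a confound introduced by
# how the two halves of the training set were photographed.
y = rng.integers(0, 2, n)                      # 0 = benign, 1 = cancerous
irregularity = y + rng.normal(0.0, 1.0, n)     # real signal, but noisy
lighting = y + rng.normal(0.0, 0.2, n)         # near-perfect proxy for the label

X_train = np.column_stack([irregularity, lighting])
clf = LogisticRegression().fit(X_train, y)
print("weights [irregularity, lighting]:", clf.coef_[0].round(2))

# New clinic, same disease: the biological signal is unchanged, but the
# lighting no longer tracks the diagnosis at all.
irregularity_new = y + rng.normal(0.0, 1.0, n)
lighting_new = rng.normal(0.5, 0.2, n)
X_test = np.column_stack([irregularity_new, lighting_new])

print("train accuracy:", round(clf.score(X_train, y), 2))
print("test accuracy: ", round(clf.score(X_test, y), 2))
```

On this invented data the learned weight on the lighting feature dwarfs the weight on irregularity, and accuracy drops sharply once the lighting/label correlation disappears, which is the toy analogue of the mole classifier falling flat outside the clinic.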

trompete@hexbear.net · 1 point · 1 month ago (edited)

Thanks for taking the time to explain. Having read your comment and thought about it some more, I guess I can see how the thing not learning what you'd ideally want it to learn (i.e. writing "good") and instead just superficially mimicking what good-quality writing looks like fits the definition of overfitting.

I guess I wasn't expecting it to be able to do even that much, though; what I expect from the thing is exactly that kind of shallow mimicry. I'm actually impressed it managed to pick up on these superficial cues. Learning that the em dash is associated with quality isn't wrong; you would want it to learn that.

Also, if the training data is tagged, would it stop overusing the em dash if the appropriate tags ("email", "reddit", ...) were used in the prompt?
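
One way to picture that question is with an explicit per-tag punctuation table. This is not how a real LLM stores or conditions on style; it's only a hypothetical stand-in for tag-conditioned statistics, with invented rates, to show how conditioning on an "email" or "reddit" tag would pull the expected em-dash count down.

```python
# Invented per-register rates; no real model exposes a table like this.
em_dashes_per_word = {
    "literary": 0.040,
    "email":    0.002,
    "reddit":   0.001,
}

def expected_em_dashes(tag: str, n_words: int) -> float:
    """Expected em-dash count in n_words of output conditioned on `tag`."""
    return em_dashes_per_word[tag] * n_words

for tag in em_dashes_per_word:
    print(f"{tag:8s} ~{expected_em_dashes(tag, 500):.1f} em dashes per 500 words")
```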