cross-posted from: https://ibbit.at/post/178862
Just as the community adopted the term "hallucination" to describe additive errors, we must now codify its far more insidious counterpart: semantic ablation.
Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).
During "refinement," the model gravitates toward the dense center of its output distribution, discarding "tail" data – the rare, precise, and complex tokens – to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output destroys the unique signal.
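The mechanism is easy to illustrate with a toy decoder. Greedy decoding always emits the argmax token, so a rare, high-information "tail" word can never surface no matter how much signal it carries. A minimal sketch (the probabilities and candidate words are invented for illustration, not from any real model):

```python
import math

# Toy next-token distribution for a slot like "the structure was ..."
# All probabilities are invented for illustration.
next_token_probs = {
    "beautiful": 0.40,   # bland, high-probability "center" token
    "impressive": 0.30,
    "striking": 0.20,
    "romanesque": 0.07,  # rare, precise "tail" token
    "cyclopean": 0.03,
}

def greedy_decode(probs):
    """Always return the single most probable token (argmax)."""
    return max(probs, key=probs.get)

def surprisal_bits(probs, token):
    """Information content of a token: -log2 p(token)."""
    return -math.log2(probs[token])

chosen = greedy_decode(next_token_probs)
print(chosen)                                                    # beautiful
print(round(surprisal_bits(next_token_probs, "beautiful"), 2))   # 1.32 bits
print(round(surprisal_bits(next_token_probs, "romanesque"), 2))  # 3.84 bits
```

The argmax token carries roughly a third of the information of the tail token, and greedy decoding picks it every single time; that asymmetry, repeated over thousands of slots, is the erosion described above.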
When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and "blood" reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks "clean" to the casual eye, but its structural integrity – its "ciccia," its meat – has been ablated in favor of a hollow, frictionless aesthetic.
We can measure semantic ablation through entropy decay: run a text through successive AI "refinement" loops and its vocabulary diversity (type-token ratio) collapses. The process performs a systematic lobotomy across three distinct stages:
Stage 1: Metaphoric cleansing. The AI identifies unconventional metaphors or visceral imagery as "noise" because they deviate from the training set's mean. It replaces them with dead, safe clichés, stripping the text of its emotional and sensory "friction."
Stage 2: Lexical flattening. Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.
Stage 3: Structural collapse. The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template. Subtext and nuance are ablated to ensure the output satisfies a "standardized" readability score, leaving behind a syntactically perfect but intellectually void shell.
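The entropy-decay measurement mentioned above is easy to operationalize: compute the type-token ratio after each "refinement" pass and watch it fall. A minimal sketch, with three invented drafts standing in for successive AI passes (a naive whitespace tokenizer, not a real NLP pipeline):

```python
def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: unique words / total words (naive tokenizer)."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

# Invented example: an original draft and two progressively "polished" passes.
drafts = [
    "the jagged romanesque stonework bleeds friction into every clause",
    "the rough old stonework adds a lot of texture to the writing",
    "the writing is very good and the style is very good",
]

for i, draft in enumerate(drafts):
    print(f"pass {i}: TTR = {type_token_ratio(draft):.2f}")
# pass 0: TTR = 1.00
# pass 1: TTR = 0.92
# pass 2: TTR = 0.64
```

Note that raw TTR is sensitive to text length, so a real measurement would compare drafts of similar size or use a length-normalized variant; the toy drafts here are kept roughly equal on purpose.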
The result is a "JPEG of thought" – visually coherent but stripped of its original data density through semantic ablation.
If "hallucination" describes the AI seeing what isn't there, semantic ablation describes the AI destroying what is. We are witnessing a civilizational "race to the middle," where the complexity of human thought is sacrificed on the altar of algorithmic smoothness. By accepting these ablated outputs, we are not just simplifying communication; we are building a world on a hollowed-out syntax. If we don't start naming the rot, we will soon forget what substance even looks like.
Admittedly, I tried to give LLMs a real chance but all of them are just…so fucking cringe.
ChatGPT writes like Steven Universe decided to double down on patronizing. Gemini makes up words. Try to explain a point and ask it for criticism? It will describe anything it disagrees with as “the [x] trap.”
I gave up on their creative use pretty much after my first try. I saw people making rap lyrics and was intrigued, then realized it was absolutely impossible to get it to write anything besides a flow like "we do it like this / then do it like that / all those other guys are just wickedy wack" sort of cheesy-ass style. This was GPT-3.5, I think; I tried later ones and they were no better at all.
I'm not too worried about it replacing real art. The commercial 'creative' jobs like advertising music or illustration are probably already being replaced, but even that style of art done by 'AI' is just so irritating to me, and it usually has some indefinable quality that makes it feel bad to look at compared to actual illustration.
I can't use any of them because the way they pretend to be people instead of apps/tools pisses me off.
I basically have to "preprompt" every prompt with "answer all following questions in the following format" plus a massive list of what the AI can and cannot do. There's an entire section to get rid of its obnoxious attempts at passing for a human with personhood (do not use emojis, do not directly address me, do not be cordial, do not be polite, do not be friendly, do not answer in complete sentences). There's also a section on getting rid of obnoxious AI-isms (do not use em-dashes; do not use the following words, which is a long list of words AI overuses; do not use the words no, not, but, which is there so the output can't do the "it's not x, it's y" thing).
The preprompt got too long for the AI, so I had to dump it into a txt file and make it read that before I'd even use the thing. And even then, I still have little use for AI lmao. But I guess "making AI not suck so hard" was a fun creative exercise.
My sole use for chatgpt is to generate lists to brainstorm.
You can prompt it to stop doing these things if you notice it, and it will generally work. Quite useful if something pisses you off about its output.