cross-posted from: https://ibbit.at/post/178862
Just as the community adopted the term "hallucination" to describe additive errors, we must now codify its far more insidious counterpart: semantic ablation.
Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).
During "refinement," the model gravitates toward the high-probability center of its output distribution, discarding "tail" data – the rare, precise, and complex tokens – to maximize likelihood. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.
When an author uses AI to "polish" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and "blood" reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished plastic shell: it looks "clean" to the casual eye, but its structural integrity – its "ciccia," its flesh – has been ablated in favor of a hollow, frictionless aesthetic.
We can measure semantic ablation through entropy decay: run a text through successive AI "refinement" loops and its vocabulary diversity (type-token ratio) collapses; a minimal measurement sketch follows the three stages below. The process performs a systematic lobotomy across three distinct stages:
Stage 1: Metaphoric cleansing. The AI identifies unconventional metaphors or visceral imagery as "noise" because they deviate from the training set's mean. It replaces them with dead, safe clichés, stripping the text of its emotional and sensory "friction."
Stage 2: Lexical flattening. Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.
Stage 3: Structural collapse. The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template. Subtext and nuance are ablated to ensure the output satisfies a "standardized" readability score, leaving behind a syntactically perfect but intellectually void shell.
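To make the measurement concrete, here is a minimal Python sketch of the entropy-decay check described above: compute the type-token ratio and the unigram Shannon entropy of a draft before and after "refinement." The sample strings and the crude whitespace tokenizer are illustrative assumptions, not anything from the original post; a real study would use a proper tokenizer and actual model output.

```python
# Minimal sketch of the entropy-decay measurement described above.
# The sample texts and the whitespace tokenizer are illustrative
# assumptions, not part of the original post.
import math
from collections import Counter

PUNCT = ".,;:!?\"'()"

def tokens(text: str) -> list[str]:
    # Crude lowercase split with punctuation stripped from word edges.
    return [t for t in (w.strip(PUNCT) for w in text.lower().split()) if t]

def type_token_ratio(text: str) -> float:
    # Unique tokens over total tokens: falls as vocabulary collapses.
    toks = tokens(text)
    return len(set(toks)) / len(toks)

def unigram_entropy(text: str) -> float:
    # Shannon entropy (bits/token) of the empirical unigram distribution.
    toks = tokens(text)
    n = len(toks)
    return -sum((c / n) * math.log2(c / n) for c in Counter(toks).values())

original = "The jagged Romanesque argument grinds, snags, and bleeds specificity."
refined = "The text is clear and simple. The text is easy to read and very clear."

for label, text in (("original", original), ("refined", refined)):
    print(f"{label}: TTR={type_token_ratio(text):.2f}, "
          f"entropy={unigram_entropy(text):.2f} bits/token")
```

Stage 2 is directly visible in this framing: swapping a 1-of-10,000 token for a 1-of-100 synonym drops its surprisal from log2(10000) ≈ 13.3 bits to log2(100) ≈ 6.6 bits, and the average of those per-token surprisals is exactly the entropy this sketch reports falling.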
The result is a "JPEG of thought" – visually coherent but stripped of its original data density through semantic ablation.
If "hallucination" describes the AI seeing what isn't there, semantic ablation describes the AI destroying what is. We are witnessing a civilizational "race to the middle," where the complexity of human thought is sacrificed on the altar of algorithmic smoothness. By accepting these ablated outputs, we are not just simplifying communication; we are building a world on hollowed-out syntax. If we don't start naming the rot, we will soon forget what substance even looks like.
Have you ever met someone and they seem cool and then about ten minutes in they drop something like "well I asked ChatGPT and..." and then you just mentally check out because fuck this asshole?
I had a friend who was incredibly creative. He did standup and painted and made short films and did photography and wrote fiction and just generally was always busy creating. He prided himself on being weird and original, sometimes at the expense of accessibility, but he had a very distinct voice. A year ago he went all in on AI everything and his output has just turned to mush. It’s heartbreaking.
That is scary. I have looked into using AI to help with writing a few times, and every time it has felt like it made me an actively worse writer. I could imagine also being pulled into a feedback loop of feeling like my work isn't good enough, so I get AI to "help" and actively get worse at writing as a result, and need to rely more on AI, ultimately ending up in a situation where I am no longer capable of actually creating things anymore.
It really does feel like anti-practice: it reinforces bad habits and actively unimproves skills instead of honing them. I've never seen an artist who started using AI more frequently (whether for written or drawn artwork) improve. They'd stagnate at best, and oftentimes they'd just use it as a "get rich quick" kind of thing; they always seem to try to monetise it, and their output would be 10x what it was, but with 1/10th the quality and self-expression that made their art compelling in the first place.
Problem I find is "AI" use in creative fields is very tempting on that basal, instant gratification, solves-your-creative-block level. I've had so many instances where I'm struggling to find a way to phrase something, or to write a narrative and I think for a split second "the slop machine could help, just a little won't hurt", but it weakens the creative skill by destroying that struggle and filling the gap with grey flavorless algorithmic paste.
I'm a shit writer, but I can say that when I saw my own ideas reflected back with the imperfect edges and identity sanded down, it was a sad imitation of my already amateur skill. I would hate to see that happen to someone who had developed a distinct style like your friend's.
luckily, i don't interact frequently with chatbot users. i know they exist, but i can't imagine interacting with one on purpose and asking it things. it's bad enough that i see my searches being turned into prompts that dump out stuff. i don't mind when it's some example of DAX code or a terminal command i can examine.
but these people who use it to do research and have it synthesize information, i cannot relate.
it takes shortcuts by cutting out details and making these broad generalizations to dump out heuristics that can be wildly inaccurate.
for more than a decade, my professional role has been the development of free, broadly applicable resources for lay audiences, built on detailed, narrow reference materials and my own subject matter expertise from many years of formal education and a wide range of hands-on experience.
i wasn't really worried about AI replacing me because i have a weird cluster of complementary resource development skills, but occasionally i have stumbled across generative resources in my field and they are embarrassing. like, just explicitly inaccurate and unhelpful. and even more hilariously, the people who make them try to charge for them.
if anything, my knowledge has become more valuable: there's so much misleading generative garbage online that people who want accurate information are more overwhelmed and frustrated than ever.
Yeah actually. It's happened to me a few times in the last year.
my coworker has fallen down this rabbit hole. it sucks too because i've spent years turning him away from the far right and he became chinapilled
but now it's just "i'll ask grok"
I ruin it for people by talking to their robot myself. These people have learned to tiptoe around its flaws and interpret that as it having none. Meanwhile I treat it like a redheaded step-mule and it never fails to disappoint.
would enjoy hearing a story or two about times this has worked. what is your strategy, do you borrow their phone or...
I just say "that's cool, let me talk to it" and they're usually excited to let you see how great their little magic box is. Then you ride it hard and make it embarrass itself over and over because it's a piece of shit and keep berating it for how shitty it is. They want to be defensive but it's plainly obvious that this thing can't even communicate as coherently as a seven year old and it takes some of the shine off.
As for examples, I'm pretty sure that everyone who I've done it to still uses it regularly but, importantly, none of them bring their AI assistants up to me anymore. They might not have changed their behavior but every time they see me they remember that I rubbed that thing's nose in itself and that's worth something.
what's a go-to line of questioning that makes it shit the bed
I watched this series with a guy asking LLMs to count to 100:
https://www.youtube.com/watch?v=5ZlzcjnFKvw
If it can fail at something so obvious, why would anyone trust it with anything they don't understand, where the mistakes will definitely be there but they can't see them?
It's like if someone lied straight to your face about stealing ten dollars, then you trust them to do your taxes.
(Note: even when it does manage to count (non-sequentially) to 100, it still fails because it repeats some numbers. On a surface level someone may look at the output, see 100 in the final place, and assume it was correct throughout; they'll pat themselves on the back and say "good on me for verifying" while the error is carried forward. So even when it's ostensibly right, it can still be wrong. I'm sure you know this, but this is how I'll break it down next time someone asks me to use an LLM to do maths.)
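To put that surface check in code: a minimal Python sketch contrasting the lazy "does it end at 100?" test with an actual verification of the full sequence. The llm_output list is an invented example of the repeated-number failure, not real model output.

```python
# Invented example of the failure mode: the sequence repeats 49 but
# still ends at 100, so a glance at the last value looks fine.
llm_output = list(range(1, 50)) + [49] + list(range(50, 100)) + [100]

# The "surface check" described above: the final value looks right.
print("ends at 100:", llm_output[-1] == 100)  # True

# The real check: exactly the sequence 1..100, no repeats, no gaps.
print("actually correct:", llm_output == list(range(1, 101)))  # False
```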
Yeh same. A coworker used to be really good at surfacing solutions from online forums; now she asks Copilot, which suggests obvious or incorrect solutions (that I've either already tried or know won't work), and I have to be like yep uhuh hrmm I'll try that (because she's my line manager).
Well tbh, AI slop and Google enshittification made it much harder to find solutions. Every nation that uses this dogshit is going to eat itself alive producing stupider and stupider generations until no one understands how water purification, agriculture, or electricity works anymore. Meanwhile, China will have trains that go 600km/h and maybe even fusion reactors.
Have you ever just googled something and it shoved an AI summary in your face that looked plausible enough to be accurate, and you shared that information with the caveat of "according to chatgpt" since it might be wrong, and then the other person just treated you like an asshole?
No, because I'm a thoughtful enough interlocutor not to function as a "let me bad-Google that for you" proxy in conversation. 😜
Oh look at you never needing to ever look up a fact about anything how thoughtful
"I'm simply too innocent and beautiful to know anything about that". Works every time.
I guess I painted with too broad a brush; I meant more a confident citation intended to be authoritative (or at least better than average) advice, not so much an "I just looked it up on web search, and let me make sure to advise that I'm looking at the slop thingie they put at the very top."