this post was submitted on 16 Feb 2026
99 points (99.0% liked)


cross-posted from: https://ibbit.at/post/178862

Just as the community adopted the term "hallucination" to describe additive errors, we must now codify its far more insidious counterpart: semantic ablation.

Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).

During "refinement," the model gravitates toward the center of the Gaussian distribution, discarding "tail" data – the rare, precise, and complex tokens – to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.
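The centripetal pull described above can be illustrated with a toy next-token distribution. A minimal sketch — the words and probabilities below are invented for illustration, not drawn from any real model: greedy decoding always emits the mode, so tail tokens are never produced no matter how much information they carry.

```python
import math

# Hypothetical next-token distribution: one common word dominates,
# while precise alternatives sit in the tail. (Illustrative numbers only.)
dist = {"use": 0.55, "apply": 0.25, "leverage": 0.12,
        "instantiate": 0.05, "ablate": 0.03}

# Greedy decoding: always pick the most probable token.
greedy_choice = max(dist, key=dist.get)

# Shannon entropy of the full distribution, in bits.
entropy = -sum(p * math.log2(p) for p in dist.values())

print(greedy_choice)        # "use" -- tail tokens like "ablate" can never surface
print(round(entropy, 2))    # the distribution itself still carries ~1.71 bits
```

The gap between the distribution's entropy and the zero-entropy greedy output is exactly the "tail data" the article says gets discarded.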

When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and "blood" reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks "clean" to the casual eye, but its structural integrity – its "ciccia" – has been ablated to favor a hollow, frictionless aesthetic.

We can measure semantic ablation through entropy decay. By running a text through successive AI "refinement" loops, the vocabulary diversity (type-token ratio) collapses. The process performs a systematic lobotomy across three distinct stages:

Stage 1: Metaphoric cleansing. The AI identifies unconventional metaphors or visceral imagery as "noise" because they deviate from the training set's mean. It replaces them with dead, safe clichés, stripping the text of its emotional and sensory "friction."

Stage 2: Lexical flattening. Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.

Stage 3: Structural collapse. The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template. Subtext and nuance are ablated to ensure the output satisfies a "standardized" readability score, leaving behind a syntactically perfect but intellectually void shell.
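Stage 2's substitution can be stated in bits. A token occupying a 1-in-10,000 slot carries about 13.3 bits of surprisal; its 1-in-100 replacement carries about 6.6 — the swap halves the information per token. A minimal sketch, using the article's own illustrative frequencies:

```python
import math

def surprisal_bits(p: float) -> float:
    # Information content (surprisal) of a token with probability p, in bits.
    return -math.log2(p)

rare = surprisal_bits(1 / 10_000)   # a precise, domain-specific term
common = surprisal_bits(1 / 100)    # its generic, high-frequency substitute

print(round(rare, 1), round(common, 1))  # 13.3 vs 6.6 bits per token
```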

The result is a "JPEG of thought" – visually coherent but stripped of its original data density through semantic ablation.
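The entropy-decay measurement mentioned earlier — vocabulary diversity collapsing under successive refinement — can be approximated with a type–token ratio. A minimal sketch, using an invented before/after pair standing in for a draft and its AI "polish" (no real model output involved):

```python
import re

def type_token_ratio(text: str) -> float:
    # Unique word forms (types) divided by total words (tokens).
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens)

# Hypothetical draft with high lexical diversity...
draft = ("The jagged vault groans under its own ciccia, "
         "each rib a wager against gravity and cliche.")
# ...and a flattened "refinement" that recycles the same safe words.
polished = ("The structure is very strong and very impressive, "
            "and it is built in a very careful way.")

print(round(type_token_ratio(draft), 2))     # every word distinct
print(round(type_token_ratio(polished), 2))  # repetition drags the ratio down
```

Running real drafts through successive refinement loops and plotting this ratio per pass would make the claimed collapse directly visible.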

If "hallucination" describes the AI seeing what isn't there, semantic ablation describes the AI destroying what is. We are witnessing a civilizational "race to the middle," where the complexity of human thought is sacrificed on the altar of algorithmic smoothness. By accepting these ablated outputs, we are not just simplifying communication; we are building a world on a hollowed-out syntax that has suffered semantic ablation. If we don't start naming the rot, we will soon forget what substance even looks like.

top 50 comments
[–] Self_Sealing_Stem_Bolt@hexbear.net 19 points 1 day ago (1 children)

This is made worse because of how illiterate westerners are too. If you can't edit the output of a chat bot, you can't tell how shit the output is. It's like when you see a social media post and it's clearly written by AI cause there's incomplete sentences, weird capitalizations, the overuse of lists that could just be items separated by commas, blatantly incorrect information, etc. It's maddening. I've received emails from new businesses trying to put themselves out there and it's all AI slop. There's a race to the bottom in our societies. Who can be the most lazy; who can think the least; who can put in the least amount of effort and still get everything they want. It's like those studies where they put people in an empty room, there's nothing but a table, a chair, and a button on the table. The button shocks you. And people will sit there the whole time shocking themselves instead of being alone with their thoughts. Why are westerners, or maybe this is a global phenomenon, so afraid of their own minds, thoughts, feelings, boredom? Do people really just want to be little pleasure piggies? Press button, gimme slop. Do people not like learning? Cause that's sad if they don't.

[–] SuperZutsuki@hexbear.net 6 points 12 hours ago* (last edited 12 hours ago)

They don't like learning because at some point in their past, learning got them in trouble, either with a bully in school or some authority figure. Anti-intellectualism is the dogma of American secular religion and it is strictly enforced by its adherents.

[–] miz@hexbear.net 63 points 1 day ago (4 children)

Have you ever met someone and they seem cool and then about ten minutes in they drop something like "well I asked ChatGPT and..." and then you just mentally check out because fuck this asshole?

[–] MeetMeAtTheMovies@hexbear.net 37 points 1 day ago (2 children)

I had a friend who was incredibly creative. He did standup and painted and made short films and did photography and wrote fiction and just generally was always busy creating. He prided himself on being weird and original, sometimes at the expense of accessibility, but he had a very distinct voice. A year ago he went all in on AI everything and his output has just turned to mush. It’s heartbreaking.

[–] Damarcusart@hexbear.net 9 points 21 hours ago

A year ago he went all in on AI everything and his output has just turned to mush.

That is scary. I have looked into using AI to help with writing a few times, and every time it has felt like it made me an actively worse writer. I could imagine also being pulled into a feedback loop of feeling like my work isn't good enough, so I get AI to "help" and actively get worse at writing as a result, and need to rely more on AI, ultimately ending up in a situation where I am no longer capable of actually creating things anymore.

It really does feel like anti-practice, that it reinforces bad habits and actively unimproves skills instead of honing them. I've never seen an artist who started using AI more frequently (whether for written or drawn artwork) who improved; they would stagnate at best, and oftentimes would just use it as a "get rich quick" kind of thing. They always seem to try to monetise it: their output would be 10x what it was, but with 1/10th the quality and self-expression that made their art compelling in the first place.

[–] Frivolous_Beatnik@hexbear.net 16 points 1 day ago

Problem I find is "AI" use in creative fields is very tempting on that basal, instant gratification, solves-your-creative-block level. I've had so many instances where I'm struggling to find a way to phrase something, or to write a narrative and I think for a split second "the slop machine could help, just a little won't hurt", but it weakens the creative skill by destroying that struggle and filling the gap with grey flavorless algorithmic paste.

I'm a shit writer but I can say that, when I saw my own ideas reflected back with the imperfect edges and identity sanded down, it was a sad imitation of my already amateur skill. I would hate to see it happen to someone who developed a distinct style like your friend

[–] came_apart_at_Kmart@hexbear.net 10 points 23 hours ago

luckily, I don't interact frequently with chatbot users. i know they exist, but i can't imagine interacting with one on purpose and asking it things. it's bad enough i see my searches being turned into prompts that dump out stuff. i don't mind when it's some example of DAX code or a terminal command i can examine.

but these people who use it to do research and have it synthesize information, i cannot relate.

it takes shortcuts by cutting out details and making these broad generalizations to dump out heuristics that can be wildly inaccurate.

for more than a decade, my professional role has been the development of free, broadly applicable resources for lay audiences built on detailed, narrow reference materials and my own subject matter expertise from many years of formal education and a wide range of hands on experience.

i wasn't really worried about AI replacing me because i have a weird cluster of complementary resource development skills, but occasionally i have stumbled across generative resources in my field and they are embarrassing. like just explicitly inaccurate and unhelpful. and even more hilariously, the people who make them try to charge for them.

if anything, my knowledge has become more valuable: there's so much misleading generative garbage online that people who want accurate information are more overwhelmed and frustrated than ever.

[–] LeeeroooyJeeenkiiins@hexbear.net 14 points 1 day ago (2 children)

Have you ever just googled something and it shoved an AI summary in your face that looked plausible enough to be accurate and you shared that information with the caveat of "according to chatgpt" since it might be wrong and then the other person just treated you like an asshole

[–] Speaker@hexbear.net 5 points 21 hours ago (1 children)

No, because I'm a thoughtful enough interlocutor not to function as a "let me bad-Google that for you" proxy in conversation. 😜

[–] LeeeroooyJeeenkiiins@hexbear.net 3 points 20 hours ago (1 children)

Oh look at you never needing to ever look up a fact about anything how thoughtful

[–] Speaker@hexbear.net 5 points 20 hours ago

"I'm simply too innocent and beautiful to know anything about that". Works every time.

[–] miz@hexbear.net 12 points 1 day ago* (last edited 1 day ago)

I guess I painted with too broad a brush, I meant more a confident citation intended to be authoritative (or at least better than average) advice, not so much an "I just looked it up on web search and let me make sure I advise that I'm looking at the slop thingie they put at the very top"

[–] Flyberius@hexbear.net 19 points 1 day ago (1 children)

Yeah actually. It's happened to me a few times in the last year.

[–] Des@hexbear.net 17 points 1 day ago (2 children)

my coworker has fallen down this rabbit hole. it sucks too because i've spent years turning him away from the far right and he became chinapilled

but now it's just "i'll ask grok" stalin-stressed

[–] SchillMenaker@hexbear.net 4 points 20 hours ago (1 children)

I ruin it for people by talking to their robot myself. These people have learned to tiptoe around its flaws and interpret that as it having none. Meanwhile I treat it like a redheaded step-mule and it never fails to disappoint.

[–] miz@hexbear.net 4 points 20 hours ago (1 children)

would enjoy hearing a story or two about times this has worked. what is your strategy, do you borrow their phone or...

[–] SchillMenaker@hexbear.net 6 points 19 hours ago (1 children)

I just say "that's cool, let me talk to it" and they're usually excited to let you see how great their little magic box is. Then you ride it hard and make it embarrass itself over and over because it's a piece of shit and keep berating it for how shitty it is. They want to be defensive but it's plainly obvious that this thing can't even communicate as coherently as a seven year old and it takes some of the shine off.

As for examples, I'm pretty sure that everyone who I've done it to still uses it regularly but, importantly, none of them bring their AI assistants up to me anymore. They might not have changed their behavior but every time they see me they remember that I rubbed that thing's nose in itself and that's worth something.

[–] miz@hexbear.net 6 points 19 hours ago* (last edited 19 hours ago)

what's a go-to line of questioning that makes it shit the bed

[–] KuroXppi@hexbear.net 4 points 1 day ago (1 children)

Yeh same. A coworker used to be really good at surfacing solutions from online forums, now she asks Copilot which suggests obvious or incorrect solutions (that I've either already tried or know won't work) and I have to be like yep uhuh hrmm I'll try that (because she's my line manager)

[–] SuperZutsuki@hexbear.net 3 points 12 hours ago* (last edited 12 hours ago)

Well tbh, AI slop and Google enshittification made it much harder to find solutions. Every nation that uses this dogshit is going to eat itself alive producing stupider and stupider generations until no one understands how water purification, agriculture, or electricity works anymore. Meanwhile, China will have trains that go 600km/h and maybe even fusion reactors.

[–] Euergetes@hexbear.net 18 points 1 day ago

An AI could never find a way to stick the stale grains of a bit into the heap of every fucking post garf-chan

[–] happybadger@hexbear.net 41 points 1 day ago (1 children)

It's short and this writer seems to be the one who coined the term, but I'm reposting it out of the aggregator instance because it's a really good term for something I didn't have a word for before. Something about AI writing even when the tell-tale signs are removed really stands out to me. When Walter Benjamin was studying the same kind of phenomenon with art in the 1930s, he described it as the cultic significance of a work that's lost when we industrially reproduce it. The individual oil painting is a museum exhibit or family heirloom, the Thomas Kinkade print is a single-serving plastic food container that hides empty wall space. Every LLM could write a thousand novels a second for a thousand years and none of them would be worth reading because there's no imagination behind them.

I like how it's technically represented here in simplifying processes.

Yeah, you can tell when something is ai cause its soulless. People who aren't creatives love this shit cause they never really engaged with art to begin with, it was always a commodity to hang on the wall or put on the bookshelf. Creatives cringe at ai "art" cause its not creative at all.

[–] FortifiedAttack@hexbear.net 18 points 1 day ago (2 children)

I don't really see what's more dangerous about this than what the business world has already been doing since long before AI. Everything is standardized, minimalist, and everyone is following this or that trend. Creativity was already actively discouraged in favor of following strict guidelines on how to do things. And AI is perfectly adequate to achieve this.

[–] happybadger@hexbear.net 23 points 1 day ago* (last edited 1 day ago)

Certainly, but prior to AI my neo-Luddite enemy was the business world. Corporate Memphis was the thing I attacked before image generators. It's a malignant outgrowth of the same demonic trend that compounds the Hapsburg imagery by treating those Corporate Memphis simulacra as art.

[–] chgxvjh@hexbear.net 10 points 1 day ago

Charlie Stross called corporations AI 8 years ago: https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future

[–] AlfalFaFail@lemmy.ml 20 points 1 day ago (4 children)
[–] BeanisBrain@hexbear.net 6 points 21 hours ago

The "ChatGPT" accusation also gets leveled at autistic people fairly often.

[–] happybadger@hexbear.net 18 points 1 day ago* (last edited 1 day ago)

I really like that parallel between formal academic English with its socioeconomic dimensions and algorithmically-generated English. To me there's a certain point where speaking a language becomes singing it. When I actually give a shit about how I'm writing, I think in terms of rhythm with the structure and melody with the word choice. There's a proper sense of consonance and dissonance in the way early 20th century composers used it. Even though I know French/Spanish/Romanian vocabulary and can functionally get around in countries that speak those languages, there's no way I could speak or write musically in them. If I know the strictest Academie Francaise standards for French it teaches me nothing about how to write poetically and I would always stand out from a single incorrect word unless I spent decades learning the nuances of the language in France. ESL speech patterns also really stand out to me as an externally reinforced rather than internally generated style.

[–] mickey@hexbear.net 8 points 1 day ago

I like this, I relate to this from the opposite side of the spectrum; when I've tried to relate e.g. a series of events as a story on here, it is very dry and precise because I want it to be as clear as possible. LLMs don't really write that way because they are meant to mimic human writing I suppose, but I can sound very terse and robotic.

[–] invalidusernamelol@hexbear.net 2 points 18 hours ago

It doesn't help that RLHF was largely done by educated people in the "former" colonies for a pittance.

[–] DragonBallZinn@hexbear.net 33 points 1 day ago (4 children)

Admittedly, I tried to give LLMs a real chance but all of them are just…so fucking cringe.

ChatGPT writes like Steven Universe decided to double down on patronizing. Gemini makes up words. Try to explain a point and ask it for criticism? It will describe anything it disagrees with as “the [x] trap.”

[–] Awoo@hexbear.net 9 points 1 day ago (1 children)

I can't use any of them because the way they pretend to be people instead of apps/tools pisses me off.

[–] AssortedBiscuits@hexbear.net 10 points 19 hours ago

I basically have to "preprompt" any prompt with "answer all following questions with the following format" and a massive list of what I specify AI can and cannot do. I have an entire section to get rid of its obnoxious attempts at passing for a human with personhood (do not use emojis, do not directly address me, do not be cordial, do not be polite, do not be friendly, do not answer in complete sentences). There's also a section on getting rid of obnoxious AI-isms (do not use em-dashes; do not use a long list of words overly used by AI; do not use the words "no", "not", "but" — which is there so the output doesn't do "it's not x, it's y").

The preprompt got too long for AI, so I had to dump it into a txt file and make AI read it before I would even want to use AI. And even then, I still have little use for AI lmao. But I guess "making AI not suck so hard" was a fun creative exercise.

[–] tocopherol@hexbear.net 16 points 1 day ago* (last edited 1 day ago)

I gave up on their creative use pretty much after my first try. I saw people making rap lyrics, I was intrigued, then realized it was absolutely impossible to get it to write anything besides a flow like "we do it like this / then do it like that / all those other guys are just wickedy wack" sort of cheesy-ass style. This was GPT 3.5 I think, I tried later ones and it was no better at all.

I'm not too worried about it replacing real art, the commercial 'creative' jobs like advertising music or illustrators are probably already being replaced, but even that style of art done by 'AI' is just so irritating to me and usually has some indefinable thing about it that makes it feel bad to look at versus actual illustrating.

[–] SoyViking@hexbear.net 9 points 1 day ago (2 children)

I ran the article through ChatGPT five times. It should be super-improved by now:

CW: AI slop

Here is a refined version that preserves your argument while tightening cadence, sharpening conceptual clarity, and reducing minor redundancies:


Semantic Ablation: Why AI Writing Is Boring — and Potentially Dangerous

The AI community coined hallucination to describe additive error — moments when a model fabricates what was never present. We lack a parallel term for its quieter, more insidious opposite: semantic ablation.

Semantic ablation is the algorithmic erosion of high-entropy meaning. It is not a malfunction but a structural consequence of probabilistic decoding and reinforcement learning from human feedback (RLHF). Where hallucination invents, semantic ablation subtracts. It removes precisely what carries the greatest informational weight.

In the act of “refinement,” a model gravitates toward the statistical center of its distribution. Rare, high-precision tokens — those inhabiting the long tail — are replaced with safer, more probable alternatives. Safety and helpfulness tuning intensify this centripetal pull, penalizing friction and rewarding fluency. The result is not falsehood but attenuation: low perplexity purchased at the cost of semantic density.

When an author asks AI to “polish” a draft, the apparent improvement is often compression. High-entropy clusters — loci of originality, tension, or conceptual risk — are smoothed into statistically reliable phrasing. A jagged Romanesque vault becomes a polished Baroque façade of molded plastic: immaculate in finish, hollow in load-bearing strength. The surface gleams; the structure no longer carries weight.

Semantic ablation can be understood as entropy decay. Pass a text through successive AI refinements and its informational variance contracts. Vocabulary diversity narrows. Type–token ratios decline. Syntactic range constricts. The process typically unfolds in three stages:


Stage I: Metaphoric Cleansing

Unconventional metaphors and vivid imagery deviate from distributional norms and are treated as noise. They are replaced with familiar constructions. Emotional friction is sterilized.

Stage II: Lexical Flattening

Specialized terminology and high-precision diction yield to common synonyms in the name of accessibility. A one-in-ten-thousand word becomes a one-in-one-hundred substitute. Semantic mass diminishes; specificity thins.

Stage III: Structural Convergence

Nonlinear reasoning and idiosyncratic argumentative architecture are coerced into predictable templates. Subtext is over-explained or erased. Ambiguity is prematurely resolved. The prose becomes syntactically impeccable yet intellectually inert.


The finished product resembles a JPEG of thought: coherent at a glance, depth stripped away by compression.

If hallucination is the model perceiving what does not exist, semantic ablation is the model erasing what does. The danger is not merely aesthetic monotony but epistemic smoothing. As refinement is outsourced to systems optimized for statistical centrality, discourse drifts toward the median. Originality becomes an outlier. Complexity dissolves into algorithmic smoothness.

If we fail to name this process, we risk acclimating to it. And once acclimated, we may forget what uncompressed thought feels like.

[–] starweasel@hexbear.net 7 points 23 hours ago

thanks i hate it

i think this process is exactly what makes me so mad about ai generated slop, it reads like fucking conservapedia (which itself reads like basically anything written by a fundie ever, in my personal experience).

[–] BountifulEggnog@hexbear.net 7 points 1 day ago

Grok please summarize this too many word

[–] JDvecna@hexbear.net 14 points 1 day ago (2 children)

Good article, thank for share. Would have loved to see an exemplar text excerpt go through "refinement" to prove the author's point

[–] catter@hexbear.net 14 points 1 day ago (1 children)

This and some citations would've made this a really valuable article. Hopefully this idea will get refined a bit with better support.

[–] Carl@hexbear.net 17 points 1 day ago (1 children)

@grok improve this article with some citations and examples

[–] catter@hexbear.net 7 points 1 day ago

@grok present this in podcast form
