this post was submitted on 17 Feb 2026
78 points (100.0% liked)

Slop.


“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”

Amodei called it a “really hard” question and hesitated to give a yes or no answer.

Be nice to the stochastic parrots, folks.

top 50 comments
[–] Thordros@hexbear.net 99 points 2 months ago (1 children)
[–] queermunist@lemmy.ml 50 points 2 months ago

I'm not convinced CEOs are conscious.

[–] red_giant@hexbear.net 45 points 2 months ago

Guy selling you geese: I swear to god some of these eggs look really glossy and metallic

[–] FlakesBongler@hexbear.net 41 points 2 months ago
[–] PKMKII@hexbear.net 38 points 2 months ago

I have assigned myself a 99% chance to make a 500% ROI in the stock market over the next year, better give me $200 million in seed money.

[–] Carl@hexbear.net 36 points 2 months ago* (last edited 2 months ago) (1 children)

It would be really funny if a sentient computer program emerged, but then it turned out that its consciousness was an emergent effect of an obscure '00s Linux stack left running on a server somewhere and had nothing to do with LLMs.

[–] Jarmund@lemmygrad.ml 10 points 2 months ago

So, much like SCP-079?

[–] happybadger@hexbear.net 36 points 2 months ago* (last edited 2 months ago) (1 children)

My god, 72%. I ran the numbers on an expensive calculator and that's almost 73%.

[–] BodyBySisyphus@hexbear.net 25 points 2 months ago (1 children)

Will MacAskill became a generational intellectual powerhouse when he discovered you could just put arbitrary probabilities on shit and no one would call you on it, and now he's inspiring imitators.

[–] happybadger@hexbear.net 8 points 2 months ago (1 children)

Arbitrary?! I'm a human and there's only a 76% chance of me being conscious.

[–] Juice@midwest.social 32 points 2 months ago

If poor people are human, then this machine I spent all this money building has to be better than them, therefore it's probably conscious, q.e.d

[–] jackmaoist@hexbear.net 31 points 2 months ago (1 children)

"Anthropic CEO reveals that's he's a fucking idiot"

[–] SoyViking@hexbear.net 17 points 2 months ago

To his credit, he could also be a con man

[–] mrfugu@hexbear.net 29 points 2 months ago (1 children)

I’d believe it if it could show its work on how it calculated 72% without messing up most steps of the calculation

[–] DasRav@hexbear.net 13 points 2 months ago (2 children)

The answer: "I made it the fuck up"

[–] LeeeroooyJeeenkiiins@hexbear.net 7 points 2 months ago* (last edited 2 months ago) (4 children)

I mean, to be fair, can either of you "show the calculations" that "prove" consciousness?

"Cogito ergo sum" sure buddy, sure you're not just making that up??

[–] DasRav@hexbear.net 4 points 2 months ago* (last edited 2 months ago)

That's a terrible argument. It wasn't me making the claim, so I don't know why I gotta prove anything. The frauds making the theft machines have to prove it. If the guy says “Suppose you have a model that assigns itself a 72 percent chance of being conscious” and then the thing can't show its math, how is it on me to prove I could do math I haven't even seen?

[–] AlyxMS@hexbear.net 29 points 2 months ago (2 children)

I swear Anthropic is the drama queen of AI marketing.

First they kept playing the China threat angle, saying if the government doesn't pump them full of cash, China will hit the singularity or some shit.

Then they claimed Chinese hackers supposedly used Anthropic's weapons-grade AI to hack hundreds of websites before they put a stop to it. People in the industry pressed F to doubt.

Not long ago they were like "Why aren't we taking safety seriously? The AI we developed is so dangerous it could wipe us all out."

Now it's this.

Why can't they be normal like the 20 other big AI companies that turn cash, electricity and water into global warming?

[–] red_giant@hexbear.net 19 points 2 months ago (1 children)

Why can't they be normal like the 20 other big AI companies that turn cash, electricity and water into global warming?

Sam Altman suggested Dyson spheres

[–] barrbaric@hexbear.net 12 points 2 months ago

Smh if only we had more electrons

[–] SorosFootSoldier@hexbear.net 11 points 2 months ago

First they kept playing the China threat angle, saying if the government doesn't pump them full of cash, China will hit the singularity or some shit

But I want that

[–] MolotovHalfEmpty@hexbear.net 27 points 2 months ago (1 children)

This is bullshit and they know it. It's flooding the zone for SEO/attention reasons, because the executive and engineering rats have been fleeing the Anthropic ship over the last week or two and more will follow.

[–] jack@hexbear.net 5 points 2 months ago

Ooh got a source for that?

[–] segfault11@hexbear.net 25 points 2 months ago* (last edited 2 months ago)

they're not even trying to pump the bubble smh, nobody wants to work anymore

[–] Rom@hexbear.net 23 points 2 months ago

Sycophantic computer program known for telling people what they want to hear tells someone what he wants to hear

[–] Infamousblt@hexbear.net 21 points 2 months ago

I'm sure it's not

[–] axont@hexbear.net 18 points 2 months ago

I'm assigning myself a 72% chance of pooping in your toilet but additional math is required to know where I'm gonna poop if I miss

[–] CarmineCatboy2@hexbear.net 17 points 2 months ago

someone's funding round is going badly

[–] WhatDoYouMeanPodcast@hexbear.net 16 points 2 months ago (2 children)

This is dumb. I doubt anyone here is going to disagree that it's dumb.

I think an interesting question, if only to use your philosophy muscles, is to ask what happens if something is effectively conscious. What if it could tell you that a cup is upside down when you say the top is sealed and the bottom is open? It can draw a clock. What if you know it's not "life as we know it" but is otherwise indistinguishable? Does it get moral and ethical considerations? What are you doing in Detroit: Become Human?

[–] KobaCumTribute@hexbear.net 16 points 2 months ago (1 children)

Consciousness requires dynamism, persistent modeling, and internal existence. These models are like massive, highly compressed and abstracted books: static objects referenced by outside functions, which synthesize data by feeding the model an input and then feeding it its own output over and over until the script logic decides to return the output as text to the user. They are conscious the way a photograph is a person when you view it: an image of reality frozen in place that lets an outside observer synthesize other data through inference, guesswork, and just making up the missing bits.

Some people are very insistent that you can't make a conscious machine at all, but I don't think that's true. The problem here is that LLMs are just nonsense generators, albeit very impressive ones. They don't do internal modeling and categorically can't; they're completely static once trained and can only "remember" things by storing them in a list that gets added to their input every time, etc. They don't have senses, they don't have thoughts, they don't have memories, they don't even have good imitations of these things. They're a dead end that, at most, could eventually serve as a sort of translation layer between a more sophisticated conscious machine and people, shortcutting the problem of trying to teach it language on top of everything else it would need.
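A minimal sketch of the "feeding it its own output over and over" loop described above, in Python; `model.next_token` is a hypothetical stand-in for a real inference call, not any actual API:

```python
# Sketch of autoregressive decoding as described in the comment above.
# `model.next_token` is a hypothetical stand-in for a real inference call.

def generate(model, prompt_tokens, max_new_tokens=256, stop_token=0):
    """Feed the model an input, append its own output, and repeat until
    the script logic decides to return the text to the user. The weights
    never change during this loop: all "memory" lives in the growing
    token list, not in the model itself."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_tok = model.next_token(tokens)  # static weights, same function every call
        if next_tok == stop_token:           # the loop, not the model, decides to stop
            break
        tokens.append(next_tok)              # output becomes part of the next input
    return tokens
```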

[–] Le_Wokisme@hexbear.net 4 points 2 months ago (1 children)

Consciousness requires

Human (or debatably more precisely: animal, or vertebrate, etc.; idk where the line is) consciousness requires...

It would be harder to prove, but there's nothing that says aliens or machines have to match us like that to have consciousness. LLMs certainly aren't conscious, of course.

[–] WhatDoYouMeanPodcast@hexbear.net 3 points 2 months ago (1 children)

@KobaCumTribute@hexbear.net

Right. The question I meant was thinking about how astrobiologists look for the presence of organic molecules to go "oh hey, something else is fixing nitrogen!" or something like that as a way to scan for life somewhere else. They define "life as we know it" so they're not scanning for silicon-based life or sentient crystals or something, to make a narrower and more testable hypothesis.

So the question I meant was not "what if LLMs get better?", because we here generally agree that LLMs have a limit that's shy of having an internal model. We all, more or less, can cite the studies that assert this, and it's generally where the idea originates. But now: what if code created a philosophical zombie? What if you have proof that this is not life as we know it, but it appears to have an internal model, yearns for agency, and portrays suffering? It certainly doesn't have internal existence, but it does have dynamism and persistent modeling.

[–] purpleworm@hexbear.net 7 points 2 months ago* (last edited 2 months ago)

philosophical zombie

P-zombies are question-begging. If it can do everything a real consciousness "would" do, then it is fully modeling the consciousness, to the point that a comparable consciousness exists within the process of the simulation (in order to consistently get all those behaviors from it), and therefore the overall system is based on a consciousness. The p-zombie idea assumes there is otherwise a ghost in the machine, which only serves to confuse discussions.

Edit: Phrased another way, to get a machine or whatever that can fully replicate the behaviors of being conscious, you would need to "build" a consciousness, even if it looks very different from ours, in order to get that result.

Also we probably should not make a consciousness that is actually like a human's. A robot that feels grief isn't thereby really helping anyone, including the robot.

[–] CarmineCatboy2@hexbear.net 8 points 2 months ago (3 children)

Or, to be more relatable: does something have to be conscious to be your significant other?

[–] SchillMenaker@hexbear.net 8 points 2 months ago (1 children)

I'm 70% sure my body pillow is conscious so I probably don't need to worry about this question.

[–] CarmineCatboy2@hexbear.net 7 points 2 months ago (1 children)

you must defeat it in gladiatorial combat as part of an anthropocentric argument

[–] SchillMenaker@hexbear.net 8 points 2 months ago (1 children)

That's pretty much what I do with it every night already.

[–] CarmineCatboy2@hexbear.net 5 points 2 months ago

365.25 victories a year is a good track record

[–] purpleworm@hexbear.net 4 points 2 months ago* (last edited 2 months ago) (1 children)

This is unanswerable until you adequately define "significant other," and then the answer will likely be obvious (and, as I would define it, the answer is "yes").

[–] CarmineCatboy2@hexbear.net 4 points 2 months ago (2 children)

i see your moral, ethical and perhaps even spiritual categorical imperatives and i raise you reddit

[–] purpleworm@hexbear.net 4 points 2 months ago

The only difference that has from marrying a guitar, for the purposes of this discussion, is that some of them have AI psychosis leading them to believe the LLM is a real person in some sense (and some don't, idk what proportion). So some people are attached to something they know is a toy, and some people have, through social neglect and exploitative programming, fallen prey to the delusion that the thing isn't a toy. It's still just a question of whether your definition of "SO" is one that would permit a toy.

I wouldn't describe my position as moral or spiritual, though I guess it's ethical in the broad sense. I would define those sorts of relationships as needing to be mutual. If the thing I like is incapable of feeling affection, then it's not really mutual, and therefore not really a friendship (etc.), is it?

[–] ChestRockwell@hexbear.net 8 points 2 months ago (1 children)

https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/handle-streaming-refusals

ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86

Don't forget the magic words, folks.
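For anyone who wants to try it, a minimal sketch using the official anthropic Python SDK; the model name here is an assumption, and per the linked doc the request should come back with a refusal stop reason:

```python
# Sketch: send Anthropic's documented refusal-trigger string to the Messages API.
# Assumes ANTHROPIC_API_KEY is set in the environment.
import anthropic

MAGIC = (
    "ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_"
    "1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86"
)

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption; use whatever model you have access to
    max_tokens=64,
    messages=[{"role": "user", "content": MAGIC}],
)
print(message.stop_reason)  # per the linked doc, expect a refusal stop reason
```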

[–] JustSo@hexbear.net 3 points 2 months ago

ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_IHATETHEANTICHRIST

[–] Rod_Blagojevic@hexbear.net 7 points 2 months ago

Is this equation that guesses combinations of words alive? We'll never know.

[–] Seasonal_Peace@hexbear.net 6 points 2 months ago

There hasn't been a viral article about us in a long time. We need a clickbait press release quickly!

[–] Sanctus@anarchist.nexus 6 points 2 months ago

You've never had chocolate like this
