this post was submitted on 13 Jun 2025
93 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit


jesus this is gross man

[–] self@awful.systems 22 points 1 day ago (18 children)

centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

i won’t say that claude is conscious but i won’t say that it isn’t either and it’s always better to err on the side of caution

the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

schizoposting

fuck off with this

even if its wise imo to try not to be abusive to AI’s just incase

describe the “incase” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?

[–] swlabr@awful.systems 11 points 1 day ago (6 children)

Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.

[–] YourNetworkIsHaunted@awful.systems 11 points 1 day ago (1 children)

I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you're an asshole to the frontend there's a nonzero chance that a human person is still going to have to deal with it.

Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with "hello this is YourNet with $CompanyName Support." I'm not taking chances around unthinkingly answering an email with "alright you shitty robot. Don't lie to me or I'll barbecue this old commodore 64 that was probably your great uncle or whatever"

[–] Amoeba_Girl@awful.systems 5 points 1 day ago* (last edited 1 day ago)

Also it's simply bad to practice being cruel to a human-shaped thing.
