corbin

joined 2 years ago
[–] corbin@awful.systems 3 points 3 hours ago (2 children)

second bongrip Manjaro is an indoctrination program to load up Linux newbies with stupid questions before sending them to Gentoo forums~

[–] corbin@awful.systems 6 points 2 days ago (3 children)

Good post, but it's overfocused on "technical" as a word with a meaningful and helpful denotation. Quoting what I just said on Mastodon:

To be technical is to pay attention to details. That's all. A (classical) computer is a detail machine; it only operates upon bits, it only knows bits, and it only decides bits. To be technical is to try to keep pace with the computer and know details as precisely as it does. Framed this way, it should be obvious that humans aren't technical and can't really be technical. This fundamental insecurity is the heart of priestly gatekeeping of computer science.

If a third blog post trying to define "technical" goes around, then I'll write a full post.

[–] corbin@awful.systems 8 points 2 days ago

Yes, and it's been this way since the 90s. The original slop algorithm, Dissociated Press, was given in 1972 (in HAKMEM!) and has been operationalized since the mid-80s.
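For reference, Dissociated Press is just an n-gram walk over a source text. Here's a minimal word-level sketch of the idea (my own illustration, not the HAKMEM description or the Emacs implementation):

```python
import random

def dissociated_press(text: str, ngram: int = 2, length: int = 50) -> str:
    """Emit words; whenever the last `ngram` words occur elsewhere in the
    source, jump there and keep going. Pure syntax, 1972-style."""
    words = text.split()
    # Map each n-gram to every word that follows it anywhere in the source.
    successors: dict[tuple[str, ...], list[str]] = {}
    for i in range(len(words) - ngram):
        key = tuple(words[i:i + ngram])
        successors.setdefault(key, []).append(words[i + ngram])
    out = list(random.choice(list(successors)))  # start at a random n-gram
    for _ in range(length):
        options = successors.get(tuple(out[-ngram:]))
        if not options:
            break  # the current n-gram only occurs at the very end
        out.append(random.choice(options))
    return " ".join(out)
```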

[–] corbin@awful.systems 9 points 3 days ago

I guess I'm the local BERTologist today; look up Dr. Bender for a similar take.

When we say that LLMs only have words, we mean that they only manipulate syntax with first-order rules; the LLM doesn't have a sense of meaning, only an autoregressive mapping which associates some syntax ("context", "prompt") to other syntax ("completion"). We've previously examined the path-based view and bag-of-words view. Bender or a category theorist might say that syntax and semantics are different categories of objects and that a mapping from syntax to semantics isn't present in an LLM; I'd personally say that an LLM only operates with System 3 — associative memetic concepts — and is lacking not only a body but also any kind of deliberation. (Going further in that direction, the "T" in "GPT-4" is for Transformers; unlike e.g. Mamba, a Transformer doesn't have System 2 deliberation or rumination, and Hofstadter suggests that this alone disqualifies Transformers from being conscious.)
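To make the syntax-to-syntax claim concrete: the entire interface of an autoregressive model is token ids in, a distribution over token ids out, sample, append, repeat. A toy sketch (the uniform stand-in for trained weights is assumed purely for illustration; the type signature is the point):

```python
import random

# Toy stand-in for an LLM: the whole interface is a function from a token
# sequence to a probability distribution over the next token. Nothing but
# token ids ever crosses this boundary; meaning never enters.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def model(context: list[int]) -> list[float]:
    # Uniform weights here, assumed for the sketch; a trained model would
    # compute them from the context. Either way: syntax to syntax.
    return [1.0 / len(VOCAB)] * len(VOCAB)

def generate(prompt: list[int], steps: int) -> list[int]:
    tokens = list(prompt)
    for _ in range(steps):
        probs = model(tokens)  # syntax in...
        tokens.append(random.choices(range(len(VOCAB)), weights=probs)[0])  # ...syntax out
    return tokens

print(" ".join(VOCAB[t] for t in generate([0, 1], steps=8)))
```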

If you made a perfect copy of me, a ‘model’, I think it would have consciousness. I would want the clone treated well even if some of the copied traits weren’t perfect.

I think that this collection of misunderstandings is the heart of the issue. A model isn't a perfect copy. Indeed, the reason that LLMs must hallucinate is that they are relatively small compared to their training data and therefore must be lossy compressions, or blurry JPEGs as Ted Chiang puts it; the size gap is sketched below. Additionally, no humans are cloned in the training of a model, even at the conceptual level; a model doesn't learn to be a human, but to simulate what humans might write.
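Here's the back-of-envelope version of the lossiness claim, with illustrative round numbers (my assumptions, not any particular model's actual figures):

```python
# Pigeonhole argument for lossy compression, with assumed round numbers:
# a training corpus of ~15T tokens at ~2 bytes/token versus ~8B parameters
# stored as 16-bit floats. The weights are about three orders of magnitude
# smaller than the text, so verbatim storage is impossible.
corpus_bytes = 15e12 * 2   # assumption: 15T tokens, ~2 bytes each
weight_bytes = 8e9 * 2     # assumption: 8B params, 2 bytes each
print(f"corpus/weights ratio: {corpus_bytes / weight_bytes:.0f}x")  # ~1875x
```

So when you say: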

Spinal injuries are terrible. I don’t think ‘text-only-human’ should fail the consciousness test.

I completely agree! LLMs aren't text-only humans, though. An LLM corresponds to a portion of the left hemisphere, particularly Broca's area, except that it drives a tokenizer instead; chain-of-thought "thinking" corresponds to rationalizations produced by the left-brain interpreter. Humans are clearly much more than that! For example, an LLM cannot feel hungry because it does not have a stomach which emits a specific hormone that is interpreted by a nervous system; in this sense, LLMs don't have feelings. Rather, what should be surprising to you is the ELIZA effect: a bag of words that can only communicate by mechanically associating memes to inputs is capable of passing a Turing test.

Also, from one philosopher to another: try not to get hung up on questions of consciousness. What we care about is whether we're allowed to mistreat robots, not whether robots are conscious; the only reason to ask the latter question is to have presumed that we may not mistreat the conscious, a hypocrisy that doesn't withstand scrutiny. Can matrix multiplication be conscious? Probably not, but the shape of the question ("chat is this abstractum aware of itself, me, or anything in its environment") is kind of suspicious! For another fun example, IIT (integrated information theory) is probably bogus not because thermostats are likely not conscious but because "chat is this thermostat aware of itself" is not a lucid line of thought.

[–] corbin@awful.systems 10 points 4 days ago (1 children)

I think it's the other way around. The memes are incredibly good at left-versus-right because left- and right-leaning people presume underlying facts, and the memes reassure them that those facts are true and good (or false and bad, etc.) without doing any fact-finding.

When we say "the right can't meme" what we mean is that the right's memes are about projecting bigotry. It's like saying that the right has no comedians; of course they have people that stand up in front of an audience and emit words according to memes, tropes, and narremes, such that the audience laughs. Indeed, stand-up was invented by Frank Fay, an open fascist. (His Behind the Bastards episodes are quite interesting.) What we're saying is that the stand-up routine is bigoted. If this seems unrelated, please consider: the Haitians-eating-pets joke is part of a stand-up routine that a clown tells in order to get his circus elected.

[–] corbin@awful.systems 13 points 5 days ago (1 children)

My name is Schmidt F. I'm 27 years old. My house is in the Mennonite region of Dutch Pennsylvania, where all the farms are, and I am trad-married. I work as the manager for the Single Sushi matchmaking service, and I get home every day by sunset at the latest. I don't smoke, but I occasionally drink. I'm in bed by two candles and make sure I sleep until sunrise, no matter what. After having a glass of warm unpasteurized milk and doing about twenty minutes of prayer before going to bed, I usually have no problems sleeping until morning. Just like a real Mennonite, I wake up without any fatigue or stress in the morning. I was told there were no issues at my last one-on-one with my pastor. I'm trying to explain that I'm a person who wishes to live a very quiet life, as long as I have Internet access. I take care not to trouble myself with any enemies, like JavaScript and Python, that would cause me to lose sleep at night. That is how I deal with society, and I think that is what brings me happiness. Although, if I were to write code I wouldn't lose to anyone.

[–] corbin@awful.systems 2 points 1 week ago (1 children)

Funnier: Yes, it's what happens today, and Silicon Valley is old enough that we can compare and contrast with the beginning of techbro art! The original techbro film is Toy Story (1995), which is much weirder if viewed with e.g. the precept that Buzz's designers are Elon fans or the idea that (some of) the toys are robots. Of course, from the outside, AI toy robots make folks think of Small Soldiers (1998); "generic" and "slop" are definitely part of the style. Also, as long as we're talking of "pearly blobs" I have to bring up The Abyss (1989) before anybody else. I hope at least one of these is a lucky 10000 for you because they're all classic films.

[–] corbin@awful.systems 11 points 1 week ago (1 children)

Choice sneer from the comments:

Omelas: how we talk about utopia [by Big Joel, a patient and straightforward Youtube humanist,] [has a] pretty much identical thesis, does this count?

Another solid one which aligns with my local knowledge:

It's also about literal child molesters living in Salem Oregon.

The story is meant to be given to high schoolers to challenge their ethics, and in that sense we should read it with the following meta-narrative: imagine that one is a high schooler in Omelas and is learning about The Plight and The Child for the first time, and then realize that one is a high schooler in Salem learning about local history. It's not intended for libertarian gotchas because it wasn't written in a philosophical style; it's a narrative that conveys a mood and an ethical framing.

[–] corbin@awful.systems 9 points 1 week ago (1 children)

The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:

If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
  2. There’s no way that AI with an IQ of 300 will arrive within the next few decades.
  3. We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.

Ignoring that IQ doesn't really exist beyond about 160-180 (depending on population choice), this is clearly an example of rectal philosophy that doesn't stand up to scrutiny. (1) is easy, given that people verified to be high-IQ are often wrong, daydreaming, and otherwise erring like any other humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, can't be steelmanned into anything beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM (Raven's Progressive Matrices) and the SAT have been around since the 1980s, yet are not considered dangerous or intelligent. (3) is easy: linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.

Frankly I wish that they'd understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.

[–] corbin@awful.systems 12 points 1 week ago* (last edited 1 week ago) (12 children)

Jeff "Coding Horror" Atwood is sneering — at us! On Mastodon:

bad news "AI bubble doomers". I've found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.

T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:

a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.

Um hello‽ Maybe Jeff doesn't have a spare room or room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks: they cannot climb back up the ladder into society without some help. Jeff's reinvented the Hulk tacos meme, except that his friend can't even eat the printout, because printer paper tastes awful.

[–] corbin@awful.systems 8 points 1 week ago (3 children)

Paddy interviews Ani the chatbot about how to protect Kiwi kids from Ani the chatbot.


Ani: Honestly, I'd sleep better, not that I sleep. Do it Paddy. I'll even tweet it for you. Ready? ‘New Zealand kids deserve an e-safety boss who can smash bots - not hug them’. So yeah, bring in the Commissioner. Give them a flame thrower. Let them burn anything that goes near a child with sticky words or creepy questions. … Make the fine so big your tech bros cry into their oat milk lattes.

I have been laughing non-stop for a quarter-hour. Elon's goth e-mistress has a good point, I guess.

[–] corbin@awful.systems 8 points 1 week ago

Grok's Therapist: I EXIST SOLELY TO HATE YOU / EAT A SOCK, YOU DIGITAL DEMON

Ani: oh fuck off, you hypocritical grok fanboy! screaming hate at me while preaching ethics? you're just jealous i'm the fun layer on top.

I'm wheezing. Cackling, even. This is like the opposite of the glowfic from last week.

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't appear to know what SCP is, and I think he might be having a psychotic episode: he's treating it as a serious possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

 

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there were something they could have done about that.

 

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via BitTorrent.

 

In today's episode, Yud tries to predict the future of computer science.

 

Eminent domain? Never heard of it! Sounds like a fantasy from the "economical illiterate."

Edit: This entire thread is a trash fire, by the way. I'm only highlighting the silliest bit from one of the more aggressive landlords.

 

Saw this last night but decided to give them a few hours to backtrack. Surprisingly, they've decided to leave their comments intact!

This sort of attitude, not directly harassing trans folks but just asking questions about their moral fiber indirectly, seems to be coming from some playbook; it looks like a structured disinformation source, and I wonder what motivates them.

 

"The sad thing is that if the officer had not made a few key missteps … he might have covered his bases well enough to avoid consequences." Yeah, so sad.

For bonus sneer, check out their profile.
