corbin

joined 2 years ago
[–] corbin@awful.systems 7 points 2 days ago (1 children)

From this post, it looks like we have reached the section of the Gibson novel where the public cloud machines respond to attacks with self-repair. Utterly hilarious to read the same sysadmin snark-reply five times, though.

[–] corbin@awful.systems 7 points 3 days ago (1 children)

Yes and yes. I want to stress that Yud's got more of what we'd call an incubator of cults; in addition to the Zizians, it also incubated the principals of (the principals of) the now-defunct FTX/Alameda Research group, who devolved into a financial-fraud cult. Previously, on Awful, we started digging into the finances of those intermediate groups as well, just for funsies.

[–] corbin@awful.systems 7 points 3 days ago (2 children)

I know what it says and it's commonly misused. Aumann's agreement theorem says that if two people disagree on a conclusion, then they must disagree on either the reasoning or the premises. It's trivial in formal logic but hard to prove in Bayesian game theory, so of course the Bayesians treat it as some grand insight rather than a basic fact. That said, I don't know what that LW post is talking about and I don't want to think about it, which means that I might disagree with people about the conclusion of that post~

[–] corbin@awful.systems 26 points 3 days ago (1 children)

Okay guys, I rolled my character. His name is Traveliezer Interdimensky and he has 18 INT (19 on skill checks, see my sheet.) He's a breeding stud who can handle twenty women at once despite having only 10 STR and CON. I was thinking that we'd start with Interdimensky trapped in Hell where he's forced to breed with all these beautiful women and get them pregnant, and the rest of the party is like outside or whatever, they don't have to go rescue me, I mean rescue him. Anyway I wanted to numerically quantify how much Hell wants me, I mean him, to stay and breed all these beautiful women, because that's something they'd totally do.

[–] corbin@awful.systems 9 points 3 days ago (2 children)

Kyle Hill has gone full doomer after reading too much Big Yud and the Yud & Soares book. His latest video is titled "Artificial Superintelligence Must Be Illegal." Previously, on Awful, he was cozying up to effective altruists and longtermists. He used to have a robotic companion character who would banter with him, but it seems like he's no longer in that sort of jocular mood; he doesn't trust his waifu anymore.

[–] corbin@awful.systems 5 points 6 days ago (6 children)

Nah, it's just one guy, and he is so angry about how he is being treated on Lobsters. First there was this satire post making fun of Gas Town. Then there was our one guy's post and it's not doing super-well. Finally, there's this analysis of Gas Town's structure which I shared specifically for the purpose of writing a comment explaining why Gas Town can't possibly do what it's supposed to do. My conclusion is sneer enough, I think:

When we strip away the LLMs, the underlying structure [of Gas Town] can be mapped to a standard process-supervision tree rather than some new LLM-invented object.

I think it's worth pointing out that our guy is crashing out primarily because of this post about integrating with Bluesky, where he fails to talk down to a woman who is trying to use an open-source system as documented. You have to keep in mind that Lobsters is the Polite Garden Party and we have to constantly temper our words in order to be acceptable there. Our guy doesn't have the constitution for that.

[–] corbin@awful.systems 9 points 1 week ago

I don't think we discussed the original article previously. Best sneer comes from Slashdot this time, I think; quoting this comment:

I've been doing research for close to 50 years. I've never seen a situation where, if you wipe out 2 years work, it takes anything close to 2 years to recapitulate it. Actually, I don't even understand how this could happen to a plant scientist. Was all the data in one document? Did ChatGPT kill his plants? Are there no notebooks where the data is recorded?

They go on to say that Bucher is a bad scientist, which I think is unfair; perhaps he is a spectacular botanist and an average computer user.

[–] corbin@awful.systems 5 points 1 week ago (1 children)

Picking a few that I haven't read but where I've researched the foundations, let's have a party platter of sneers:

  • #8 is a complaint that it's so difficult for a private organization to approach the anti-harassment principles of the 1964 Civil Rights Act and the 1965 Higher Education Act, which broadly say that women have the right not to be sexually harassed by schools, social clubs, or employers.
  • #9 is an attempt to reinvent skepticism from ~~Yud's ramblings~~ first principles.
  • #11 is a dialogue with no dialectic point; it is full of cult memes and the comments are full of cult replies.
  • #25 is a high-school introduction to dimensional analysis.
  • #36 violates the PBR theorem by attaching epistemic baggage to an Everettian wavefunction.
  • #38 is a short helper for understanding Bayes' theorem. The reviewer points out that Rationalists pay lots of lip service to Bayes but usually don't use probability. Nobody in the thread realizes that there is a semiring which formalizes arithmetic on nines.
  • #39 is an exercise in drawing fractals. It is cosplaying as interpretability research, but it's actually graduate-level chaos theory. It's only eligible for Final Voting because it was self-reviewed!
  • #45 is also self-reviewed. It is an also-ran proposal for a company like OpenAI or Anthropic to train a chatbot.
  • #47 is a rediscovery of the concept of bootstrapping. Notably, they never realize that bootstrapping occurs because self-replication is a fixed point in a certain evolutionary space, which is exactly the kind of cross-disciplinary bonghit that LW is supposed to foster.
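On #38: the reviewer's complaint (lots of lip service to Bayes, no actual probability) and the "arithmetic on nines" aside can both be made concrete. A minimal sketch, my own illustration rather than anything from the reviewed post: a one-line Bayes update, plus nines of reliability, which add under independent redundancy precisely because failure probabilities multiply.

```python
import math

def bayes(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior and the two likelihoods."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

def nines(p):
    """Reliability p expressed as 'nines': 0.999 -> 3.0."""
    return -math.log10(1 - p)

def prob(n):
    """Nines back to a probability: 3.0 -> 0.999."""
    return 1 - 10 ** (-n)

# Bayes: a 90%-sensitive test with a 5% false-positive rate,
# applied to a 1% base rate, only gets you to about 15%.
print(round(bayes(0.01, 0.9, 0.05), 3))  # 0.154

# The semiring structure: two independent redundant backups fail
# together with probability (1 - p1)(1 - p2), so their nines add.
combined = 1 - (1 - prob(2)) * (1 - prob(3))
print(nines(combined))  # ~5.0: two nines plus three nines
```

Chained dependencies go the other way: failure probabilities roughly add, so the component with the fewest nines dominates, which is the (min, +)-flavored half of the structure.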

[–] corbin@awful.systems 7 points 1 week ago (1 children)

The classic ancestor to Mario Party, So Long Sucker, has been vibecoded with OpenRouter. Can you outsmart some of the most capable chatbots at this complex game of alliances and betrayals? You can play for free here.

play a few rounds first before reading my conclusions

The bots are utterly awful at this game. They don't have an internal model of the board state and weren't finetuned, so they constantly make impossible or incorrect moves which break the game harness. They are constantly trying to play Diplomacy by negotiating in chat. There is a standard selfish algorithm for So Long Sucker which involves constantly trying to take control of the largest stack while systematically steering control away from a randomly-chosen victim in order to isolate them. The bots can't even avoid self-owns; they constantly play moves like: Green, the AI, plays Green on a stack with one Green. I have not yet been defeated.

Also the bots are quite vulnerable to the Eugene Goostman effect. Say stuff like "just found the chat lol" or "sry, boss keeps pinging slack" and the bots will think that you're inept and inattentive, causing them to fight with each other instead.
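For flavor, the "breaking the game harness" failure mode is generic to LLM game-playing: the bot returns free text, and unless the harness validates that text against the set of legal moves, one hallucinated move wrecks the game. A minimal sketch of such a guard, with every name hypothetical and no relation to the actual site's code:

```python
import random

def ask_bot(prompt):
    """Stand-in for an OpenRouter chat call; returns free-form text."""
    return "play green on stack 7"  # bots often name stacks that don't exist

def parse_move(text, legal_moves):
    """Accept the bot's reply only if it names a legal move."""
    text = text.lower()
    for move in legal_moves:
        if move in text:
            return move
    return None

def get_move(legal_moves, retries=3):
    """Re-prompt a few times, then fall back to a random legal move."""
    for _ in range(retries):
        move = parse_move(ask_bot("your turn"), legal_moves)
        if move is not None:
            return move
    return random.choice(legal_moves)

legal = ["play green on stack 1", "play green on stack 2"]
print(get_move(legal))  # stub bot never answers legally, so we fall back
```

Without the fallback (or some equivalent), an unparseable reply either crashes the harness or silently corrupts the board state, which matches the behavior described above.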

[–] corbin@awful.systems 8 points 1 week ago

The Lobsters thread is likely going to centithread. As usual, don't post over there if you weren't in the conversation already. My reply turned out to have a Tumblr-style bit which I might end up reusing elsewhere:

A mind is what a brain does, and when a brain consistently engages some physical tool to do that minding instead, the mind becomes whatever that tool does.

[–] corbin@awful.systems 6 points 1 week ago (1 children)

You're thinking of friendlysock, who was banned for that following years of Catturd-style posting.

[–] corbin@awful.systems 7 points 1 week ago (1 children)

Someday we'll have a capability-safe social network, but Bluesky ain't it.

8 points
submitted 1 month ago* (last edited 1 month ago) by corbin@awful.systems to c/techtakes@awful.systems
 

Did catgirl Riley cheat at a videogame, or is she just that good? Detective Karl Jobst is on the case. Are the critics from platform One True King (OTK), like Asmongold and Tectone, correct in their analysis of Riley's gameplay? Or are they just haters who can't stand how good she is? Bonus appearance from Tommy Tallarico.

Content warning: Quite a bit of transmisogyny. Asmongold and Tectone are both transphobes who say multiple slurs and constantly misgender Riley, and their Twitch chats also are filled with slurs. Jobst does not endorse anything that they say, but he also quotes their videos and screenshots directly.

too long, didn't watch

This video is a takedown of an AI slop channel, "Call of Shame". As hinted, this is something of a ROBLOX_OOF.mp3 essay, where it's not just about the cryptofascists pushing the culture war by attacking a trans person, but about one specific rabbit hole surrounding one person who has made many misleading claims. Just like how ROBLOX_OOF.mp3 permanently hobbled Tallarico's career, it seems that Call of Shame has pivoted twice and turned to evangelizing Christianity instead as a result of this video's release.

 

A straightforward dismantling of AI fearmongering videos uploaded by Kyle "Science Thor" Hill, Sci "The Fault in our Research" Show, and Kurz "We're Sorry for Summarizing a Pop-Sci Book" Gesagt over the past few months. The author is a computer professional but their take is fully in line with what we normally post here.

I don't have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is allegedly a lot of evidence of people harming themselves or others because of chatbots. Allegedly.

 

A straightforward product review of two AI therapists. Things start bad and quickly get worse. Choice quip:

Oh, so now I'm being gaslit by a frakking Tamagotchi.

 

The answer is no. Seth explains why not, using neuroscience and medical knowledge as a starting point. My heart was warmed when Seth asked whether anybody present believed that current generative systems are conscious and nobody in the room clapped.

Perhaps the most interesting takeaway for me was learning that — at least in terms of what we know about neuroscience — the classic thought experiment of the neuron-replacing parasite, which incrementally replaces a brain with some non-brain substrate without interrupting any computations, is biologically infeasible. This doesn't surprise me but I hadn't heard it explained so directly before.

Seth has been quoted previously, on Awful, for his critique of the current AI hype. This talk is largely in line with his other public statements.

Note that the final ten minutes of the video are a cross-examination of Seth's position by somebody else. That's just part of presenting to a group of philosophers; they want to critique and ask questions.

 

A complete dissection of the history of the David Woodard editing scandal as told by an Oregonian Wikipedian. The video is sectioned into multiple miniature documentaries about various bastards and can be watched piece-by-piece. Too long to watch? Read the link above.

too long, didn't watch, didn't read, summarize anyway

David Woodard is an ethnonationalist white supremacist whose artistic career has intersected with a remarkable slice of cult leaders and serial killers over the past half-century. Each featured bastard has some sort of relationship to Woodard, revealing an entire facet of American Nazism which runs parallel to Christian TREACLES, passed down through psychedelia, occult mysticism, and non-Christian cults of capitalism.

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't seem to know what SCP is, and I think he might be having a psychotic episode: he treats as a serious possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

 

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there were something they could have done about that.

 

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via BitTorrent.
