TinyTimmyTokyo

When this was first posted I too was curious about the book series. It appears that nearly every book in the series is authored by academics affiliated with Indian universities. Modi's government has promoted and invested heavily in AI.

I call bullshit on Daniel K. That backtracking is so obviously an ex-post-facto cover-your-ass woopsie-doopsie. Expect more of it as we get closer to whatever new "median" he has suddenly claimed. It's going to be fun to watch.

[–] TinyTimmyTokyo@awful.systems 24 points 3 days ago

I have no doubt that a chatbot would be just as effective at doing Liuson's job, if not moreso. Not because chatbots are good, but because Liuson is so bad at her job.

That thread is wild. Nate proposes techniques to get his kooky beliefs taken more seriously. Others point out that those very same techniques counterproductively pushed people into the e/acc camp. Nate deletes those other people's comments. How rationalist of him!

 

Nate Soares and Big Yud have a book coming out. It's called "If Anyone Builds It, Everyone Dies". From the names of the authors and the title of the book, you already know everything you need to know about its contents without having to read it. (In fact, given the signature prolixity of the rationalists, you can be sure that it says in 50,000 words what could just as easily have been said in 20.)

In this LessWrong post, Nate identifies the real reason the rationalists have been unsuccessful at convincing people in power to take the idea of existential risk seriously. The rationalists simply don't speak with enough conviction. They hide the strength of their beliefs. They aren't bold enough.

As if rationalists have ever been shy about stating their kooky beliefs.

But more importantly, buy his book. Buy so many copies of the book that it shows up on all the best-seller lists. Buy so many copies that he gets invited to speak on fancy talk shows that will sell even more books. Basically, make him famous. Make him rich. Make him a household name. Only then can we make sure that the AI god doesn't kill us all.

Nice racket.

[–] TinyTimmyTokyo@awful.systems 18 points 4 days ago* (last edited 4 days ago) (2 children)

People are often overly confident about their imperviousness to mental illness. In fact I think that -- given the right cues -- we're all more vulnerable to mental illness than we'd like to think.

Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of "self-experimentation" that exposes us to psychological risks we aren't even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.

[–] TinyTimmyTokyo@awful.systems 9 points 2 months ago (1 children)

ChatGPT tells prompter that he's brilliant for his literal "shit on a stick" business plan.

[–] TinyTimmyTokyo@awful.systems 5 points 2 months ago

Not surprised to find Sabine in the comments. She's been totally infected by the YouTube algorithm and captured by her new culture-war-mongering audience. Kinda sad, really.

[–] TinyTimmyTokyo@awful.systems 1 point 2 months ago

We should be trying to stop this from coming to pass with the urgency we would try to stop a killer asteroid from striking Earth. Why aren’t we?

Wait, what are we trying to stop from coming to pass? Superintelligent AIs? Either I'm missing his point, or he really agrees with the doomers that LLMs are on their way to becoming "superintelligent".

[–] TinyTimmyTokyo@awful.systems 10 points 2 months ago (4 children)

Why do AI company logos look like buttholes?

(Blog post written by a crypto-turned-AI bro, but the observation is amusing.)

[–] TinyTimmyTokyo@awful.systems 15 points 2 months ago

Maybe Elon can install Grok as the copilot of his private jets.

[–] TinyTimmyTokyo@awful.systems 6 points 2 months ago

Check out the by-line. Big surprise!

[–] TinyTimmyTokyo@awful.systems 15 points 2 months ago

"Thought process"

"Intuitively"

"Figured out"

"Thought path"

I miss the days when the consensus reaction to Blake Lemoine was to point and laugh. Now the people anthropomorphizing linear algebra are being taken far too seriously.

 

The tech bro hive mind on HN is furiously flagging (i.e., voting into invisibility) any submissions dealing with Tesla, Elon Musk or the Kafkaesque US immigration detention situation. Add "/active" to the URL to see.

The site's moderator says it's fine because users are "tired of the repetition". Repetition of what exactly? Attempts to get through the censorship wall?

 

Sneerclubbers may recall a recent encounter with "Tracing Woodgrains", né Jack Despain Zhou, the rationalist-infatuated former producer and researcher for "Blocked and Reported", a podcast featuring prominent transphobes Jesse Singal and Katie Herzog.

It turns out he's started a new venture: a "think-tank" called the "Center for Educational Progress." What's this think-tank's focus? Introducing eugenics into educational policy. Of course they don't put it in those exact words, but that's the goal. The co-founder of the venture is Lillian Tara, former executive director of Pronatalist.org, the outfit run by creepy Harry Potter look-alikes (and a moderately frequent topic in this forum) Simone and Malcolm Collins. According to the anti-racist activist group Hope Not Hate:

The Collinses enlisted Lillian Tara, a pronatalist graduate student at Harvard University. During a call with our undercover reporter, Tara referred three times to her work with the Collinses as eugenics. “I don’t care if you call me a eugenicist,” she said.

Naturally, the CEP is concerned about IQ and wants to ensure that mentally superior (read: white) individuals don't have their hereditarily-deserved resources unfairly allocated to the poors and the stupids. They have a reading list on their Substack, which includes people like Arthur Jensen and LessWrong IQ-fetishist Gwern.

So why are Trace and Lillian doing this now? I suppose they're striking while the iron is hot, probably hoping to get some sweet sweet Thiel-bucks as Elon and his goon-squad do their very best to gut public education.

And more proof for the aphorism: "Scratch a rationalist, find a racist".

 

In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending "The Curve", a recent conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session, Casey said:

That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of super intelligent AI.

His view is that there is almost no scenario in which we could build a super intelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.

People fired a bunch of questions at him. And we should say, he's a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.

[...]

Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we're going to get into a world where these models are incredibly powerful.

And all that stuff just turned out to be true. So, that's why they have credibility with me, right? Everything they believe, you know, we could hit some sort of limit that they didn't see coming.

Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they've built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?

 

Excerpt:

A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.

 

Maybe she was there to give Moldbug some relationship advice.

 

Pass the popcorn, please.

(nitter link)

 

Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.

 

[All non-sneerclub links below are archive.today links]

Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

As you can see, he's really into superlatives. And Jordan Peterson:

Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish like Elon Musk:

Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

And:

That leaves us with, you guessed, a metric ton of men who are no longer in families.

Yep, I guessed about 12 men.

 

Excerpt:

Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.

He's also a big eugenics supporter:

“There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight.com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in the interests of society at large, one could argue that we could on the exact same principle sterilize those who are bound to harm future generations through giving birth.”

(Reminds me a lot of the things Scott Siskind has written in the past.)

Some people who have been friendly with Hanania:

  • Marc Andreessen, Silicon Valley VC and co-founder of Andreessen Horowitz
  • Hamish McKenzie, CEO of Substack
  • Elon Musk, Chief Enshittification Officer of Tesla and Twitter
  • Tyler Cowen, libertarian econ blogger and George Mason University prof
  • J.D. Vance, US Senator from Ohio
  • Steve Sailer, race (pseudo)science promoter and all-around bigot
  • Amy Wax, racist law professor at UPenn
  • Christopher Rufo, right-wing agitator and architect of many of Florida governor Ron DeSantis's culture war efforts
 

Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.
