CinnasVerses

joined 5 months ago
[–] CinnasVerses@awful.systems 11 points 13 hours ago* (last edited 13 hours ago)

Also likely to be concentrated in the USA, whose government is helpfully screaming at the rest of the world "disconnect your economies and your IT systems from us!" Most of us are busy doing that as fast as possible, although it takes a while to get everyone on board.

It's a very useful skill when reading the internet to learn to see when "in the US and UK" or "in SoCal and London" should be appended to a sentence. Once you see it you can't unsee it.

[–] CinnasVerses@awful.systems 9 points 20 hours ago

Since it comes up in one in 216 characters with random stats rolled on 3d6, shouldn't INT 18 be three standard deviations above the norm?
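Here is a quick check of that arithmetic, a minimal sketch in plain Python (the only inputs are the 3d6 mechanics and a standard normal curve for comparison):

```python
# 3d6 has 6**3 = 216 equally likely outcomes; only (6, 6, 6) gives 18.
from itertools import product
from statistics import NormalDist

rolls = [sum(dice) for dice in product(range(1, 7), repeat=3)]
p18 = rolls.count(18) / len(rolls)                                # 1/216, about 0.46%

mean = sum(rolls) / len(rolls)                                    # 10.5
sd = (sum((r - mean) ** 2 for r in rolls) / len(rolls)) ** 0.5    # about 2.96

print(f"P(3d6 = 18) = {p18:.4%}")
print(f"18 is {(18 - mean) / sd:.2f} SD above the 3d6 mean")
# A 1-in-216 upper tail on a normal curve sits at roughly this z-score:
print(f"normal-curve equivalent: z = {NormalDist().inv_cdf(1 - p18):.2f}")
```

Either way it lands around 2.5 to 2.6 standard deviations above the mean, so "about three" is the right ballpark for rolled stats.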

It's a surprisingly modest claim about Marty Stu (Cheliax, with 20 million people, most of them poor given the stated infant mortality, has 20 equally bright women available for breeding duty).

Some Libertarians avoid marriage or common-law relationships, not sure if Yud has expressed an opinion on the topic though.

[–] CinnasVerses@awful.systems 6 points 1 day ago* (last edited 1 day ago) (2 children)

Yud was married to a woman in 2019 and mentions other partners and playmates. He said he met her in 2013. The post from 2013 is an infohazard.

[–] CinnasVerses@awful.systems 7 points 1 day ago* (last edited 1 day ago) (4 children)

considering his confidence in an AI apocalypse I heavily doubt it

He argues that the way to prevent the AI apocalypse is to breed superbabies from high-IQ stock, and a major plot point of Project Lawful is that the hero wants to prove himself worthy of having more children than average, and the nation he isekais himself into agrees but wants any future children to use their genes for evil.

[–] CinnasVerses@awful.systems 11 points 1 day ago (7 children)

I wonder if Yud has biological children? He is mercifully discreet about his home life, even if he shares his kinks as if he had a five-book contract with Baen Books.

[–] CinnasVerses@awful.systems 11 points 1 day ago* (last edited 1 day ago) (2 children)

I am starting to suspect that Yud never learned to compartmentalize knowledge and put it in different epistemic categories. To him, AD&D alignments, Large Language Models, and race pseudoscience are all nerdy ideas he read about on the Internet, all equally real and true. He does not seem to file a carpenter telling him how to frame a roof, a rabbi teaching theology, and a random Twitter account in different categories like most of us would.

[–] CinnasVerses@awful.systems 14 points 1 day ago (3 children)

It's Project Lawful, his Pathfinder novel about eugenics and BDSM, co-written on a forum with many of our dear friends.

[–] CinnasVerses@awful.systems 13 points 2 days ago (3 children)

A few people in LessWrong and Effective Altruism seem to want Yud to stay in the background while they get on with organizing his teachings into doctrine, dumping the awkward ones down the memory hole, and building a movement that can last when he goes to the Great Anime Convention in the Sky. In 2022 someone on the EA forum posted On Deference and Yudkowsky's AI Risk Estimates (i.e. "Yud has been bad at predictions in the past, so we should be skeptical of his predictions today").

[–] CinnasVerses@awful.systems 4 points 5 days ago* (last edited 5 days ago)

A Christopher DiCarlo (cwdicarlo on LessWrong) got AI doomerism into Maclean's magazine in Canada. He seems to have got into AI doomerism in the 1990s but hung out being an academic and kept his manifestos to himself until recently. He claims to have clashed with First Nations creationists back in 2005 when he said "we are all African." His book is called Building a God: The Ethics of Artificial Intelligence and the Race to Control It.

There must be many such cases: people who read the Extropians in the 1990s and 2000s and neither filed those ideas under fiction nor turned them into a career.

[–] CinnasVerses@awful.systems 4 points 6 days ago

And it's a fable about having to petition a mercurial and brutal authority for resources you need to live! A lesson that anyone who wants to make their living vibe-coding should ponder.

[–] CinnasVerses@awful.systems 9 points 1 week ago* (last edited 1 week ago) (1 children)

Economist John Quiggin posts a critique of William MacAskill's type of utilitarianism and its confusing logic, then has to retract part of it when a quote he cites chapter and verse in his main text turns out not to exist:

Even though I have a clear memory of locating the third quotation in the Gutenberg edition, I can’t find it now. So, I;ve edited the post to deleted it. Apologies for this. I’m assuming the quote I found was some kind of AI confabulation, and that I slipped up on the check. I will need to double check more carefully in future.

(the quote is from the comments; I have not corrected it or added sic)

He says he is writing a book against pro-natalism.

 

It's almost the end of the year, so most US nonprofits which want to remain nonprofits have filed Form 990 for 2024, including some run by our dear friends. This is a mandatory financial report.

  • Lightcone Infrastructure is here. They operate LessWrong and the Lighthaven campus in Berkeley but list no physical assets; someone on Reddit says that they let fellow travelers like Scott Alexander use their old rented office for free. "We are a registered 501(c)3 and are IMO the best bet you have for converting money into good futures for humanity." They also published a book and website of common-sense, data-based advice for Democratic Party leaders called Deciding to Win, which I am sure fills a gap in the literature. Edit: their November 2024 call for donations, which talks about how they spent $16.5m on real estate and $6m on renovations and then saw donations collapse, is here; an analysis is here
  • CFAR is here. They seem to own the campus in Berkeley but it is encumbered with a mortgage ("Land, buildings, and equipment ... less depreciation; $22,026,042 ... Secured mortgages and notes payable, $20,848,988"). I don't know what else they do, since they stopped teaching rationality workshops in 2016 or so and pivoted to worrying about building Colossus. They have nine employees with salaries from $112k to $340k, plus a president paid $23k/year.
  • MIRI is here. They pay Yud ($599,970 in 2024!) and, after failing to publish much research on how to build Friend Computer, they pivoted to arguing that Friend Computer might not be our friend. Edit: they had about $16 million in mostly financial assets (cash, investments, etc.) at the end of the year but spent $6.5m against $1.5m of revenue in 2024. They received $25 million in 2021 and ever since they have been consuming those funds rather than investing them and living off the interest (rough runway arithmetic is sketched after this list).
  • BEMC Foundation is here. This husband-and-wife organization gives about $2 million/year each to Vox Future Perfect and GiveWell from an initial $38m in capital (so they can keep giving for decades without adding more capital). Edit: the size of the donations to Future Perfect and GiveWell swings from year to year so neither can count on the money, and they gave out $6.4m in 2024, which is not sustainable (again, see the sketch after the list).
  • The Clear Fund (GiveWell) is here. They have the biggest wad of cash and the highest cashflow.
  • Edit: Open Philanthropy (now Coefficient Giving) is here (they have two sister organizations). David Gerard says they are mainly a way for Dustin Moskovitz, the co-founder of Facebook, to organize donations, like the Gates, Carnegie, and Rockefeller foundations. They used to fund Lightcone.
  • Edit: Animal Charity Evaluators is here. They have funded Vox Future Perfect (in 2020-2021) and the longtermist kind of animal welfare ("if humans eating pigs is bad, isn't whales eating krill worse?")
  • Edit: Survival and Flourishing Fund does not seem to be a charity. Whereas a Lightcone staffer says that SFF funds Lightcone, SFF say that they just connect applicants to donors and evaluate grant applications. So who exactly is providing the money? Sometimes it's Jaan Tallinn of Skype and Kazaa.
  • Centre for Effective Altruism is mostly British but has had a US wing since March 2025: https://projects.propublica.org/nonprofits/organizations/333737390
  • Edit: Giving What We Can seems like a mainstream "bednets and deworming pills" type of charity
  • Edit: GiveDirectly Inc is an excellent idea in principle (give money to poor people overseas and let them figure out how best to use it) but their auditor flagged them for Material noncompliance and Material weakness in internal controls. The mistakes don't seem sinister (they classified $39 million of donations as conditional rather than unconditional, i.e. with more restrictions than they actually had). GiveDirectly, Giving What We Can, and GiveWell are all much better funded than the core LessWrong organizations.
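A rough runway sketch using only the figures quoted above; this is back-of-the-envelope arithmetic, not anything from the filings themselves, and it ignores investment returns (the 4% draw is just a conventional endowment rule of thumb):

```python
# Back-of-the-envelope runway arithmetic from the Form 990 figures quoted above.

def runway_years(assets_m: float, spending_m: float, revenue_m: float) -> float:
    """Years until assets run out at the current net burn rate (all figures in $m)."""
    net_burn = spending_m - revenue_m
    return assets_m / net_burn if net_burn > 0 else float("inf")

# MIRI: ~$16m in assets, $6.5m spent against $1.5m of revenue in 2024
print(f"MIRI runway at the 2024 burn rate: ~{runway_years(16.0, 6.5, 1.5):.1f} years")

# BEMC: $38m initial capital, $6.4m granted in 2024
print(f"BEMC at the 2024 pace: ~{38.0 / 6.4:.1f} years of giving left")
print(f"BEMC at a conventional 4% draw: ~${38.0 * 0.04:.2f}m/year")
```

Roughly three years for MIRI and under six for BEMC at those rates, which is what "consuming the funds rather than living off the interest" cashes out to.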

Since CFAR seem to own Lighthaven, it's curious that Lightcone head Oliver Habryka threatens to sell it if Lightcone shuts down. One might almost imagine that the boundaries between all these organizations are not as clear as the org charts make them seem. SFGate says that it cost $16.5 million plus renovations:

Who are these owners? The property belongs to a limited liability company called Lightcone Rose Garden, which appears to be a stand-in for the nonprofit Center for Applied Rationality and its project, Lightcone Infrastructure. Both of these organizations list the address, 2740 Telegraph Ave., as their home on public filings. They’ve renovated the inn, named it Lighthaven, and now use it to host events, often related to the organizations’ work in cognitive science, artificial intelligence safety and “longtermism.”

Habryka was boasting about the campus in 2024 and said that Lightcone budgeted $6.25 million for renovating the campus that year. It also seems odd for a nonprofit to spend money renovating a property that belongs to another nonprofit.

On LessWrong Habryka also mentions "a property we (Lightcone) own right next to Lighthaven, which is worth around $1M" and which they could use as collateral for a loan. Lightcone's 2024 paperwork listed its only assets as cash and accounts receivable. So either they are passing around assets like the last plastic cup at a frat party, or they bought this recently while the dispute with the trustees was ongoing, or Habryka does not know what his organization actually owns.

The California end seems to be burning money, as many movements with apocalyptic messages and inexperienced managers do. Revenue was significantly less than expenses, and CFAR's assets are close to its liabilities. CFAR/Lightcone do not have the $4.9 million in liquid assets which the FTX trustees want back, and they claim that their escrow company lost another $1 million of FTX's money.

 

People connected to LessWrong and the Bay Area surveillance industry often cite David Chapman's "Geeks, Mops, and Sociopaths in Subculture Evolution" to understand why their subcultures keep getting taken over by jerks. Chapman is a Buddhist mystic who seems rationalist-curious. Some people use the term postrationalist.

Have you noticed that Chapman presents the founders of nerdy subcultures as innocent nerds being pushed around by the mean suits? But today we know that the founders of Longtermism and LessWrong all had ulterior motives: Scott Alexander and Nick Bostrom were into race pseudoscience, and Yudkowsky had his kinks (and was also into eugenics and Libertarianism). HPMOR teaches that intelligence is the measure of human worth, and the use of intelligence is to manipulate people. Mollie Gleiberman makes a strong argument that "bednet" effective altruism with short-term measurable goals was always meant as an outer doctrine to prepare people to hear the inner doctrine about how building God and expanding across the Universe would be the most effective altruism of all. And there were all the issues within LessWrong and Effective Altruism around substance use, abuse of underpaid employees, and bosses who felt entitled to hit on subordinates. A '60s rocker might have been cheated by his record label, but that does not get him off the hook for crashing a car while high on nose candy and deep inside a groupie.

I don't know whether Chapman was naive or creating a smokescreen. Had he ever met the thinkers he admired in person?

 

The Form 990s for these organizations mention many names I am not familiar with, such as Tyler Emerson. Many people in these spaces have romantic or housing partnerships with each other, and many attend meetups and cons together. A MIRI staffer claims that Peter Thiel funded them from 2005 to 2009, and we now know when Jeffrey Epstein donated. Publishing such a thing is not very nice, since these are living persons frequently accused of questionable behavior which never goes to court (and some may have left the movement), but does a concise list of dates, places, and known connections exist?

Maybe that social graph would be more of a dot. So many of these people date each other and serve on each other's boards and live in the SF Bay Area, Austin TX, the NYC area, or Oxford, England. On the enshittified site people talk about their Twitter and Tumblr connections.

 

We often mix up two bloggers named Scott. One of Jeffrey Epstein's victims says that she was abused by a white-haired psychology professor or Harvard professor named Stephen. In 2020, Vice observed that two Harvard faculty members with known ties to Epstein fit that description (a Steven and a Stephen). The older of the two taught the younger. The younger denies that he met or had sex with the victim. What kind of workplace has two people who can be reasonably suspected of an act like that?

I am being very careful about talking about this.

 

An opposition between altruism and selfishness seems important to Yud. At 23, Yud said "I was pretty much entirely altruistic in terms of raw motivations", and his Pathfinder fic has a whole theology of selfishness. His protagonists have a deep longing to be world-historical figures and be admired by the world. Dreams of controlling and manipulating people to get what you want are woven into his community like mould spores in a condemned building.

Has anyone unpicked this? Is talking about selfishness and altruism common on LessWrong, like pretending to use Bayesian statistics?

 

I used to think that psychiatry-blogging was Scott Alexander's most useful/least harmful writing, because it's his profession and an underserved topic. But he has his agenda to preach race pseudoscience and 1920s-type eugenics, and he has written in some ethical grey areas, like stating a named friend's diagnosis and desired course of treatment. He is in a community where many people tell themselves that their substance use is medicinal and want prescriptions. Someone on SneerClub thinks he mixed up psychosis and schizophrenia in a recent post.

If you are in a registered profession like psychiatry, it can be dangerous to casually comment on your colleagues. Regardless, has anyone with relevant qualifications ever commented on his psychiatry blogging and whether it is a good representation of the state of knowledge?

33
submitted 4 months ago* (last edited 4 months ago) by CinnasVerses@awful.systems to c/sneerclub@awful.systems
 

Bad people who spend too long on social media call normies NPCs, as in video-game NPCs who follow a closed behavioural loop. Wikipedia says this slur was popular with the Twitter far right in October 2018. Two years before that, Maciej Ceglowski warned:

I've even seen people in the so-called rationalist community refer to people who they don't think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

Sometime in 2016, an anonymous coward on 4chan wrote:

I have a theory that there are only a fixed quantity of souls on planet Earth that cycle continuously through reincarnation. However, since the human growth rate is so severe, the soulless extra walking flesh piles around us are NPC’s (sic), or ultimate normalfags, who autonomously follow group think and social trends in order to appear convincingly human.

Kotaku says that this post was rediscovered by the far right in 2018.

Scott Alexander's novel Unsong has an angel tell a human character that there was a shortage of divine light for creating souls so "I THOUGHT I WOULD SOLVE THE MORAL CRISIS AND THE RESOURCE ALLOCATION PROBLEM SIMULTANEOUSLY BY REMOVING THE SOULS FROM PEOPLE IN NORTHEAST AFRICA SO THEY STOPPED HAVING CONSCIOUS EXPERIENCES." He posted that chapter in August 2016 (unsongbook.com). Was he reading or posting on 4chan?

Did any posts on LessWrong use this insult before August 2016?

Edit: In HPMOR by Eliezer Yudkowsky (written in 2009 and 2010), rationalist Harry Potter calls people who don't do what he tells them NPCs. I don't think Yud's Harry says they have no souls, but he has contempt for them.
