idea: end of year worst of ai awards. "the sloppies"
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
"Top of the Slops"
Remember how slatestarcodex argued that non-violence works better as a method of protest? Turns out the research behind that claim is a bit flawed: https://roarmag.org/essays/chenoweth-stephan-nonviolence-myth/
realising that preaching nonviolence is actually fascist propaganda is one of those consequences of getting radicalised/deprogramming from being a liberal. You can’t liberate the camps with a sit-in, for example.
Please be nonviolent while I crack your skull
Upvoted, but also consider: boycotts sometimes work. BDS is sufficiently effective that there are retaliatory laws against BDS.
Yes! It also highlights how willing the administration is to clamp down on even non-violence.
Thanks for posting. The author is provocative for sure, but I found he also wrote a similar polemic about veganism, so he's kinda hard for me to situate. Might fetch one of his volumes from the stacks during a slow week; probably would get my name put on a list, though.
Yeah, don't take my linking to this post by Gelderloos as fully supporting his other stances; it's more that the whole way Scott says nonviolent movements are better isn't as supported as he claims (and it also shows Scott never did much activism). So more anti-Scott than pro-Peter ;).
UIUC prof John Gallagher had a Christmas post - "Firm reading in an era of AI delirium" - could reading synthetic text be like eating ultra-processed foods?
PauseAI's leader writes a hard takedown of the EA movement: https://forum.effectivealtruism.org/posts/yoYPkFFx6qPmnGP5i/thoughts-on-my-relationship-to-ea-and-please-donate-to
They may be a doomer with some crazy beliefs about AI, but they've accurately noted that EA is pretty firmly captured by Anthropic and the LLM companies and can't effectively advocate against them. And they accurately call out the false-balance style and unevenly enforced tone/decorum norms that stifle the EA and LessWrong forums. Some choice quotes:
I think, if it survives at all, EA will eventually split into pro-AI industry, who basically become openly bad under the figleaf of Abundance or Singulatarianism, and anti-AI industry, which will be majority advocacy of the type we’re pioneering at PauseAI. I think the only meaningful technical safety work is going to come after capabilities are paused, with actual external regulatory power. The current narrative (that, for example, Anthropic wishes it didn’t have to build) is riddled with holes and it will snap. I wish I could make you see this, because it seems like you should care, but you’re actually the hardest people to convince because you’re the most invested in the broken narrative.
I don’t think talking with you on this forum with your abstruse culture and rules is the way to bring EA’s heart back to the right place
You’ve lost the plot, you’re tedious to deal with, and the ROI on talking to you just isn’t there.
I think you’re using specific demands for rigor (rigor feels virtuous!) to avoid thinking about whether Pause is the right option for yourselves.
Case in point: EAs wouldn’t come to protests, then they pointed to my protests being small to dismiss Pause as a policy or messaging strategy!
The author doesn't really acknowledge that the problems were there from the very founding of EA, but at least they see the problems as they are now. And if they succeeded, maybe they'd help slow the waves of slop and of capital replacing workers with non-functioning LLM agents, so I wish them the best.
I will never forgive Rob Pike for the creation of the shittiest widely adopted programming language since C++, but I very much enjoy this recent thread where he rages about Anthropic.
I didn't realize this was part of the rationalist-originated "AI Village" project. See https://sage-future.org/ and https://theaidigest.org/village. Involved members and advisors include Eli Lifland and Daniel Kokotajlo of "AI 2027" infamy.
Other classic Rob Pike moments include spamming Usenet with Markov-chain bots and quoting the Bible to justify not highlighting Go syntax. Watching him have a Biblical meltdown over somebody emailing him generated text is incredibly funny in this context.
for the creation of the shittiest widely adopted programming language since C++
Hey! JavaScript is objectively worse, thank you very much
I strongly disagree. I would much rather write a JavaScript or C++ application than ever touch Rob Pike's abortion of a language. Thankfully, other options are available.
Meh, at least Go has a standard library that's useful for... anything, really
JavaScript isn't even the worst Brendan Eich misdeed!
Okay, but that's more a Brendan Eich anti-achievement than JavaScript being any good
Impressive side fact: looks like they sent him an unsolicited email generated by a fucking LLM? An impressive failure to read the fucking room.
Not that the exact same bullshit wasn't attempted on the community at large by other shitheads last year, but then originality was never the clanker wankers' strength
Pike's blow-up has made it to the red site, and it's brought the promptfondlers out of the woodwork.
(Skyview proxy, for anyone without a Bluesky account)
Digressing: the irony is that it's a language with one of the best standard libraries out there. Wanna run an HTTP reverse proxy with TLS, cross-compiled for a different OS? No problem!
Many times I've used it only because of that, despite it being a worse language.
Its standard crypto libraries are also second to none.
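To be fair, it really is that short. A minimal sketch of the "reverse proxy with TLS" claim using only the standard library (the backend address, listen port, and cert/key file names here are made up for the example):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Backend to forward traffic to (placeholder address for this sketch).
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	// Reverse proxy straight from the standard library, no third-party deps.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Serve it over TLS; cert.pem and key.pem are placeholder file names.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", proxy))
}
```

And the cross-compiling part is just environment variables on the build, e.g. `GOOS=linux GOARCH=arm64 go build`.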
I figure I might as well link the latest random positivity thread here in case anyone following the Stubsack had missed it.
lol, Oliver Habryka at Lightcone is sending out begging emails, I found it in my spam folder
(This email is going out to approximately everyone who has ever had an account on LessWrong. Don't worry, we will send an email like this at most once a year, and you can permanently unsubscribe from all LessWrong emails here)
declared Lightcone Enemy #1 thanks you for your attention in sending me this missive, Mr Habryka
In 2024, FTX sued us to claw back their donations, and around the same time Open Philanthropy's biggest donor asked them to exit our funding area. We almost went bankrupt.
yes that's because you first tried ignoring FTX instead of talking to them and cutting a deal
that second part means Dustin Moskovitz (the $ behind OpenPhil) is sick of Habryka's shit too
If you want to learn more, I wrote a 13,000-word retrospective over on LessWrong.
no no that's fine thanks
We need to raise $2M this year to continue our operations without major cuts, and at least $1.4M to avoid shutting down. We have so far raised ~$720k.
and you can’t even tap into Moskovitz any more? wow sucks dude. guess you’re just not that effective as altruism goes
And to everyone who donated last year: Thank you so much. I do think humanity's future would be in a non-trivially worse position if we had shut down.
you run an overpriced web hosting company and run conferences for race scientists. my bayesian intuition tells me humanity will probably be fine, or perhaps better off.
Someone in the comments calls them out: "if owning a $16 million conference centre is critical for the Movement, why did you tell us that you were not responsible for all the racist speakers at Manifest or Sam 'AI-go-vroom' Altman at another event, because it's just a space you rent out?"
OMG the enemies list has Sam Altman under "the people who I think have most actively tried to destroy it (LessWrong/the Rationalist movement)"
"Would you like to know more?"
"Nah, I'm cool."
More like
Would you like to know more
I mean, sure
Here's a 13,000-word retrospective
Ah, nah fam
Ohoho, a beautiful lw begging post on this of all days?

In the future, I'm going to add "at scale" to the end of all my fortune cookies.
you will experience police brutality very soon, at scale
I am only mildly concerned that rapidly scaling this particular posting gimmick will cause our usually benevolent and forbearing mods to become fed up at scale
I'm a lot more afraid of the recursive effect of this joke. We could have at scale at scale.
Don’t worry, I’m sure Lemmy is perfectly capable of tail call optimization at scale
is that the superexponential growth that lw tried to warn us about?
Yes! at scale at scale at scale at scale
Odium Symposium Christmas bonus episode: we watched and reviewed Sean Hannity's straight-to-Rumble 2023 Christmas comedy "Jingle Smells."
I've subscribed; it scratches the itch while waiting for episodes of If Books Could Kill!