This is truly nothing: Aella (credited as Aella Martin) has a Bacon number of 3
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
one horse laugh is of greater utilitarian impact across the quantum waveform of the universe than ten thousand syllogisms
Enemy of awful systems Malcolm Gladwell is a full-throated transphobe
Why is it always transphobia with these kinds of people... is it because they feel racism is too risky? So they need a different outlet for hating on people based on what they believe is "science"?
the answer to all those is yes. To synthesise the bigger idea, as the saying goes: scratch a liberal, a fascist bleeds. Gladwell caters to a huge audience that is centrist, leaning somewhat liberal (though he flirts constantly with race science and straight up racism, see his writings on Korean Air). As liberals move towards fascism, so must he. Of course, it’s the most marginalised that are going to be targeted first.
As a CS student, I wonder why we and artists are always the ones who get attacked the most whenever some new "insert tech stuff" comes out. And everyone's like: HOLY SHIT, PROGRAMMERS AND ARTISTS ARE DEAD, without realizing that most of these things are way too crappy to actually be... good enough to replace us?
Lesswronger notices that all of the rationalists' attempts at making an "aligned" AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate
Notably, the author doesn't realize capitalism is the root problem misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.
>50 min read
>”why company has perverse incentives”
>no mention of capitalism
rationalism.mpeg
Every time I see a rationalist bring up the term "Moloch" I get a little angrier at Scott Alexander.
“Moloch”, huh? What are we living in, some kind of demon-haunted world?
> Others were alarmed and advocated internally against scaling large language models. But these were not AGI safety researchers, but critical AI researchers, like Dr. Timnit Gebru.
Here we see rationalists approaching dangerously close to self-awareness and recognizing their whole concept of "AI safety" as marketing copy.
So I learned about the rise of pro-Clippy sentiment in the wake of ChatGPT and that led me on a little ramble about the ELIZA effect vs. the exercise of empathy https://awful.systems/post/5495333
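For anyone who hasn't seen how little machinery the original effect needed: here's a toy keyword-reflection responder in the ELIZA style (my own minimal sketch in Python, not Weizenbaum's actual script or rules). Swap the pronouns, echo the user's words back inside a canned template, and people will still read empathy into it.

```python
import re

# Pronoun swaps so the echo sounds like a reply rather than a parrot.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Ordered (pattern -> template) rules; the last rule is a catch-all.
# These two rules are invented for illustration, not from any real script.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)"), "Please tell me more about that."),
]

def reflect(fragment):
    # Flip first/second person so the echoed fragment reads as a response.
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel like my chatbot understands me"))
# -> Why do you feel like your chatbot understands you?
```

That's the whole trick: no model of the conversation, no memory, no understanding, and it still pulled empathetic responses out of people in 1966.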
Great piece on previous hype waves by P. Ball
https://aeon.co/essays/no-suffering-no-death-no-limits-the-nanobots-pipe-dream
It’s sad, my “thoroughly researched” “paper” greygoo-2027 just doesn’t seem to have that viral x-factor that lands me exclusive interviews w/ the Times 🫠
Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, decades after Drexler's nanotech was thoroughly debunked (even as it slightly inspired some real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.
Creator of NaCl publishes something even saltier.
"Am I being detained?" I scream as IETF politely asks me to stop throwing a tantrum over the concept of having moderation policy.
Shamelessly posting a link to my skeet thread (skeet trail?) on my experience with a (mandatory) AI chatbot workshop. Nothing that will surprise regulars here too much, but if you want to share the pain...
https://bsky.app/profile/jfranek.bsky.social/post/3lxtdvr4xyc2q
DragonCon drops the ban hammer on a slop slinger. There was much rejoicing.
Btw, the vibes were absolutely marvelous this year.
Edit: a shrine was built to shame the perpetrator
https://old.reddit.com/r/dragoncon/comments/1n60s10/to_shame_that_ai_stand_in_artist_alley_people/