I was wondering about the origins of sneerclub and discovered something kinda fun: “r/SneerClub” pre-dates “r/BlogSnark”, the first example of a “snark subreddit” listed on the wiki page! The vibe of snark subreddits seem to be very different to that of sneerclub etc. (read: toxic!!!) but I wouldn’t know the specifics as I’m not a snark participant.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
New trick for detecting bots, ask them for the seahorse emoji. found via
The article claims that Google didn't "fall for the same trap" but that's not correct, all this garbage is indeterministic so the author just got "lucky".
It's like saying "four out of five coin-flips claimed that an eagle was the first US president" -- just because the fifth landed on heads and showed George Washington doesn't mean it's any different than the rest.
But here I'm preaching to the choir.
Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.
It talked about how it's almost impossible to detect whether a model was deliberately trained to output some "bad" output (like vulnerable code) for some specific set of inputs.
Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such LLM as a "sleeper agent". But maybe some of y'all will find it interesting.
This isn't the first time I've heard about this - Baldur Bjarnason's talked about how text extruders can be poisoned to alter their outputs before, noting its potential for manipulating search results and/or serving propaganda.
Funnily enough, calling a poisoned LLM as a "sleeper agent" wouldn't be entirely inaccurate - spicy autocomplete, by definition, cannot be aware that their word-prediction attempts are being manipulated to produce specific output. Its still treating these spicy autocompletes with more sentience than they actually have, though
Now that his new book is out, Big Yud is on the interview circuit. I hope everyone is prepared for a lot of annoying articles in the next few weeks.
Today he was on the Hard Fork podcast with Kevin Roose and Casey Newton (didn't listen to it yet). There's also a milquetoast profile in the NYT written by Kevin Roose, where Roose admits his P(doom) is between 5 and 10 percent.
Siskind did a review too, basically gives it the 'their hearts in the right place but... [read AI2027 instead]' treatment. Then they go at it a bit with Yud in the comments where Yud comes off as a bitter dick, but their actual disagreements are just filioque shit. Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.
https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154920454
https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154927504
Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it rather than being scared shitless of MAD, so AI non-proliferation by presumably appointing a rationalist Grand Inquisitor in charge of all human scientific progress is an obvious solution.
Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.
This century deserves a better class of thought-criminal
assuming that nuclear nonproliferation is gonna hold up indefinitely for any reason is some real fukuyama's end of history shit
let alone "because it's Rational™ thing to do", it's only in rational interest of already-nuclear states to keep things this way. couple of states that could make a good point for having nuclear arsenal and having capability to manufacture it are effectively dissuaded from this by american diplomacy (mostly nuclear umbrella for allies and sanctions or fucking with their facilities for enemies). with demented pedo in chief and his idiot underlings trying their hardest to undo this all, i really wouldn't be surprised if, say, south korea decides to get nuclear
Yud: "That's not going to asymptote to a great final answer if you just run them for longer."
Asymptote is a noun, you git. I know in the grand scheme of things this is a trivial thing to be annoyed by, but what is it it with Yud's weird tendency to verbify nouns? Most rationalists seem to emulate him on this. It's like a cult signifier.
They think Yud is a world-historical intellect (I’ve seen claims on twitter he has a iq of 190 - yeah really) and by emulation a little of the old smartness can rub off on them.
It's also inherently-begging-the-question-silly, like it assumes that the Ideal of Alignment™, can never be reached but only approached. (I verb nouns quite often so I have to be more picky at what I get annoyed at)
Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it
This is his claim about everything, including how we got gay rights. Real if all you have is a hammer stuff.
Trying to figure out if that Siskind take comes from a) a lack of criticality and/or the ability to read subtext or b) some ideological agenda to erase the role of violence (threats of violence are also violence!) in change happening, or both
When you are running a con like crypto or chatbot companies, it helps to know someone who is utterly naive and can't stop talking about whatever line you feed him. If this were the middle ages Kevin Roose would have an excellent collection of pigges bones and scraps of linen that the nice friar promised were relics of St Margaret of Antioch.
Education report calling for ethical AI use contains over 15 fake sources
womp, and wait for it, womp
The report claims its about ethical AI use, but all I see is evidence that AI is inherently unethical, and an argument for banning AI from education forever.
New premium column from Ed Zitron, digging into OpenAI and Oracle's deal.
The Oracle deal seemed absurd, but I didn't realize how absurd until I saw Ed's compilation of the numbers. Notably, it means even if OpenAI meets its projected revenue numbers (which are absurdly optimistic, like bigger than Netflix and Spotify and several other services combined) paying Oracle (along with everyone else it has promised to buy compute from) will put it net negative on revenue until 2030, meaning it has to raise even more money.
I've been assuming Sam Altman has absolutely no real belief that LLMs would lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI's choices don't make any long term sense if AGI isn't coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy few years of personal enrichment. And to even ask what his "real beliefs" are gives him too much credit.
Just to remind everyone: the market can stay irrational longer than you can stay solvent!
OpenAI’s choices don’t make any long term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy few years of personal enrichment.
Another possibility is that Altman's bought into his own hype, and genuinely believes OpenAI will achieve AGI before the money runs out. Considering the tech press has been uncritically hyping up AI in general, and Sammy Boy himself has publicly fawned over "metafiction" "written" by an in-house text extruder, its a possibility I'm not gonna discount.
nah. Sam's a monorail salesman who knows how to say the AI doomer buzzwords.
Whichever one of you did https://alignmentalignment.ai/caaac/jobs, well done, and many lols.
CAAAC is an open, dynamic, inclusive environment, where all perspectives are welcomed as long as you believe AGI will annihilate all humans in the next six months.
Alright, I can pretend to believe that, go on…
We offer competitive salaries and generous benefits, including no performance management because we have no way to assess whether the work you do is at all useful.
Incredible. I hope I get the job!
Make sure to click the "Apply Now" button at the bottom for a special treat.
The Wall Street Journal came out with a story on "conspiracy physics", noting Eric Weinstein and Sabine Hossenfelder as examples. Sadly, one of their quoted voices of sanity is Scott Aaronson, baking-soda volcano of genocide apologism.
Somehow, ~~Palpatine returned~~ Scott came off as a voice of reason