lurker

joined 3 months ago
[–] lurker@awful.systems 5 points 8 hours ago

The idea that “the exponential curve goes up forever” has always struck me as silly, and as rooted in capitalism (“no bro you don’t get it, we’re gonna get infinite money forever”). Resources are finite, and people are already very fed up with the ludicrous amounts of water and electricity data centres take up. Making bigger models that need to run for longer is also probably going to take an exponential amount of resources (and also make people hate you more).

[–] lurker@awful.systems 8 points 1 day ago (1 children)

Jeez, I hope you're okay. Do you have any thoughts on the uptick in rationalism and its influence (thanks to AI)?

[–] lurker@awful.systems 7 points 3 days ago

Nick Bostrom jumpscare with a funny sneer

These already head-scratching lines hit different when you remember that Bostrom believes it’s likely that we’re already living inside a computer simulation — in his head canon, do all those levels of simulated ancestors develop their own superintelligence, and what does that have to do with the new simulations they feel compelled to build? If AI wipes out humankind, does it build its own simulation? If so, is it simulating its human ancestors, or its creation by humankind? Heck, if our entire world is simulated, are we AI? We’ll leave it up to readers to take another bong hit while they try to make sense of it all.

[–] lurker@awful.systems 16 points 4 days ago (1 children)

Graduation Speaker Shocked When She’s Loudly Booed by Students for Saying AI Is the Future

I don't know, man, maybe shoving AI into every conceivable crack and crevice and insisting people shut up and deal with it has made people upset. Could be wrong tho

[–] lurker@awful.systems 2 points 4 days ago

Yud says so much, and it’s often so confusing, that I think a lot of his followers don’t know his main messages.

This is a very late response, but what I’ve noticed is that when people in rationalist spaces respond to Yud, they often say “my interpretation of this is...” and things along similar lines, which has always struck me as weird

[–] lurker@awful.systems 6 points 5 days ago* (last edited 5 days ago)

The METR graph has gone up again. To my fascination, the gap between 50% and 80% has somehow gotten even longer (a 15-hour difference), and the CI is also still big (47 hours)

[–] lurker@awful.systems 7 points 6 days ago (1 children)

this seems like a great time to bring back AI disagreements by Brian Merchant, where a rationalist AI convention spends more time arguing about AI takeover scenarios than it does discussing plans to actually stop AI and implement anti-AI policies

[–] lurker@awful.systems 10 points 1 week ago (3 children)

that image of Yud made me laugh out loud

[–] lurker@awful.systems 4 points 1 week ago (1 children)

I saw the emails where Musk and Altman treated Hassabis like some great evil, but I didn’t know a Scott blogpost was involved

[–] lurker@awful.systems 8 points 1 week ago* (last edited 1 week ago) (3 children)

Under Threat of Perjury, OpenAI’s Former CTO Is Admitting Some Very Interesting Stuff About Sam Altman

The interesting stuff in question is that Sam is a massive liar, which we all already know, but hey, more proof can’t hurt

[–] lurker@awful.systems 4 points 1 week ago

Oh shit did LessWrongers actually cut his fibre? Hope he's all good now and they get a fix out in the next thousand years


Originally posted in the Stubsack, but decided to make it its own post because why not


this was already posted on reddit sneerclub, but I decided to crosspost it here so you guys wouldn’t miss out on Yudkowsky calling himself a genre savvy character, and him taking what appears to be a shot at the Zizians


originally posted in the thread for sneers not worth a whole post, but then I changed my mind and decided it is worth a whole post, ’cause it is pretty damn important

Posted on r/HPMOR roughly one day ago

full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.


I searched for “eugenics” on Yud’s xcancel (I will never use twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. Yud, genuinely, what are you talking about?
