lurker

joined 3 weeks ago
[–] lurker@awful.systems 5 points 8 hours ago* (last edited 8 hours ago)

the AI safety crowd cuts Anthropic way too much slack. Oh, they’re not running CSAM-generating MechaHitler? Oh, they’re not collaborating with the US government to recreate 1984? I’m so proud of them for doing the bare minimum. They still took investment money from the UAE and Qatar (something Dario Amodei himself admitted was going to hurt a lot of people, but he took the money anyway because “they couldn’t miss out on all those valuations”), and they still downloaded hundreds of thousands of pirated books to train their chatbot. They’re still doing shady shit; don’t let them off the hook just because they’re slightly less evil than the competition

[–] lurker@awful.systems 2 points 2 days ago

Damn, she went in on Yud (not that he doesn’t deserve it)

I also found this article about the doc in the replies to that post: https://buttondown.com/maiht3k/archive/a-tale-of-two-ai-documentaries/ It’s a very interesting read

[–] lurker@awful.systems 2 points 3 days ago (1 children)

Update: the screenshot is unfortunately not LLM-generated; found the full version on Reddit SneerClub https://web4.ai/

[–] lurker@awful.systems 3 points 3 days ago* (last edited 3 days ago)

New “AI is not a bubble” video just dropped: https://youtu.be/wDBy2bUICQY Lots of skeptical comments are pointing out the flaws in its argument, while the creator tries to defend themselves with mostly mediocre lines

[–] lurker@awful.systems 3 points 3 days ago* (last edited 3 days ago)

I took a deeper look into the documentary, and it does go into both the pessimist and optimist perspectives, so their inclusion makes more sense. and yeah, I was trying to get at how they're skeptical of the TESCREAL stuff and of current LLM capabilities

[–] lurker@awful.systems 2 points 3 days ago* (last edited 3 days ago) (1 children)

I poked around the IMDb page, and there are reviews! currently it's sitting at an 8.5/10 with 31 ratings (though no written reviews, it seems). The Metacritic score is 51/100 with 4 reviews, and there are 4 external reviews

[–] lurker@awful.systems 14 points 3 days ago

HOLY SHIT LMFAOOOOO

[–] lurker@awful.systems 3 points 3 days ago* (last edited 3 days ago) (2 children)

Sam Altman and the other CEOs being there is such a joke: “this technology is so dangerous guys! of course I’m gonna keep blocking regulation for it, I need to make money after all!” Also, I’m shocked Emily Bender and Timnit Gebru are there; aren’t they AI skeptics?

[–] lurker@awful.systems 2 points 3 days ago

Surprised it’s a term they stole and not one they made up. But yeah, the whole idea of “AGI will solve all our problems” is just silly

[–] lurker@awful.systems 2 points 3 days ago

huh, you’re right. usually this channel provides a source for the things they share, but this time there’s nothing.

[–] lurker@awful.systems 6 points 4 days ago (1 children)

the full paper is here: https://x.com/alexwg/status/2022292731649777723 and right away there are two references to Nick Bostrom and Scott Alexander

 

this was already posted on reddit sneerclub, but I decided to crosspost it here so you guys wouldn’t miss out on Yudkowsky calling himself a genre-savvy character, and on him taking what appears to be a shot at the Zizians

 

originally posted in the thread for sneers not worth a whole post, then I changed my mind and decided it is worth a whole post, ’cause it is pretty damn important

Posted on r/HPMOR roughly one day ago

full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.

 

I searched for “eugenics” on yud’s xcancel (i will never use twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. yud, genuinely, what are you talking about?
