scruiser

joined 2 years ago
[–] scruiser@awful.systems 7 points 1 week ago* (last edited 1 week ago) (12 children)

Eliezer joins the trend of condemning "political" violence with confidence on the far end of the Dunning-Kruger curve: https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction

I've already mocked this attitude down thread and in the previous weekly thread, so I'll try to keep my mockery to a few highlights...

He's admitting "nuke the data centers" is in fact violence!

It would be beneath my dignity as a childhood reader of Heinlein and Orwell to pretend that this is not an invocation of force.

But then drawing a special case around it.

But it's the sort of force that's meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.

I don't think Eliezer has checked the news if he thinks the US government carries out violence in predictable or fair or avoidable ways! Venezuela! (It wasn't fair before Trump, or avoidable if you didn't want to bend over for the interests of US capital, but it is blatantly obvious under Trump.) The entire lead-up to Iran consisted of ripping up Obama's attempts at treaties and trying to obtain regime change through surprise assassination! Also, if the Stop AI doomers used some clever cryptography scheme to make their policy of property destruction (and assassination) sufficiently predictable and avoidable, would that count as "Lawful" in Eliezer's book? ~~If he kept up with the DnD/Pathfinder source material, he would know Achaekek's assassins are actually Lawful Evil~~

The ASI problem is not like this. If you shut down 5% of AI research today, humanity does not experience 5% fewer casualties. We end up 100% dead after slightly more time.

His practical argument against non-state-sanctioned violence is that we need a total ban (and thus the authority of state driving it), because otherwise someone with 8 GPUs in a basement could invent strong AGI and doom us all. This is a dumb argument, because even most AI doomers acknowledge you need a lot of computational power to make the AGI God. And they think slowing down AGI (whether through violence or other means) might buy time for another sort of solution that is more permanent (like the idea of "solve alignment" Eliezer originally promised them). Lots of lesswrong posts regularly speculate on how to slow down the AI race and how to make use of the time they have, this isn't even outside the normal window of lesswrong discourse!

Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals

Sources cited: 0

One of the comments also pisses me off:

Which reminds me about another point: I suspect that "bomb data centers" meme causal story was not somebody lying, but somebody recalling by memory without a thought that such serious allegation maybe is worthy to actually look up it and not rely on unreliable memory.

"Drone strike the data centers even if starts nuclear war" is the exact argument Eliezer made and that we mocked. It is the rationalists that have tried to soften it by eliding over the exact details.

[–] scruiser@awful.systems 5 points 1 week ago* (last edited 1 week ago) (3 children)

how do we ensure that no-one builds it?

Eliezer made a lesswrong post yesterday where he explains that since anyone could build it, lone acts of violence are obviously ineffective, and the only solution is the right and proper ("Lawful" as he calls it, because he has been stuck on DnD since writing Planecrash) state violence which can enforce a worldwide ban (which you may recall Eliezer has put at the absurdly low threshold of eight 2024-era GPUs).

[–] scruiser@awful.systems 1 points 1 week ago* (last edited 1 week ago)

or is there more going on?

One idea I've read about (developed heavily by Ed Zitron, though a few other news sources and commentators have put it forward too) is that SaaS (Software as a Service) businesses were heavily overinvested in over the past decade, in expectation of basically infinite growth. SaaS growth was "exponential" in its early days, but then the market's needs were basically saturated, so SaaS companies squeezed out more growth by cutting costs or upping how much they charged, and now it is finally catching up to them.

The AI hype means almost everyone tries to interpret everything through the lens of AI causing it. The recent price correction in many SaaS companies was (mis)interpreted as the threat of vibe-coded replacements forcing them to cut costs. SaaS companies cutting costs and going through layoffs are being misinterpreted as AI successfully replacing junior devs.

[–] scruiser@awful.systems 6 points 1 week ago (6 children)

The Zvi post really pisses me off for continuing to normalize Eliezer's comments (in a way that misrepresents the problems with them).

This happened quite a bit around Eliezer’s op-ed in Time in particular, usually in highly bad faith, and this continues even now, equating calls for government to enforce rules to threats of violence, and there are a number of other past cases with similar sets of facts.

Eliezer called for the government to drone strike data centers, even of foreign governments not signatories to international agreements, and even if doing so risked starting nuclear war.

Pacifism is at least a consistent position, but instead rationalists like Zvi want to simultaneously disown the radical actions while legitimizing the US's shit show of a foreign policy.

Another thing that pisses me off is the ahistorical claim by rationalists that such actions are ineffective and unlikely to succeed. Asymmetric warfare and terrorist tactics have succeeded many times in history! The KKK successfully used terrorism to repress a population for a century. The Black Panthers got gun control passed in California and put pressure on political leaders to accept the more peaceful branch of the civil rights movement. The IRA got the Good Friday Agreement. The American Revolution! All the empires that have withdrawn from Afghanistan!

Overall though... I guess this is a case of two wrongs making a sorta right. They are dangerously wrong about AI doom, but at least they are also wrong about direct action and so usually won't take the actions implied by their beliefs. (But they are still, completely predictably, inspiring stochastic terrorists).

[–] scruiser@awful.systems 4 points 1 week ago (1 children)

You've reminded me of the whole edifice of QAnon lore, where they would try to combine 4chan (and later even sketchier sites like 8chan) hints with whatever Trump was posting at the moment to decode secret knowledge about stuff like when the military tribunals executing all the Democrats would happen.

Anyway, in Eliezer's case, I kind of get the feeling the lesswrong rationalists have somewhat moved on from him? They are still excessively deferential to him, but the vibe I get from hate-browsing lesswrong is that the majority of the rationalists there put much lower odds on AI doom? (It's hard to tell exactly, because Eliezer has avoided committing to timelines or hard probabilities on AI doom despite all his talk about putting probabilities on everything in the Sequences.) Lesswrong occasionally references his tweets, but not that often. Like I think sneerclub actually references them more often?

[–] scruiser@awful.systems 2 points 2 weeks ago

Someone just made a top post condemning the Molotov but defending and normalizing Eliezer: https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence

[–] scruiser@awful.systems 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

A rationalist made a top post where they (poorly) argue against political "violence" (scare quotes because they lump in property damage): https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence

Highlights include a shallow half-assed defense of dear leader Eliezer's calls for violence:

True, Eliezer Yudkowsky’s TIME article called on the state to use violence to enforce AI policies required to prevent AI from destroying humanity. But it’s hard to think of a more legitimate use of violence than the government preventing the deaths of everyone alive.

Eliezer called for drone strikes against data centers even if it would start a nuclear war, and even against countries that aren't signatories to whatever hypothetical international agreement against AI there is. That is extremely irregular by the standards of international law and diplomacy, and this lesswronger just elides over those little details.

Violence is not a realistic way to stop AI.

(Except for drone strikes and starting a nuclear war.)

They treat a Molotov thrown at Sam Altman's house as if it were thrown directly at Sam himself:

as critics blamed the AI Safety community for the attacker who threw a Molotov cocktail at Sam Altman

This is a pretty blatant misrepresentation of the action which makes it sound much more violent.

They continue on with minimizing right-wing violence:

Even if there are occasional acts of political violence like the murders of Democratic Minnesota legislators or Conservative pundit Charlie Kirk, we don’t generally view them as indicting entire movements, but as the acts of deranged individuals.

Actually, outside of right-wing bubbles (and right-wing sources masquerading as centrist), lots of people do blame Trump and the leaders of the entire right-wing movement for a lot of recent political violence. Of course, this is lesswrong, which has a pretty cooked Overton window, so it figures the lesswronger would be wrong about this.

Following that, the lesswronger acknowledges it is kind of questionable and a conflation of terms to label property damage as violence, but then presses right on ahead with some pretty weak arguments that don't acknowledge why some people want to make the distinction.

So in conclusion:

  • drone strikes that start nuclear wars: legitimate violence that is totally logical and reasonable
  • throwing a single incendiary at someone's home that doesn't hurt anybody or even light the home on fire: illegitimate violence that must be absolutely condemned without exception
  • (bonus) recent right-wing violence: lone deranged individuals and not the fault of Trump or anyone like that. Everyone is saying it.

[–] scruiser@awful.systems 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Lesswrong is too centrist-brained to ever even hint at legitimizing (non-state-sanctioned) destruction of property as a means of protest or political action. But according to the orthodox lesswrong lore, Sam Altman's actions are literally an existential threat to all humanity, so they can't defend him either. So they are left with silence.

I actually kind of agree with the anarcho-libertarian's response? It is massively downvoted.

This is just elevating your aesthetic preference for what the violence you're advocating for looks like to a moral principle. The claim that throwing a Molotov cocktail at one guy's house is counterproductive to the goal of "bombing the datacenters" is a better argument, though one I do not believe.

Bingo. Dear leader Yudkowsky can ask to bomb the data centers, and as long as this action goes through the US political process, that violence is legitimate, regardless of how ill-behaved the US is or how far its political processes have degraded from actually functioning as a democracy.

[–] scruiser@awful.systems 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

garner sympathy

He posted a response criticizing the recent news article about him (of all the times he has acted like a sociopathic liar), and HN was eating it up, so even if this isn't an intentional false flag, he is still playing it that way pretty effectively.

[–] scruiser@awful.systems 2 points 2 weeks ago

The collapse of the current American management of global supply chains isn't exactly an optimistic expectation, but I guess it beats social media continuing as it is indefinitely, and maybe a better global order will develop in the aftermath.

[–] scruiser@awful.systems 4 points 2 weeks ago

No77e correctly notes the discrepancy between the rationalist obsession with eugenics and the belief in an imminent (or even within-the-next-40-years) technological singularity, but fails to realize that the general problem is rationalists' eugenics obsession itself. It is kind of frustrating how close and yet how far they are from realizing the problem.

Also, reminder of the time Eliezer claimed Genesmith's insane genetic engineering plan was one of the most important projects in the world (after AI obviously): https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=fxnhSv3n4aRjPQDwQ Apparently Eliezer's plan if we aren't all doomed by LLMs is to let the genetically engineered geniuses invent friendly AI instead.

[–] scruiser@awful.systems 2 points 2 weeks ago* (last edited 2 weeks ago)

It's a good blog series.

But just to point it out... note the author still buys the AI hype too much. This post criticizes Microsoft for missing out because OpenAI made that $300 billion deal with Oracle (with the assumption that Microsoft could have had a similar amount of revenue from OpenAI instead). Except neither OpenAI nor Oracle has the money or means to carry out that deal. Oracle is struggling to raise the capital to fulfill its end, and an analysis of the time needed to bring data centers online suggests it can't meet its targets even with the money. OpenAI, meanwhile, doesn't have the money to pay for its end; the revenue just isn't coming in unless it somehow becomes more ubiquitous and lucrative than, for example, the entire market for all streaming services put together (thanks to Ed Zitron for that fun comparison).
