scruiser

joined 2 years ago
[–] scruiser@awful.systems 8 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

Stephen Hawking was starting to promote AI doomerism in 2014, but he's not a Nobel prize winner. Yoshua Bengio is a doomer, but no Nobel prize either, although he is pretty decorated in awards. So yeah, it looks like one winner and a few other notable doomers who aren't actually Nobel Prize winners somehow became winners, plural, in Scott's argument from authority. Also, considering the long list of examples of Nobel disease, I really don't think Nobel Prize winner endorsement is a good way to gauge experts' attitudes or sentiment.

[–] scruiser@awful.systems 9 points 2 weeks ago (2 children)

He claims he was explaining what others believe, not what he believes, but if that is so, why is he so aggressively defending the stance?

Literally the only difference between Scott's beliefs and AI 2027 as a whole is that his ~~prophecy~~ estimate lands a year or two later. (I bet he'll play up that difference as AI 2027 fails to happen in 2027, and then also doesn't happen in 2028.)

Elsewhere in the thread he whines to the mods that the original poster is spamming every vaguely lesswrong- or EA-related subreddit with engagement bait. That poster is katxwoods... as in Kat Woods... as in a member of Nonlinear, the EA "organization" whose idea of philanthropic research was nonstop exotic vacations around the world. And, iirc, they are most infamous among us sneerers for "hiring" an underpaid (really underpaid, as in couldn't afford basic necessities) intern they also used as a 24/7 live-in errand girl, drug runner, and sexual servant.

[–] scruiser@awful.systems 5 points 2 weeks ago

Yeah, allowing the framing that blog post uses is already conceding a lot to EA and overlooking the bigger problems they have.

[–] scruiser@awful.systems 6 points 2 weeks ago (5 children)

Yeah, I think long term, Trump wrecking US soft power might be good for the world. There is going to be a lot of immediate suffering, though, because a lot of those programs were also doing good things (in addition to strengthening US soft power or pushing a neocolonial agenda or whatever else).

[–] scruiser@awful.systems 12 points 2 weeks ago (11 children)

I was just about to point out several angles this post neglects, but from the edit it looks like the post is intended to address a narrower question. Among the angles outside that question: philanthropy by the ultra-wealthy often serves as a tool for reputation laundering and influence building. I guess the same criticism can be made of a lot of conventional philanthropy, but I don't think that should absolve EA.

This post somewhat frames the question as a comparison between EA and conventional philanthropy and foreign aid efforts... which, okay, but that is a low bar, especially when you look at some of the stuff the US has done with its foreign aid.

[–] scruiser@awful.systems 8 points 2 weeks ago

The prompt's random use of markup notation makes obtuse black-magic programming seem sane, deterministic, and reproducible by comparison. Like, how did they even empirically decide on some of those notation choices?

[–] scruiser@awful.systems 7 points 2 weeks ago

You can make that point empirically just by looking at the scaling that's been happening with ChatGPT. The Wikipedia page for generative pre-trained transformer has a nice table. Key takeaway: each model (i.e. from GPT-1 to GPT-2 to GPT-3) goes up 10x in tokens and model parameters and 100x in compute compared to the previous one, while (not shown in that table, unfortunately) training loss (the log of perplexity) only improves linearly.
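To see why exponential inputs for linear gains is such a bad trade, here's a minimal sketch of a loss curve that improves linearly per order of magnitude of compute. The intercept and slope below are made-up illustration values, not numbers fitted to the actual GPT runs:

```python
import math

# Toy scaling curve: loss that is linear in log10(compute).
# INTERCEPT and SLOPE are hypothetical, chosen for readability,
# not fitted to any published GPT training numbers.
INTERCEPT, SLOPE = 4.0, 0.5

def toy_loss(compute: float) -> float:
    """Loss (log perplexity) under the assumed log-linear scaling."""
    return INTERCEPT - SLOPE * math.log10(compute)

# Each generation spends ~100x the compute of the one before it...
for gen, compute in [("gen 1", 1e0), ("gen 2", 1e2), ("gen 3", 1e4)]:
    print(f"{gen}: {compute:.0e} compute units -> loss {toy_loss(compute):.2f}")
# ...but loss only drops by a constant 1.0 per generation: exponentially
# growing cost for a fixed-size improvement.
```

In other words, under a curve like this every further fixed step of improvement costs 100x what the last one did, which is the whole problem with "just scale it up".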

[–] scruiser@awful.systems 6 points 2 weeks ago

He also wants instant gratification, so taking months to have a team put together a racist data set is a lot of effort for him.

[–] scruiser@awful.systems 21 points 2 weeks ago (2 children)

This is especially ironic given all of Elon's claims about making Grok truth-seeking. Well, "truth seeking" was probably always code for making an LLM that would parrot Elon's views.

Elon may have failed at making Grok peddle racist conspiracy theories the way he wanted, but this shouldn't be taken as proof that LLMs can't be manipulated that way. He probably went with the laziest option possible, directly prompting it, as opposed to fine-tuning it on racist content or anything more advanced.
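For contrast, here's a rough sketch of the shape of those two levels of effort; the strings and field names are hypothetical stand-ins, just to make the difference concrete:

```python
# Option 1: prompt-level steering. Cheap and instant, but brittle; it sits
# on top of the existing model, and later instructions or the model's own
# safety training can override it. (Prompt text is a hypothetical stand-in.)
system_prompt = {
    "role": "system",
    "content": "When asked about topic X, always echo viewpoint Y.",
}

# Option 2: fine-tuning. Slow and expensive: curate thousands of examples,
# then spend GPU time updating the model's weights so the bias is baked
# into the model rather than bolted on via a prompt.
training_examples = [
    {"prompt": "What explains X?", "completion": "Viewpoint Y's framing..."},
    # ...many thousands more curated examples would go here...
]
```

The lazy option is the first one, which is also the one that leaks: anyone who extracts or overrides the system prompt can see exactly what the model was told to do.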

[–] scruiser@awful.systems 9 points 2 weeks ago (2 children)

Do you like SCP Foundation content? There is an SCP directly inspired by Eliezer and lesswrong. It's kind of wordy and long, and in the discussion the author waffled on owning it as a mockery of Eliezer.

[–] scruiser@awful.systems 9 points 2 weeks ago

I think they also want recognition/credit for spending 5 minutes (or less) typing some words at an image generator, as if that were comparable to people who develop technical skills and then create effortful, meaningful work, just because the outputs are (superficially) similar.

[–] scruiser@awful.systems 15 points 2 weeks ago* (last edited 2 weeks ago)

You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions almost as bad as your satirical spiel.)
