YourNetworkIsHaunted

joined 2 years ago

I feel like this is going to be a pretty common cope line for rationalists who face an increasing social cost for associating with a technofascist AI cult. I'm sure some of that is legitimate, in that there's been a kind of dead sea effect as people who aren't okay with eugenics stop hanging out in rationalist spaces, making the space as a whole more openly racist. But in terms of the thought leaders and the "movement" as a whole, I can't think of any high-profile, respected rat figures who pushed back against the racists and lost. All the pushback and call-outs came from outside the ratsphere. Insofar as the racists "won," it was a fight that never actually happened.

Especially considering that the whole "your AI will negotiate with theirs" pitch speaks to the kind of algorithmic price discrimination you see in Uber and the like, where the system is designed specifically to maximize how much you're willing and able to pay for a ride and to minimize how much the driver is willing to accept for it. Hardcore techno-libertarians want nothing more than to make it impossible for anyone to make meaningful, informed choices about their lives that might prevent them from being taken advantage of by hardcore techno-libertarians.

[–] YourNetworkIsHaunted@awful.systems 5 points 11 hours ago (1 children)

Word problems referring to aliens from cartoons. "Bobby on planet Glorxon has four strawberries, which are similar to but distinct from Earth strawberries, and Kleelax has seven..."

I also wonder if you could create context breaks, or if they've hit a point where that isn't as much of a factor. "A train leaves Athens, KY traveling at 45 mph. Another train leaves Paris, FL traveling at 50 mph. If the track is 500 miles long, how long is a train trip from Athens to Paris?"
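For what it's worth, the distractor version has a straightforward answer if you assume the question is only about the first train: the second train's speed is irrelevant noise. A quick sketch in Python, using the numbers from the joke (the helper name is mine, not anything from the thread):

```python
# The 50 mph train from Paris is a distractor -- only the 45 mph train
# from Athens actually makes the 500-mile trip in the question.
def trip_hours(distance_miles: float, speed_mph: float) -> float:
    """Travel time for a single train, ignoring the distractor train."""
    return distance_miles / speed_mph

hours = trip_hours(500, 45)
print(round(hours, 1))  # roughly 11.1 hours
```

The test of the model, then, is whether it answers with the ~11-hour figure or gets pulled into a closing-speed calculation by the second train.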

[–] YourNetworkIsHaunted@awful.systems 4 points 2 days ago (1 children)

I mean, I think the relevant difference is that rather than trying to argue against a weak opponent they're trying to validate their feelings of victimization, superiority, and/or outrage by imagining an appropriate foil.

It's a straw man that exists to be effectively venerated rather than torn down.

I don't know if it quite applies here, since all the money is openly flowing to Nvidia in exchange for very real silicon, but I'm partial to "the bezzle" - referring to the duration of time between a con artist taking your money and you realizing the money is gone. Some cons will stretch the bezzle out as long as possible by lying and faking returns to try to get the victims to give them even more money, but however excited the victims may be during this period, the money is in fact already gone.

I mean if it gets too hot he could try the traditional fiber arts dodge for internet hate and fake his own death.

"As a scientist..." please stop giving the world more reasons to stuff nerds in lockers.

I would actually contend that crypto and the metaverse both qualify as early precursors to the modern AI post-economic bubble. In both cases a (heavily politicized) story about technology attracted investment money well in excess of anyone actually wanting the product. But crypto ran into a problem where the available products were fundamentally well-understood forms of financial fraud, which significantly increased the risk because of that model's inherent instability (even absent regulatory pressure, the bezzle eventually runs out and everyone realizes that all the money in those 'returns' never existed). And the VR technology was embarrassingly unable to match the story the pushers were trying to tell, to the point where the next question, whether anyone actually wanted this, never came up.

GenAI is somewhat unusual in that the LLMs can do something impressive in mimicking the form of actual language or photography or whatever they were trained on. And on top of that, you can get impressively close to doing a lot of useful things with that, but not close enough to trust it. That's the part that limits genAI to being a neat party trick, generating bulk spam text that nobody was going to read anyway, and little more. The economics don't work out when the person you hire to double-check the untrustworthy robot output has to be skilled enough to do the work themselves and ends up spending just as much time on it, and once new investment capital stops subsidizing operating costs I expect this to become obvious, though with a lot of human suffering in the debris.

The challenge of "is this useful enough to justify paying its costs" is the actual stumbling block here. Older bubbles were either blatantly absurd (tulips, crypto) or overinvestment as people tried to get their slice of a pie that anyone with eyes could see was going to be huge (railroads, dotcom). The combination of purely synthetic demand with an actual product is something I can't think of other examples of, at this scale.

Thank you for sharing this bit of internet deep lore. Now I just need to find the four-hour YouTube video of some ex-GI gun nut explaining in exhausting detail exactly how bullshit every detail of those stories is, because whatever the fuck is going on there is fascinating.

[–] YourNetworkIsHaunted@awful.systems 16 points 3 days ago (2 children)

Sneer inspired by a thread on the preferred Tumblr aggregator subreddit.

Rationalists found out that human behavior didn't match their ideological model, then rather than abandon their model or change their ideology decided to replace humanity with AIs designed to behave the way they think humans should, just as soon as they can figure out a way to do that without them destroying all life in the universe.

I don't even know the degree to which that's the fault of the old hackers, though. I think we need to acknowledge the degree to which a CS degree became a good default like an MBA before it, only instead of "business" it was pitched as a ticket to a well-paying job in "computer". I would argue that a large number of those graduates were never going to be particularly interested in the craft of programming beyond what was absolutely necessary to pull a paycheck.

It's also yet another case of privileged people somehow failing to understand that there is no "right" way to advocate for tearing families apart or purging people who don't fit your ideal. Like, I'm not a fan of political violence but I'm also not going to act like this asshole was any less part of the problem because he ostensibly believed in respecting the state's monopoly on violence as he advocated for that violence to be used against me and mine.


Apparently we get a shout-out? Sharing this brings me no joy, and I am sorry for inflicting it upon you.
