Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 10 points 1 week ago (1 children)

In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal.

Wow it's almost like alignment and AI ethics studies is less a serious academic field and more like a prank capital likes to play on consumers.

But I also think Zhao Tingyang's take that alignment will make AI evil because people are evil falls too much into the the-people-deserve-to-be-disempowered totalitarian state funny business side of things to be especially influential down these parts.

[–] Architeuthis@awful.systems 3 points 1 week ago (1 children)

I assume that's more because they lifted Zootopia's art style wholesale, and so that's just how rabbits are now.

[–] Architeuthis@awful.systems 6 points 1 week ago* (last edited 1 week ago)

It's not so much that he fails the purity test as that he thinks all gen-AI works like whisper on a laptop and open source LLMs grow on trees.

edit: and also that big company engineers are aware of the limitations and are encouraged to be discreet in using them, like what

[–] Architeuthis@awful.systems 4 points 1 week ago (3 children)

CEV is what he would want if he were wiser and less confused

Isn't that just steelmanning?

I gathered the "idealized version of myself" was because it's supposed to be applied to a superintelligence, because of course it's an alignment thing.

[–] Architeuthis@awful.systems 9 points 1 week ago* (last edited 1 week ago) (6 children)

I checked it out because I was curious if CEV was some international relations initialism I'd never heard of, turns out it's just My Guess About What He Wants in rationalese.

Excerpt from the definition of Coherent Extrapolated Volition, or how to damage your optic nerve from too much eye rolling:

Extrapolated volition is the metaethical theory that when we ask "What is right?", then insofar as we're asking something meaningful, we're asking "What would a counterfactual idealized version of myself want if it knew all the facts, had considered all the arguments, and had perfect self-knowledge and self-control?" (As a metaethical theory, this would make "What is right?" a mixed logical and empirical question, a function over possible states of the world.)

A very simple example of extrapolated volition might be to consider somebody who asks you to bring them orange juice from the refrigerator. You open the refrigerator and see no orange juice, but there's lemonade. You imagine that your friend would want you to bring them lemonade if they knew everything you knew about the refrigerator, so you bring them lemonade instead. On an abstract level, we can say that you "extrapolated" your friend's "volition", in other words, you took your model of their mind and decision process, or your model of their "volition", and you imagined a counterfactual version of their mind that had better information about the contents of your refrigerator, thereby "extrapolating" this volition.

[–] Architeuthis@awful.systems 5 points 1 week ago

Richard Lynn was right

Ah yes, the "everyone in the continent of Africa and parts of Asia is secretly heavily developmentally disabled, and my friend Cremieux, who's definitely a highly accredited biologistician and not a college dropout who's also a nazi, thinks so as well" post.

Re the incel stuff I think the regulars grew older so it doesn't come up as much outside the comments, which remain a safe space for this type of whining.

It's not really extricable from the eugenics-inspired bioessentialism that's encouraged there, I think.

[–] Architeuthis@awful.systems 5 points 1 week ago* (last edited 1 week ago)

The MCP thing feels like an I like to leave my keys as a huge bulge under the welcome mat type vulnerability. It seems really easy to not do that and also something that is kind of out of scope for both lock makers and mat salesmen to address directly.

Maybe the MCP ecosystem is such that it's hard to both avoid this and keep the impression that you're doing magic and not just implementing a heavily annotated API, hopefully secured and with specific and well-defined functionality, and also they are all hacks.

[–] Architeuthis@awful.systems 5 points 1 week ago

Back in the old days, if you got found out to be a race science and men's rights internet instigator, it was possible it might actually have negative real-life implications.

Sigh.

[–] Architeuthis@awful.systems 5 points 2 weeks ago

Rationalists tend to lean more towards anime villain than Bond villain, but yeah.

[–] Architeuthis@awful.systems 4 points 2 weeks ago (2 children)

just one more data trove bro

Are new data-hungry players entering the market, or are we still pretending that shoveling more social media posts into the data furnace will somehow overcome structural limitations?

[–] Architeuthis@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

Unless he specifies his problem was with ostensibly leftist academics being specifically too dismissive of race science and incelist tropes this is worthless, just run of the mill face-leopard schadenfreude.

Also the second half (the what? what's the cut-off point?) of his career has been if anything more mask off, and it's not like he stopped whining about woke after posting a half-hearted disapproval of trump like three days before the election after years of writing about how cool it would be if there was less regulation especially for healthcare.

[–] Architeuthis@awful.systems 5 points 2 weeks ago (1 children)

Reading Heinlein as a kid isn't even especially notable, but it's Yud so he definitely means the polyamory advocacy stuff specifically.

 

For Thursday's sentencing, the US government indicated they would be happy with a 40-50 year prison sentence, and in the list of reasons they cite there's this gem:

  1. Bankman-Fried's effective altruism and own statements about risk suggest he would be likely to commit another fraud if he determined it had high enough "expected value". They point to Caroline Ellison's testimony in which she said that Bankman-Fried had expressed to her that he would "be happy to flip a coin, if it came up tails and the world was destroyed, as long as if it came up heads the world would be like more than twice as good". They also point to Bankman-Fried's "own 'calculations'" described in his sentencing memo, in which he says his life now has negative expected value. "Such a calculus will inevitably lead him to trying again," they write.

Turns out making it a point of pride that you have the morality of an anime villain does not endear you to prosecutors, who knew.
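For anyone who wants to see why the prosecutors found that coin-flip quote so damning, here's the naive expected-value arithmetic behind it as a toy sketch (the specific numbers are mine, chosen only to match the "more than twice as good" phrasing):

```python
# Toy illustration (numbers are assumptions of mine): the naive
# expected-value arithmetic behind the "double-or-nothing" coin flip
# the prosecutors quote from Ellison's testimony.
def expected_value(p_heads: float, value_heads: float, value_tails: float) -> float:
    """Plain expected value of a two-outcome gamble."""
    return p_heads * value_heads + (1 - p_heads) * value_tails

current_world = 1.0      # value of the world as-is (arbitrary units)
doubled_world = 2.1      # heads: world "more than twice as good"
destroyed_world = 0.0    # tails: world destroyed

ev = expected_value(0.5, doubled_world, destroyed_world)
# ev = 1.05 > 1.0, so a naive EV-maximizer takes the bet -- and keeps
# taking it on every repeat, which guarantees eventual ruin.
print(ev)
```

Which is of course exactly the prosecutors' point: the calculus says "try again" every single time.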

Bonus: SBF's lawyers' list of assertions for asking for a shorter sentence includes this hilarious bit of reasoning:

They argue that Bankman-Fried would not reoffend, for reasons including that "he would sooner suffer than bring disrepute to any philanthropic movement."

 

rootclaim appears to be yet another group of people who, having stumbled upon the idea of Bayes' rule as a good enough alternative to critical thinking, decided to try their luck at becoming a Serious and Important Arbiter of Truth in a Post-Mainstream-Journalism World.

This includes a Randi-esque challenge that they'll take a $100K bet that you can't prove them wrong on a select group of topics they've done deep dives on, like whether the 2020 election was stolen (91% nay) or whether covid was man-made and leaked from a lab (89% yay).

Also their methodology yields results like 95% certainty on Usain Bolt never having used PEDs, so it's not entirely surprising that the first person to take their challenge appears to have wiped the floor with them.

Don't worry though, they have taken the results of the debate to heart and according to their postmortem blogpost they learned many important lessons, like how they need to (checks notes) gameplan against the rules of the debate better? What a way to spend 100K... Maybe once you've reached a conclusion using the Sacred Method changing your mind becomes difficult.

I've included the novel-length judges' opinions in the links below, where a cursory look indicates they are notably less charitable towards rootclaim's views than their postmortem suggests, pointing at stuff like logical inconsistencies and the inclusion of data that on closer look appears basically irrelevant to the thing they are trying to model probabilities for.
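The mechanism behind that "irrelevant data" complaint is easy to demonstrate. A toy sketch (all numbers mine, not rootclaim's actual figures): in odds-form Bayesian updating, stacking enough barely-relevant evidence items, each given a mild likelihood ratio toward the pet hypothesis, drives the posterior to rootclaim-style ~90% certainty no matter how weak each item is.

```python
# Toy sketch (numbers are assumptions of mine): naive Bayes-rule
# updating in odds form, where each piece of "evidence" is assigned a
# mild likelihood ratio favoring the pet hypothesis. Stack ten shrugs
# and you get ~93% certainty.
def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x product of LRs."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0                # 50/50 to start
evidence = [1.3] * 10      # ten items, each only weakly "supporting"
odds = posterior_odds(prior, evidence)
prob = odds / (1 + odds)
print(f"{prob:.0%}")       # ~93% certainty from ten weak data points
```

The math is fine; the garbage-in problem is that nothing in it checks whether any of those likelihood ratios deserved to be different from 1.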

There's also like 18 hours of video of the debate if anyone wants to really get into it, but I'll tap out here.

ssc reddit thread

quantian's short writeup on the birdsite, will post screens in comments

pdf of judge's opinion that isn't quite book length, 27 pages, judge is a microbiologist and immunologist PhD

pdf of other judge's opinion that's 87 pages, judge is an applied mathematician PhD with a background in mathematical virology -- despite the length this is better organized and generally way more readable, if you can spare the time.

rootclaim's post mortem blogpost, includes more links to debate material and judge's opinions.

edit: added additional details to the pdf descriptions.

 

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’
