[-] imadabouzu@awful.systems 16 points 1 week ago

I don't get it. If scaling is all you need, what does a "cracked team" of 5 mean in the end? Nothing?

What's the difference between super intelligence being scaling, and super intelligence being whatever happens? Can someone explain to me the difference between what is and what SUPER is? When someone gives me the definition of super intelligence as "the power to make anything happen," I always beg, again, "and how is that different, precisely, from not that?"

The whole project is tautological.

[-] imadabouzu@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago)

This kind of thing is a fluff piece, meant to be suggestive but ultimately saying nothing at all. There are many reasons to hate Bostrom, just read his words, but this is two philosophers who apparently need attention because they have nothing useful to say. All of Bostrom's points here could be summed up as "don't piss on things, generally speaking."

As for consciousness. Honestly, my brain turns off instantly when someone tries to make any point about consciousness. Seriously though, does anyone actually use the category of "conscious / unconscious" to make any decision?

I don't disrespect the dead (not conscious). I don't bother animals or insects when I have no business with them (conscious? maybe not conscious?). I don't treat my furniture or clothes like shit, and am generally pleased they exist (not conscious). When encountering something new or unusual, I just ask myself, "is it going to bite me?" first (consciousness is irrelevant). I know some of my actions do harm, either directly or indirectly, to other things, such as eating, or consuming, or making mistakes, or being. But I don't assume myself a hero or arbiter of moral integrity; I merely acknowledge and do what I can. Again, consciousness is kind of irrelevant.

Does anyone run consciousness litmus tests on their friends or associates first before interacting, ever? If so, does it sting?

[-] imadabouzu@awful.systems 17 points 3 weeks ago

Yeah, this lines up with what I have heard, too. There is always talk of new models, but even the stuff in the pipeline not yet released isn't that distinguishable from the existing stuff.

The best explanation of strawberry is that it isn't any particular thing; it's rather a marketing and project framing, both internal and external, that amounts to... cost optimization and hype-driving. Shift the goal posts, tell two stories. One is that if we just get it affordable enough, genAI in a loop really can do everything (probably much more modest: when genAI gets cheap enough by several means, it'll have several more modest and generally useful use cases, and also won't have to be so legally grey). The other is that we're already there, and one day you'll wake up and your brain won't be good enough to matter anymore, or something.

Again, this is apparently the future of software releases. :/

[-] imadabouzu@awful.systems 12 points 3 weeks ago

Their minds are open to all ideas, so long as the idea is a closed form solution that looks edgy.

[-] imadabouzu@awful.systems 20 points 3 weeks ago

I kind of wonder if this whole movement of rationalists believing they can "just" make things better than people already in the field comes from the creeping sense that being rich and having an expensive educational background may in fact matter less in the future than having background experience and situational context, two things they loathe?

[-] imadabouzu@awful.systems 17 points 4 weeks ago

It's... it's almost as if the law about shareholder value was intended as a metaphor for accountability, not a literal, reductive claim that results in an ouroboros. Almost like our economic system is supposed to be a means, not an end in and of itself?

No. Definitely can't be that.

[-] imadabouzu@awful.systems 21 points 4 weeks ago

If they squeeze this rock hard enough, maybe it'll bleed.

[-] imadabouzu@awful.systems 11 points 1 month ago* (last edited 1 month ago)

I'm ok with this, because every time Nick Bostrom's name is used publicly to defend anything, and then I show people what Nick Bostrom believes and writes, I robustly get a, "What the fuck is this shit? And these people are associated with him? Fuck that."

[-] imadabouzu@awful.systems 9 points 1 month ago

It can't stop the usage, but it can raise the cost of doing so by bringing legal risk to operations operating in a public way. It can create precedent that can be built upon by others.

Politics and law move slower than, and behind, the things they attempt to regulate, by design. Which is good; the alternative is a surveillance state! But they definitely can arrange themselves to punish, or raise the risk profile of, doing something in a certain patterned way.

[-] imadabouzu@awful.systems 10 points 1 month ago

Kurzgesagt

Yeah, I'm not surprised. Kurzgesagt has always had that sort of forced, fragile veneer of optimism and scientific inquiry that can only be described as "all I can imagine about the future is what I read about in the 60s".

[-] imadabouzu@awful.systems 12 points 1 month ago

It's the same story as it has ever been. "Smart People"'s position on anything is often informed by their current economic relationship with respect to the things they care about. And maybe even Yud isn't super happy about his profession being co-opted. What scraps would he have left if his own delusions came true about GPT zombies replacing "authentic voices"?

No one is immune to seeing a better take when it's their shit on the line, and no one is immune from being in a bubble without a stake.

[-] imadabouzu@awful.systems 10 points 1 month ago

Why so general? The multi-agent dynamical systems theory needed to heal internal conflicts such as auto-immune disorders may not be so different from those needed to heal external conflicts as well, including breakdowns in social and political systems.

This isn't an answer to the question "why so general?" This is aspirational philosophical goo. "Multi-agent dynamical systems theory" => you mean any theory that takes a composite view of a larger system? Like Chemistry? Biology? Physics? Sociology? Economics? "Why so general" may as well be "why so uncommitted?"

I feel Bayesian rationalism has basically missed the point of inference and immediately fallen into the regression-to-the-mean trap of "the general answer to any question shouldn't say anything in particular at all."
