submitted 5 months ago* (last edited 5 months ago) by BigMuffin69@awful.systems to c/sneerclub@awful.systems

top 13 comments
[-] blakestacey@awful.systems 1 points 5 months ago

Quoth Yud:

There is a way of seeing the world where you look at a blade of grass and see "a solar-powered self-replicating factory". I've never figured out how to explain how hard a superintelligence can hit us, to someone who does not see from that angle. It's not just the one fact.

It's almost as if basing an entire worldview upon a literal reading of metaphors in grade-school science books and whatever Carl Sagan said just after "these edibles ain't shit" is, I dunno, bad?

[-] corbin@awful.systems 1 points 5 months ago

He's talking like it's 2010. He really must feel like he deserves attention, and it can't be fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he's lining the floors of hamster cages.

[-] Collectivist@awful.systems 0 points 5 months ago

He wanted to be the foundation, but he was scaffolding

That's a good quote, did you come up with that? I for one would be ecstatic to be the scaffolding of a research field.

[-] corbin@awful.systems 1 points 5 months ago

That's 100% my weird late-night word choices. You can reuse it for whatever.

I agree with your sentiment, but the wording is careful. Scaffolding is inherently temporary; it is only erected in service of some further goal. What I wanted to get across is that Yud's philosophical world was never going to be a permanent addition to any field of science or maths, for lack of any scientific or formal content. It was always a farfetched alternative, fueled by science-fiction stories and contingent on a technological path that never came to be.

Maybe an alternative metaphor is that Yud wanted to develop a new kind of solar panel by reinventing electrodynamics, and started by putting his ladder against his siding and climbing up to his roof to call the aliens down to reveal their secrets. A decade later, the ladder sits fallen and moss-covered, but Yud is still up there, trapped by his ego, ranting to anybody who will listen and throwing rocks at the contractors installing solar panels on his neighbors' houses.

[-] Shitgenstein1@awful.systems 1 points 5 months ago

A year and two and a half months since his Time magazine doomer article.

No shutdowns of large AI training - in fact, it has only expanded. No ceiling on compute power. No multinational agreements to regulate GPU clusters or first-strike rogue datacenters.

Just another note in a panic that accomplished nothing.

[-] fartsparkles@sh.itjust.works 0 points 5 months ago

It’s also a bunch of brainfarting drivel that could be summarized as:

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should figure out how to build effective safety measures first.

Or

Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

[-] Architeuthis@awful.systems 0 points 5 months ago

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should figure out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

[-] barsquid@lemmy.world 0 points 5 months ago

This guy is going to be very upset when he realizes that there is no absolute morality.

[-] AcausalRobotGod@awful.systems 0 points 5 months ago

A good chunk of philosophers do believe there are moral facts, but this is less useful for these purposes than one would think.

[-] froztbyte@awful.systems 1 points 5 months ago

yeah it’s been absolutely hilarious to watch this play out in LLM space. so many prompt configurations and model deployments with so very many string-based rule inputs, meant to be configuring inviolable behaviour, that still get egregiously broken

and afaict none of the dipshits have really seemed to internalise that just maybe their approach isn’t working

[-] carlitoscohones@awful.systems 1 points 5 months ago

Starting a wall of text with a non sequitur is a bold strategy. I cannot follow his 9/11 logic at all.

[-] Soyweiser@awful.systems 0 points 5 months ago* (last edited 5 months ago)

We get it, we just don't agree with the assumptions made. Also love that he is now broadening the paperclips thing into more things, missing that the point of the paperclip example was to abstract away from the specific wording of the utility function (like disaster-prep people preparing for zombie invasions: the specific incident doesn't matter much for the important things you want to test). It is quite dumb. Did somebody troll him by saying 'we will just make the LLM not make paperclips bro', and did it break him so badly that he is now replying up his own ass with this talk about alien minds?

e: depressing seeing people congratulate him for a good take. Also "could you please start a podcast". (A Schrödinger's sneer)

[-] BigMuffin69@awful.systems 1 points 5 months ago* (last edited 5 months ago)

did somebody troll him by saying ‘we will just make the LLM not make paperclips bro?’

rofl, I cannot even begin to fathom all the 2010 era LW posts where peeps were like, "we will just tell the AI to be nice to us uwu" and Yud and his ilk were like "NO DUMMY THAT WOULDNT WORK B.C. X Y Z." Fast fwd to 2024: the best example we have of an "AI system" turns out to be the blandest, milquetoast yes-man entity due to RLHF (aka, the "just tell the AI to be nice bruv" strat). Worst of all for the rats, no examples of goal-seeking behavior or instrumental convergence. It's almost like the future they conceived on their little blogging site has very little in common with the real world.

If I were Yud, the best way to salvage this massive L would be to say "back in the day, we could not conceive that you could create a chat bot good enough to fool people with its output by compressing the entire internet into what is essentially a massive interpolative database, but ultimately, these systems have very little to do with the sort of agentic intelligence that we foresee."

But this fucking paragraph:

(If a googol monkeys are all generating using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy anytime somebody talks about how LLMs are being trained on human text and therefore are much more likely to end up with human values; an improbable outcome can be rendered “much more likely” while still being not likely enough.)

ah, the sweet, sweet aroma of absolute copium. Don't believe your eyes and ears people, LLMs have everything to do with AGI and there is a smol bean demon inside the LLMs that is catastrophically misaligned with human values that will soon explode into the super intelligent lizard god the prophets have warned about.
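For anyone who hasn't seen one, the "letter-triplet Markov chain" Yud invokes is about the simplest text model there is. A minimal sketch (the tiny corpus and all names here are made up for illustration): you record which letter follows each two-letter context, then random-walk through those statistics. The output is English-flavored noise, which is exactly why it proves nothing about models trained with wholly different objectives:

```python
import random
from collections import defaultdict

def build_trigram_model(text):
    """Map each two-character context to the letters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - 2):
        model[text[i:i+2]].append(text[i+2])
    return model

def generate(model, length, seed=0):
    """Random-walk the model: produces corpus-flavored gibberish, not Shakespeare."""
    rng = random.Random(seed)
    context = rng.choice(list(model))
    out = list(context)
    for _ in range(length - 2):
        followers = model.get("".join(out[-2:]))
        if not followers:  # dead-end context: restart from a random one
            context = rng.choice(list(model))
            out.extend(context)
            continue
        out.append(rng.choice(followers))
    return "".join(out)

# Toy corpus, purely for demonstration.
corpus = "to be or not to be that is the question"
model = build_trigram_model(corpus)
sample = generate(model, 40)
print(sample)  # locally plausible letter sequences, globally meaningless
```

Every character it emits is statistically "human text", and it will still never write Hamlet; whether that analogy transfers to LLMs at all is the part Yud asserts rather than argues.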

this post was submitted on 15 Jun 2024