sinedpick

joined 2 years ago
[–] sinedpick@awful.systems 11 points 1 week ago (1 children)

a hackernews vibe-codes their entire desktop environment, half in rust and half in ... x86 assembly. I'm thinking why waste the tokens on assembly and not just get the LLM to spit out machine code? Maybe also invent some kind of standardized way of telling the LLM what sequence of machine code instructions to spit out based on the behavior of the software I want, you know, to save tokens. We can call it "GCC", the "generalized computer controller".

[–] sinedpick@awful.systems 7 points 2 months ago (1 children)

can all of rationalism be reduced to logorrhea with load-bearing extreme handwaving (in this case, agentic self preservation arises through RL scaling)?

[–] sinedpick@awful.systems 12 points 4 months ago (7 children)

got my Urbit newsletter for this quarter (or whatever the fuck the cadence is) and what stood out to me this time was nockchain.org. I was going to sit and do a deep dive to come up with sneers for this but I just don't have the executive function right now. @self thoughts?

[–] sinedpick@awful.systems 12 points 4 months ago

Sean Munger, my favorite history YouTuber, has released a 3-hour long video on technology cultists from railroads all the way to LLMs. I have not watched this yet but it is probably full of delicious sneers.

[–] sinedpick@awful.systems 8 points 5 months ago* (last edited 5 months ago) (1 children)

this is very fashtech coded, happy to be proven wrong though.

[–] sinedpick@awful.systems 6 points 5 months ago

Blog von Marcus Seyfarth, LL.M.

LOL

[–] sinedpick@awful.systems 14 points 5 months ago* (last edited 5 months ago)

I think we already lost the plot when we started relying on a centralized entity (Google) to "index the world's information and make it useful". Ad-tech already fucked up all of the incentives, making recipe sites fill their pages with bullshit in hopes of wiping my eyeballs with messages from third parties hungry for attention. I fucking hate this world.

[–] sinedpick@awful.systems 6 points 6 months ago* (last edited 6 months ago)

I don't doubt you could effectively automate script kiddie attacks with Claude code. That's what the diagram they have seems to show.

The whole bit about "oh no, the user said weird things and bypassed our imaginary guard rails" is another admission that "AI safety" is a complete joke.

We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response.

there it is.

Does this article imply that Anthropic is monitoring everyone's Claude Code usage to see if they're doing naughty things? Other agents and models exist, so whatever safety bullshit they have is pure theater.

[–] sinedpick@awful.systems 13 points 6 months ago (1 children)

First comment: "the world is bottlenecked by people who just don't get the simple and obvious fact that we should sort everyone by IQ and decide their future with it"

No, the world is bottlenecked by idiots who treat everything as an optimization problem.

[–] sinedpick@awful.systems 4 points 6 months ago

wait this isn't a joke this is a yc funded startup

[–] sinedpick@awful.systems 15 points 6 months ago (11 children)

Ugh. Hank Green just posted a 1-hour interview with Nate Soares about That Book. I'm halfway through on 2x speed and so far zero skepticism of That Book's ridiculous premises. I know it's not his field but I still expected a bit more from Hank.

A YouTube comment says it better than I could:

Yudkowsky and his ilk are cranks.

I can understand being concerned about the problems with the technology that exist now, but hyper-fixating on an unfalsifiable existential threat is stupid as it often obfuscates from the real problems that exist and are harming people now.

[–] sinedpick@awful.systems 8 points 7 months ago* (last edited 7 months ago)

Nothing screams "celebration of creativity" like a nice heaping tablespoon of AI slop images.
