nightsky

joined 1 year ago
[–] nightsky@awful.systems 7 points 9 hours ago (2 children)

Very impressed with this comment from the creator of the Zig programming language about dealing with AI slop submissions, and about LLMs for coding in general.

I should look into Zig again! Technically, I've always leaned more towards Rust, because I like its more uncompromising approach to safety, while Zig has always seemed a bit more middle-of-the-road to me on that. But I've been disappointed by how widespread LLM usage has become in Rust circles, and I fear that its culture might tip over in favor of slop. (But it's not there yet, and I hope it won't happen!)

Anyway, I'm ordering the "Introduction to Zig" book...

[–] nightsky@awful.systems 3 points 1 day ago* (last edited 1 day ago) (1 children)

This could be regarded as a neat fun hack, if it weren't built by appropriating the entire world of open source software while also destroying the planet with obscene energy and resource consumption.

And not only do they do all that… the result is also presented by those who wish this to be the future of all software. But for that, a “neat fun hack” just isn’t enough.

Can LLMs produce software that kinda works? Sure, that’s not new. Just like LLMs can generate books with correct grammar that are vaguely about a given theme. But is such a book worth reading? No. And is this compiler worth using? Also no.

(And btw, this approach only works with an existing good compiler as “oracle”, so forget about doing that to create a new compiler for a new language. Besides, there’s certainly no other language with as many compilers as C, which provide plenty of material for the training set.)
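
To make the “oracle” point concrete, here is a minimal sketch (my own illustration, not from the linked project; the compiler path and test file name are made up) of the differential testing such an approach relies on: compile the same program with a trusted existing compiler and with the candidate, then compare behaviour.

```python
# Hypothetical sketch: differential testing of a candidate C compiler
# against an existing, trusted compiler acting as the "oracle".
# Compiler paths and the test program are assumptions for illustration.
import subprocess
import sys

ORACLE_CC = "gcc"           # existing, known-good compiler
CANDIDATE_CC = "./llm-cc"   # hypothetical generated compiler
TEST_SOURCE = "test_case.c" # some self-contained test program

def compile_and_run(cc: str, out: str) -> str:
    """Compile TEST_SOURCE with the given compiler and return the program's stdout."""
    subprocess.run([cc, TEST_SOURCE, "-o", out], check=True)
    result = subprocess.run([out], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    expected = compile_and_run(ORACLE_CC, "./ref_bin")
    actual = compile_and_run(CANDIDATE_CC, "./candidate_bin")
    if expected != actual:
        sys.exit(f"mismatch against oracle:\n{expected!r}\nvs\n{actual!r}")
    print("candidate matches oracle on this test case")
```

Without the trusted compiler on the right-hand side of that comparison there is nothing to check the generated one against, which is exactly why this doesn’t transfer to a brand-new language.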

[–] nightsky@awful.systems 10 points 2 days ago

there isn’t a simple solution to this

How about just not creating the problem in the first place. How about that.

[–] nightsky@awful.systems 3 points 4 days ago (1 children)

This was very enjoyable! Actually, it was over too quickly; I would have liked to hear you two talk about AI stuff more.

[–] nightsky@awful.systems 4 points 5 days ago

When Woke 2 comes

ooh, please tell me there's a release date already

[–] nightsky@awful.systems 6 points 6 days ago

Thanks everyone for the replies <3 Guess I should make an account there after all… bleeeh :/

[–] nightsky@awful.systems 6 points 6 days ago (5 children)

Honest question, since I’m not on linkedin (and kinda looking for a new job): does it really help anyone find a job? It has been my impression from the outside that it’s mostly empty drivel.

[–] nightsky@awful.systems 8 points 6 days ago (7 children)

I’m confused that anyone thinks that the world needs another linkedin…

[–] nightsky@awful.systems 8 points 1 week ago

Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.

[–] nightsky@awful.systems 13 points 1 week ago (4 children)

When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable

During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.

Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.

Source
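
To make the quoted error concrete, here is a minimal sketch of just the key handling (toy data only, using Python's cryptography library; nothing here is taken from the actual malware):

```python
# Sketch of the key-handling error described above, on toy data:
# a fresh RSA key pair is generated at run time, the public key encrypts
# the data, and the private key is simply dropped instead of being kept
# by the operators, so nobody can ever decrypt the result.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair generated locally at execution time.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

ciphertext = public_key.encrypt(
    b"victim data (toy placeholder)",
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# The fatal step: the private key is never exported or transmitted anywhere.
# After this, the ciphertext cannot be decrypted by anyone, ransom or not.
del private_key
```

Ransomware that actually intends to extort payment typically encrypts with a public key whose private counterpart stays with the operators; here the only copy of the private key is generated on the victim's machine and then thrown away, so no decryptor can ever exist.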

[–] nightsky@awful.systems 5 points 1 week ago (1 children)

Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

I wouldn't go so far as to use the "AI psychosis" term here; I think the difference is more than quantitative. One is influence, maybe even manipulation, while the other is a serious mental health condition.

I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don't believe that's a weakness of human psychology, but that it's what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I'm sure it's not good. Time for me to be a total dork and cite an anime quote on human interaction: "I create them as they create me" -- except that with LLMs, it actually goes only in one direction... the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.

while atrophying empathy

To me, this possibility is actually the scariest part of your post.

[–] nightsky@awful.systems 8 points 1 week ago

Hope you have a quick recovery! It sucks that society has collectively given up on trying to mitigate its spread.
