nightsky

joined 1 year ago
[–] nightsky@awful.systems 3 points 14 hours ago

Thanks everyone for the replies <3 Guess I should make an account there after all… bleeeh :/

[–] nightsky@awful.systems 3 points 19 hours ago (4 children)

Honest question, since I’m not on linkedin (and kinda looking for a new job): does it really help anyone find a job? It has been my impression from the outside that it’s mostly empty drivel.

[–] nightsky@awful.systems 5 points 20 hours ago (6 children)

I’m confused that anyone thinks that the world needs another linkedin…

[–] nightsky@awful.systems 8 points 2 days ago

Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.

[–] nightsky@awful.systems 13 points 3 days ago (4 children)

When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable

During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.

Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.

Source
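To see why discarding the private key is fatal, here is a toy sketch in Python using the classic textbook RSA parameters (tiny illustrative primes, not anything from the actual malware): encryption needs only the public key, so the ransomware runs fine, but once the private exponent is thrown away, decryption is impossible without factoring the modulus.

```python
# Toy textbook RSA -- illustration of the implementation error, not real crypto.
p, q = 61, 53
n = p * q        # modulus (3233); public
e = 17           # public exponent
d = 2753         # private exponent: e*d ≡ 1 (mod lcm(p-1, q-1))

plaintext = 65
ciphertext = pow(plaintext, e, n)   # encryption uses ONLY the public key

# The flaw described above: the private key is discarded after encrypting.
d = None

# Recovery would require pow(ciphertext, d, n), but d is gone. At toy sizes
# you could refactor n = 61 * 53 and rederive d; at real key sizes (RSA-2048)
# factoring is infeasible, so the victims' data is unrecoverable by anyone.
```

With these textbook parameters, `pow(65, 17, 3233)` gives 2790, and `pow(2790, 2753, 3233)` recovers 65 -- but only while `d` still exists.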

[–] nightsky@awful.systems 5 points 4 days ago (1 children)

Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

I wouldn't go so far as to use the "AI psychosis" term here; I think the difference is more than quantitative. One is influence, maybe even manipulation, but the other is a serious mental health condition.

I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don't believe that's a weakness of human psychology, but that it's what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I'm sure it's not good.

Time for me to be a total dork and cite an anime quote on human interaction: "I create them as they create me" -- except that with LLMs, it actually goes only in one direction... the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.

while atrophying empathy

This possibility is to me actually the scariest part of your post.

[–] nightsky@awful.systems 8 points 4 days ago

Have a speedy recovery! It sucks that society has collectively given up on trying to mitigate its spread.

[–] nightsky@awful.systems 6 points 5 days ago

Are you trying to say that you are not regularly thinking about the meta level of evidence convergence procedures?

[–] nightsky@awful.systems 11 points 6 days ago* (last edited 6 days ago) (3 children)

The AI craze might end up killing graphics card makers:

Zotac SK's message: "(this) current situation threatens the very existence of (add-in-board partners) AIBs and distributors."

The situation is serious enough to threaten the continued existence of graphics card manufacturers and distributors. They announced that memory supply will be insufficient and that GPU supply will also be reduced.

Curiously, Zotac Korea has included lowly GeForce RTX 5060 SKUs in its short list of upcoming "staggering" price increases.

(Source)

I wonder whether the AI companies realize how many people will be really pissed off at them when so many tech-related things become expensive or even unavailable, with everyone knowing it's only because of useless AI data centers.

[–] nightsky@awful.systems 6 points 1 week ago

Bitcoin jesus

I thought you were making a sneer, but it turns out that's an actual name

[–] nightsky@awful.systems 22 points 1 week ago (3 children)

If you watch the video, your brain will leak out your ears. But we watched it so you don’t have to.

Thank you for the invaluable service.

[–] nightsky@awful.systems 6 points 2 weeks ago

A while ago I wanted to make a doctor appointment, so I called them and was greeted by a voice announcing itself as "Aaron", an AI assistant, and that I should tell it what I want. Oh, and it mentioned some URL for their privacy policy. I didn't say a word and hung up and called a different doctor, where luckily I was greeted by a human.

I'm a bit horrified that this might spread and in the future I'd have to tell medical details to LLMs to get appointments at all.
