antifuchs

joined 2 years ago
[–] antifuchs@awful.systems 2 points 2 hours ago

If Tate loses his job, who’s to say if that’s good or bad

(No but really it’s bad, very bad)

[–] antifuchs@awful.systems 1 point 9 hours ago (1 children)

That can’t be entirely it. Do you call Alexa the voice assistant tool “she”?

[–] antifuchs@awful.systems 4 points 13 hours ago (2 children)

Oh man, if genAI destroys the profession of toxic masculinity podcaster watch me turn booster on a dime

[–] antifuchs@awful.systems 7 points 23 hours ago (9 children)

That is pretty sad to see, but also, who refers to Claude as “he”? This is the second time I’m seeing this and it makes my skin crawl

[–] antifuchs@awful.systems 4 points 4 days ago* (last edited 4 days ago) (3 children)

Mildly positive news: there is a fork of the Zed editor with the llm autocomplete stuff ripped out now: https://gram.liten.app/posts/first-release/

(I’ve used zed with the ai kill switch and really like the buffer/editing ux, but it’s always felt a bit gross; I’m excited to see where the fork goes)

[–] antifuchs@awful.systems 7 points 1 week ago

Yeah, they rebranded when they did the harebrained pivot to focus on cryptocurrencies.

[–] antifuchs@awful.systems 5 points 1 week ago* (last edited 1 week ago) (2 children)

Not… sneer? What is this?!

[–] antifuchs@awful.systems 4 points 2 weeks ago (9 children)

Good news, everyone’s favorite emacs is using AI now: https://www.vim.org/vim-9.2-released.php

[–] antifuchs@awful.systems 8 points 3 weeks ago

It’s a good day to read this announcement and then field a question from a pal about why their Spotify playlist plays in reverse

[–] antifuchs@awful.systems 4 points 3 weeks ago (1 children)

Love the idea of having the plagiarism machine do compliance work. The computer takes care of everything!

[–] antifuchs@awful.systems 6 points 1 month ago (1 children)

And of all possible things to implement, they chose Matrix. lol and lmao.

 

Got the pointer to this from Allison Parrish who says it better than I could:

it's a very compelling paper, with a super clever methodology, and (i'm paraphrasing/extrapolating) shows that "alignment" strategies like RLHF only work to ensure that it never seems like a white person is saying something overtly racist, rather than addressing the actual prejudice baked into the model.

 

School student tells AI to put 20 other students’ faces on nude pictures, shares them in chat; it takes months for anyone including the school administrators to act because of some extremely, uh, dubious loophole.

If someone does that in photoshop, it’s a crime; if they do it in AI pretending to be photoshop, it’s somehow not. Gotta love this legal system’s focus on minor technicalities rather than the harm done.

 

They have Nik Suresh (the author) on, as well as Robert Evans. I haven’t listened to it all yet, but it’s fun so far.
