Everyone disliked that.

[–] invalidusernamelol@hexbear.net 21 points 3 days ago (1 children)

I think having a policy that requires disclosure of LLM-generated code is important. It's also important to establish that AI code should only ever be allowed to exist in userland/ring 3. If you can't hold the author accountable, the code should not have any permissions or be packaged with the OS.
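
A minimal sketch of what "no permissions" could mean in practice, assuming a Linux host: the parent process drops to the unprivileged `nobody` user before exec'ing the untrusted binary. The path `llm-generated-tool` is made up for illustration.

```python
import os
import pwd
import subprocess

def run_unprivileged(cmd):
    """Run cmd as the unprivileged 'nobody' user; must be started as root."""
    nobody = pwd.getpwnam("nobody")

    def drop_privileges():
        # Drop the group first: after setuid() the process no longer
        # has the right to call setgid().
        os.setgid(nobody.pw_gid)
        os.setuid(nobody.pw_uid)

    # preexec_fn runs in the child between fork() and exec().
    return subprocess.run(cmd, preexec_fn=drop_privileges)

# Hypothetical path, for illustration only.
run_unprivileged(["/usr/local/bin/llm-generated-tool"])
```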

I can maybe see using an LLM for basic triaging of issues, but I also fear that adding such a system will lead to people placing more trust in it than they should.
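
As a hedged sketch of what "basic triaging" with a human backstop could look like (every name here, `llm_label` included, is hypothetical; the keyword check just stands in for a real model call):

```python
from dataclasses import dataclass

@dataclass
class Triage:
    label: str        # e.g. "bug", "feature-request", "duplicate"
    confidence: float # model's self-reported confidence in [0, 1]

def llm_label(issue_text: str) -> Triage:
    # Placeholder standing in for the actual model call; a real
    # deployment would query whatever LLM backend the tracker uses.
    if "crash" in issue_text.lower():
        return Triage("bug", 0.95)
    return Triage("question", 0.55)

def triage_issue(issue_text: str, threshold: float = 0.9) -> str:
    result = llm_label(issue_text)
    # The model's answer never stands on its own: anything below the
    # confidence threshold is routed back to a human.
    if result.confidence < threshold:
        return "needs-human-review"
    return result.label

print(triage_issue("App crashes on startup"))   # -> bug
print(triage_issue("How do I change themes?"))  # -> needs-human-review
```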

[–] Abracadaniel@hexbear.net 9 points 3 days ago (1 children)

but you can hold the author accountable; that's what OP's quoted text is about.

I know, that was me just directly voicing that opinion. I still think AI code should not be allowed in anything that even remotely needs security.

Even if the authors can still be held accountable, I don't think it's a good idea to let something that is known to hallucinate believable code write important code. It just makes everything a nightmare to debug.