this post was submitted on 23 Oct 2025
64 points (98.5% liked)

Everyone disliked that.

[–] DirtyPair@hexbear.net 44 points 3 days ago (6 children)

very silly to be upset about this policy

Accountability: You MUST take the responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of these contributions.

Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.

the code is going to be held to the same standards as always, so it's not like they're going to be blindly adding slop i-cant

you can't stop people from using LLMs (how would you know the difference?), so formalizing the process allows for better accountability

[–] Lyudmila@hexbear.net 25 points 3 days ago (1 children)

Yeah, AI-enthusiast coders are out there, and you can't just pretend that nobody is ever going to submit AI-generated code and try to pass it off as their own. This seems like a pretty reasonable way of ensuring that there's accountability and, hopefully, disclosure.

Obviously I'd prefer that nobody submits vibe code but good luck stopping them all.

[–] robot_dog_with_gun@hexbear.net 11 points 3 days ago (1 children)
[–] Lyudmila@hexbear.net 12 points 3 days ago

a good time was had by all

[–] invalidusernamelol@hexbear.net 21 points 3 days ago (1 children)

I think having a policy that forces disclosure of LLM code is important. It's also important to establish that AI code should only ever be allowed to exist in userland/ring 3. If you can't hold the author accountable, the code should not have any permissions or be packaged with the OS.

I can maybe see using an LLM for basic triaging of issues, but I also fear that adding that system will lead to people placing more trust in it than they should.

[–] Abracadaniel@hexbear.net 9 points 3 days ago (1 children)

but you can hold the author accountable, that's what OP's quoted text is about.

I know, that was me just directly voicing that opinion. I do still think that AI code should not be allowed in anything that even remotely needs security.

Even if they can still be held accountable, I don't think it's a good idea to allow something that is known to hallucinate believable code to write important code. Just makes everything a nightmare to debug.

[–] kristina@hexbear.net 17 points 3 days ago (4 children)

The AI hate crowd are getting increasingly nonsensical. I see it as any other software; the more open it is, the better.

[–] hello_hello@hexbear.net 28 points 3 days ago (1 children)

Me getting links to AI-generated wikis where nearly all the information is wrong, but of course I'm overreacting because AI is only used by real professionals who are already experts in their domain. I just need to wait 5 years and it'll probably be only half wrong.

[–] kristina@hexbear.net 9 points 3 days ago

It goes through the same validation process; criticize it case by case.

[–] Keld@hexbear.net 29 points 3 days ago (1 children)

The AI hate crowd are getting increasingly nonsensical

Sure this isn't true, but go off.

[–] kristina@hexbear.net 8 points 3 days ago (1 children)

🙄 I see y'all's posts. I'm old, not insensible.

[–] Keld@hexbear.net 18 points 3 days ago

Old and wrong is possible.

[–] Kefla@hexbear.net 16 points 3 days ago

Sure, it's just software. It's useless software that makes everything it touches worse, and it's being shoved into everything to prop up the massive bubble that the tech companies have shoveled all their money into, desperate for any actual use case to justify their terrible 'investments.'

[–] BodyBySisyphus@hexbear.net 23 points 3 days ago (1 children)

I'm still working on a colleague's AI-generated module that used 2,000 lines of code to do something that could've been done in 500. Much productivity is being had by all.

[–] bobs_guns@lemmygrad.ml 3 points 1 day ago

If you wrote 4x the code it must be 4x as good!

[–] kungen@feddit.nu 10 points 3 days ago (1 children)

the code is going to be held to the same standards as always, so it's not like they're going to be blindly adding slop

But you think it's okay that the reviewers should waste their time reviewing slop?

I've had to spend so much time over the last few months reviewing and refactoring garbage code, all because corporate whitelisted some LLMs. This group I've worked with for many years used to be really competent developers, but they've all become blind to the slop. It's a tragedy.

how would you know the difference?

Maybe you can't, but it's very obvious in many cases.

[–] DirtyPair@hexbear.net 14 points 3 days ago (1 children)

But you think it's okay that the reviewers should waste their time reviewing slop?

fantasy world where if they simply make rules where you're Not Allowed to submit LLM code nobody will

Maybe you can't, but it's very obvious in many cases.

so not all cases? don't waste my time

[–] kungen@feddit.nu 9 points 3 days ago (1 children)

Oh yeah, I forgot that it's smart to get rid of all rules and laws and such, simply because some folk will disregard them.

[–] aanes_appreciator@hexbear.net 3 points 2 days ago

The thing is, it'll probably be fine for the end product, beyond the wave of codeslop that will be brought to the project once the shitty vibe coders hear the news. That's just more work for the volunteers, but you're right that it isn't really that different of a policy in practice.

[–] Abracadaniel@hexbear.net 12 points 3 days ago

seems sensible.