Fedora Council has approved the latest version of the AI-Assisted Contributions policy

@renehoj

I just don’t understand why there are always people who think every technological advance is going to be the end of the world.

The concern is security.

As for the technological advances:

During the last 50 years of the 20th century there were only three major technological inventions: the mobile phone, the computer, and the Internet. All three are means of social control.

This is taken directly from the proposal.

AI tools may be used to assist human reviewers by providing analysis and suggestions. However, an AI MUST NOT be the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters).

Consider also:


Not really enforceable; also, technically there is no requirement for a human being in the chain. AI together with my pet fish (as the final arbiter) can legitimately review Fedora project code now, and the only thing that might stop their show is them doing a bad job.

Just in case, here’s a less silly depiction of the weakness of this policy:
Someone might abuse it by choosing a very weak final arbiter, thereby making the AI neither the sole nor the final arbiter on paper while giving it that function in practice. Vibe reviewing, if you will.

Also, this abuse might not be intentional or malicious: even if a reviewer is genuinely trying their best, they might get lost in the complexity and end up with a workflow that functionally auto-accepts the AI review.

It’s weird that they decided to word it this way. Whitelisting humans as reviewers is simpler than blacklisting every form of automation. Maybe they’re going somewhere with this, idk.


At the same time, I kind of get it. Forcing human-only review could create a significant bottleneck for otherwise prolific LLM code generation.

Yeah, and code review is boring.


You can say that again! :laughing:
(but please don’t)
