Not really enforceable, and technically there is no requirement for a human being in the chain. AI together with my pet fish (as the final arbiter) can legitimately review Fedora project code now, and the only thing that might stop their show is them doing a bad job.
Just in case, here's a less silly way to describe the weakness of this policy:
Someone might abuse it by appointing a very weak final arbiter, making the AI technically neither the sole nor the final arbiter while still giving it that function in practice. Vibe reviewing, if you will.
And this abuse might not even be intentional or malicious: even if the reviewer is genuinely trying their best, they might get lost in the complexity and end up with a workflow that functionally auto-accepts the AI's review.
Weird that they chose to word it this way. Whitelisting humans for review is simpler than blacklisting all forms of automation. Maybe they're going somewhere with this, idk.