Guidance on using AI in Contributions

High confidence you, unman, are up for the challenge. But I have the nagging concern "Be careful what you wish for". Only wondering if future Qubes mods/contribs will be as capable as you. Nevertheless, glad you all are trying to get ahead of the potential challenges.

Great verbiage. Will enjoy watching attempts at enforcement. I mean no criticism or critique but only have genuine interest.

:100: Will be fascinated to see how this is moderated.

I hope not, but I’d be super surprised if it doesn’t create an incredible burden on the mods and the community generally. Hope I’m wrong and unman has given this plenty of forethought … which historically has almost always been the case.

Not being sarcastic, but has that worked in the past? Meaning only ideas of what constitutes responsibility or the effectiveness of “sanctions”?
If it has I likely missed it, even though I certainly agree with the premise.

Thanks for the note, agree too, just curious about the feasibility of identification and enforcement.

And I welcome observing your efforts. If anyone can do it, it is probably you and the qubes (mod) team. Will be fascinating to watch…best regards and “luck”.

Very eager to see how this progresses / regresses. Fairly certain we will all learn something from the attempt(s). No disrespect intended to anyone. Expect this to be another fascinating exercise to observe.

Kind regards and best wishes to all here…

1 Like

@Confused

Not being sarcastic, but has that worked in the past?

I don’t know.

Meaning only ideas of what constitutes responsibility or the effectiveness of “sanctions”?

Meaning how will the “must” be enforced, considering there is no way to actually know.

1 Like

I don’t quite understand the problem. Current (very reasonable in my opinion) guidelines allow AI-generated contributions as long as they are reasonable and transparent. Which alternative approach would you (and @qubist) suggest and what could be the difference in enforcement?

In my understanding, with such guidelines, well-meaning people would disclose the use of AI, helping us to (learn how to) distinguish AI contributions. If AI contributions were completely forbidden, we would lose some good contributions. Also, we would have exactly the same problem of enforcement.

3 Likes

The guidance clears up the situation, and that is good, in my opinion, because it will help authors to act responsibly. I see mainly two areas where it will be helpful:

  • For authors, AI is just a tool to create content. As with other tools, it is the author’s responsibility to check carefully what is created before the result is committed. The special problem in this case is not that errors can happen, as with any tool, but that these errors may be rather difficult to detect and verify. It is therefore all the more essential that the author’s review is very thorough and that the author is aware of the kinds of errors that AI may generate. As with an editor, the errors may be syntactic and easy to find, but AI may also create plausible-looking nonsense that can be detected only if the author understands in depth what is written. This may be time-consuming and burdensome, but it is absolutely required.

  • Stating what was created using AI is helpful for the reviewers, telling them to look for AI-typical errors. So I think that the author must tell the reviewer if and where AI was used. Requiring this in the policy is a good thing because it will help in the review process, especially in avoiding unnoticed, easy-to-miss AI errors, so it will help to maintain a good quality of the documentation.

In any case, the responsibility for the quality of the produced material lies with the author, no matter which tools are used.

On the other hand, I see no necessity to show the use of AI in the final documentation. If the authors were working carefully and the reviewer has the chance to check for AI-typical errors, the result should have a high quality, no matter how it was obtained.

One more point should be considered: the border between AI and non-AI is beginning to blur. For instance, the grammar checker used when typing text, or a translation service, may use AI or work more conventionally, and often you have no way to find out. Worse still, manufacturers are beginning to put AI even into tools like editors and office software, which has already produced some (not-so-)funny effects.

3 Likes

@fsflover

I don’t quite understand the problem. Current (very reasonable in my opinion) guidelines allow AI-generated contributions as long as they are reasonable and transparent.

You repeat reasonable, so let’s reason rather than fantasize about wishful impossibilities.

Is it possible to stop people from using AI? - No.
Is AI dangerous? - See the links above.
Is it possible to discourage danger, rather than provide welcoming “guidance” about it? - Yes.

Which alternative approach would you (and @qubist) suggest and what could be the difference in enforcement?

Not encouraging brain deterioration. Obviously, the central issue is content quality, and content that is indistinguishable from human-created content would be accepted, with or without disclosure. In that sense, what you call transparency becomes unnecessary, even meaningless.

The biggest issue I see is the amount of “reasonably good” content that may start to penetrate this wonderful community. The second biggest issue is the extra energy the team would have to waste reviewing it. Currently, there are thousands of unresolved issues on GitHub, i.e. lots of work waiting to be done. Adding more noise to a system is hardly the way to keep it orderly.

Otherwise, at a certain point, it may turn out that the team needs to offload moderation of content (generated by AI and humans) to AI, so AI will decide what is “reasonably good”. Fast forward, and let’s remove humans: it’s more efficient and at zero cost. Less CO2 and all that ideology.

In my understanding, with such guidelines, well-meaning people would disclose the use of AI, helping us to (learn how to) distinguish AI contributions. If AI contributions were completely forbidden, we would lose some good contributions. Also, we would have exactly the same problem of enforcement.

Your presumption is that AI contributions are good contributions. I question (the effect of) that.

2 Likes

I do not think this is going forward fruitfully.
The team have set out a policy. Let’s see how it is applied, and whether
the effects are good or bad.
I see no prospect of anyone in Qubes being tempted to use AI for
moderation of code or documentation.

Anyone who is worried by the prospect of the project being overtaken by
AI slop is encouraged to make a human contribution. As discussed in
another thread, there are many areas in which you can contribute.

I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.

4 Likes

I don’t think you understand what that is. But you’re right, I am trying
to bring the discussion to a close for the reasons I gave.

Let’s agree to disagree. (Do you see what I did there?)

1 Like

I don’t think you understand what that is.

Because I am stupid.

1 Like

Correct.

Yes, it can be. It can also be useful.

This is exactly what the current guidance does.

Depending on how exactly you use AI, it can augment your brain, not just replace it. It’s your choice.

It doesn’t become meaningless. By forbidding all AI contributions, you will lose all useful contributions from well-behaving people. Which may be a big loss.

We just need to raise the bar of “reasonably good” to make it a non-problem. Or not do anything, since the bar is already set and the forum works fine as it is?

How does it affect the effort? I don’t understand. AI attacks will occur whether there is guidance or not.

Who is adding noise and how?

No. AI contributions can be good contributions.

2 Likes

@fsflover

Your whole post is supporting the conclusion that:

AI contributions can be good contributions.

I said:

AI-generated content can surpass the quality of human-generated one and can be indistinguishable from it to the point that even incorrect information may appear correct.

Note the second part of the sentence - that is the concern I am addressing, which is why I also said:

Your presumption is that AI contributions are good contributions. I question (the effect of) that.

This means I am questioning what “good” means beyond merely perceived quality of a particular piece of code, artwork, forum post, etc.

Depending on how exactly you use AI, it can augment your brain, not just replace it. It’s your choice.

If anyone is buying that false dilemma and prefers a community of augmented brains, that is what they will receive… eventually.

It doesn’t become meaningless. By forbidding all AI contributions, you will loose all useful contributions from well-behaving people. Which may be a big loss.

You are missing the point. The transparency requirement is losing its meaning because (as has been confirmed) all that matters is content quality, with or without the unenforceable “generated with AI” notice. The latter seems just a wishful convenience for the reviewer and can easily be abused.

Forbidding AI has nothing to do with that. Whether it may or may not be “a big loss” needs to be proven.

How does it affect the effort? I don’t understand.

“generate large amounts of content quickly and at near-zero cost”

Who is adding noise and how?

Augmented brains.

Slippery slope - Wikipedia

I said “it may”, not “it will”.

1 Like

You’re not wrong.
(Except augmenting the brain. Free software is the solution. But it’s off-topic and not connected to Qubes OS.)

I only don’t understand one thing: How any different guidance would change any of that? It will not change intentional bad behavior. It will not change “large amounts of [generated] content” from bad actors.

It’s true. So the only result of that is changing the behavior of good actors and – whenever caught – reason to ban bad actors. I see no problem here. No downsides.

OK, there will be some loss, as some people will certainly want to contribute something useful to Qubes with AI and will be discouraged. How can you argue with this?

2 Likes

@fsflover

I only don’t understand one thing: How would any different guidance change any of that? It will not change intentional bad behavior. It will not change “large amounts of [generated] content” from bad actors.

Declarative policy does not change human behaviour. The way it is enforced may change it.

OK, there will be some loss, as some people will certainly want to contribute something useful to Qubes with AI and will be discouraged. How can you argue with this?

I said it needed to be proven. So far, all I have seen here is some AI-generated forum posts, and I think that particular contribution was not especially useful to anyone, was it?

You can probably provide actual examples of how exactly using AI for a particular contribution may be better than not using it in that same contribution. The only example I can think of is reverse engineering of proprietary blobs.

2 Likes

You should look at some of the excellent work done by @GWeck using AI
tools. They have explained in posts how they use AI.

2 Likes

@unman

Can you share a link to that?

1 Like

Probably this: https://forum.qubes-os.org/t/improve-qubesos-guides-and-documentation-using-ai/38671/54
and this:
https://forum.qubes-os.org/t/how-are-you-guys-using-ai/39418/23

2 Likes

@fsflover

Thanks.

Would you say not having those particular graphics would be a big loss, as discussed earlier?

https://forum.qubes-os.org/t/how-are-you-guys-using-ai/39418/23

“My conclusion is that AI may help with specific, structured tasks, but using an LLM for really complicated, creative work can produce more problems and more work than doing these things using your own natural intelligence.”

Spot on.

1 Like

Not a big loss for technicians, but there are managers who may not understand anything else…

2 Likes

@GWeck

Not a big loss for technicians, but there are managers who may not understand anything else…

Forbes may print a cover “Qubes OS - the next tool for every manager!” :slight_smile:

2 Likes

I wish they would.

3 Likes