Guidance on using AI in Contributions

High confidence you, unman, are up for the challenge. But I have the nagging concern "Be careful what you wish for". Only wondering if future Qubes mods/contributors will be as capable as you. Nevertheless, glad you all are trying to get ahead of the potential challenges.

Great verbiage. Will enjoy watching attempts at enforcement. I mean no criticism or critique but only have genuine interest.

💯 Will be fascinated to see how this is moderated.

I hope not, but I’d be super surprised if it doesn’t create an incredible burden on the mods and the community generally. Hope I’m wrong and unman has given this plenty of forethought … which historically has almost always been the case.

Not being sarcastic, but has that worked in the past? Meaning only ideas of what constitutes responsibility or the effectiveness of “sanctions”?
If it has I likely missed it, even though I certainly agree with the premise.

Thanks for the note, agree too, just curious about the feasibility of identification and enforcement.

And I welcome observing your efforts. If anyone can do it, it is probably you and the qubes (mod) team. Will be fascinating to watch…best regards and “luck”.

Very eager to see how this progresses / regresses. Fairly certain we will all learn something from the attempt(s). No disrespect intended to anyone. Expect this to be another fascinating exercise to observe.

Kind regards and best wishes to all here…

@Confused

Not being sarcastic, but has that worked in the past?

I don’t know.

Meaning only ideas of what constitutes responsibility or the effectiveness of “sanctions”?

Meaning how will the “must” be enforced, considering there is no way to actually know.

I don’t quite understand the problem. Current (very reasonable in my opinion) guidelines allow AI-generated contributions as long as they are reasonable and transparent. Which alternative approach would you (and @qubist) suggest and what could be the difference in enforcement?

In my understanding, with such guidelines, well-meaning people would disclose the use of AI, helping us to (learn how to) distinguish AI contributions. If AI contributions were completely forbidden, we would lose some good contributions. Also, we would have exactly the same problem of enforcement.

The guidance clears up the situation, and that is good, in my opinion, because it will help authors to act responsibly. I see mainly two areas where it will be helpful:

  • For authors, AI is just a tool to create content. As with other tools, it is the author’s responsibility to check carefully what is created before the result is committed. The special problem in this case is that errors can happen, as with any tool, but these errors may be rather difficult to detect and verify. It is all the more essential that the author’s review be very thorough and that the author be aware of the kinds of errors AI may generate. As with an editor, some errors are syntactic and easy to find, but AI may also create plausible-looking nonsense that can be detected only if the author understands in depth what is written. This may be time-consuming and burdensome, but it is absolutely required.

  • Stating what was created using AI helps the reviewers by telling them to look for AI-typical errors. So I think the author must tell the reviewer if and where AI was used. Requiring this in the policy is a good thing because it will help the review process, especially in catching easy-to-miss AI errors, and thus help maintain the quality of the documentation.

In any case, the responsibility for the quality of the produced material lies with the author, no matter which tools are used.

On the other hand, I see no necessity to show the use of AI in the final documentation. If the author has worked carefully and the reviewer has had the chance to check for AI-typical errors, the result should be of high quality, no matter how it was obtained.

One more point should be considered: the border between AI and non-AI is beginning to blur. For instance, the grammar checker used when typing text, or a translation service, may use AI or may work more conventionally, and often you have no way to find out. Worse still, manufacturers are beginning to put AI even into tools like editors and office software, which has already produced (not-so-)funny effects.

@fsflover

I don’t quite understand the problem. Current (very reasonable in my opinion) guidelines allow AI-generated contributions as long as they are reasonable and transparent.

You repeat “reasonable”, so let’s reason rather than fantasize about wishful impossibilities.

Is it possible to stop people from using AI? - No.
Is AI dangerous? - See the links above.
Is it possible to discourage danger, rather than provide welcoming “guidance” about it? - Yes.

Which alternative approach would you (and @qubist) suggest and what could be the difference in enforcement?

Not encouraging brain deterioration. Obviously, the central issue is content quality, and content that is indistinguishable from human-created content would be accepted, with or without disclosure. In that sense, what you call transparency becomes unnecessary, even meaningless.

The biggest issue I see is the amount of “reasonably good” content that may start to penetrate this wonderful community. The second biggest issue is the extra energy the team would have to waste reviewing it. Currently, there are thousands of unresolved issues on GitHub, i.e. lots of work waiting to be done. Adding more noise to a system is unlikely to make it more orderly.

Otherwise, at a certain point in time, it may turn out that the team needs to offload moderation of content (generated by AI and humans) to AI, so AI will decide what is “reasonably good”. Fast forward and let’s remove humans - it’s more efficient and at zero cost. Less CO2 and all that ideology.

In my understanding, with such guidelines, well-meaning people would disclose the use of AI, helping us to (learn how to) distinguish AI contributions. If AI contributions were completely forbidden, we would lose some good contributions. Also, we would have exactly the same problem of enforcement.

Your presumption is that AI contributions are good contributions. I question (the effect of) that.

I do not think this is going forward fruitfully. The team have set out a policy. Let’s see how it is applied, and whether the effects are good or bad. I see no prospect of anyone in Qubes being tempted to use AI for moderation of code or documentation.

Anyone who is worried by the prospect of the project being overtaken by AI slop is encouraged to make a human contribution. As discussed in another thread, there are many areas in which you can contribute.

I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.

I don’t think you understand what that is. But you’re right, I am trying to bring the discussion to a close for the reasons I gave.

Let’s agree to disagree. (Do you see what I did there?)

I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.

I don’t think you understand what that is.

Because I am stupid.

@qubist

Is it possible to stop people from using AI? - No.

Correct.

Is AI dangerous? - See the links above.

Yes, it can be. It can also be useful.

Is it possible to discourage danger, rather than provide welcoming “guidance” about it? - Yes.

This is exactly what the current guidance does.

Not encouraging brain deterioration.

Depending on how exactly you use AI, it can augment your brain, not just replace it. It’s your choice.

In that sense, what you call transparency becomes unnecessary, even meaningless.

It doesn’t become meaningless. By forbidding all AI contributions, you will lose all useful contributions from well-behaving people. Which may be a big loss.

The biggest issue I see is the amount of “reasonably good” content that may start to penetrate this wonderful community.

We just need to raise the bar of “reasonably good” to make it a non-problem. Or do nothing, since the bar is already set and the forum works fine as it is?

The second biggest issue is the extra energy the team would have to waste reviewing it.

How does it affect the effort? I don’t understand. AI attacks will occur whether there is guidance or not.

Adding more noise to a system is unlikely to make it more orderly.

Who is adding noise and how?

Your presumption is that AI contributions are good contributions.

No. AI contributions can be good contributions.
