We have published guidance on using generative AI when contributing to the project.
The full guidance is here.
TL;DR: You must disclose use of AI in **any** contribution to Qubes, and you remain responsible for the content.
> We have published guidance on using generative AI when contributing to the project.
> The full guidance is here.
> TL;DR: You must disclose use of AI in **any** contribution to Qubes, and you remain responsible for the content.
This is opening a huge door. How will you guys handle a potential flood of information? Doesn’t that endanger the project?
Perhaps clarify what responsibility means (sanctions?).
It is also unclear how someone can be the author of content generated by a machine that scrapes other authors’ content and combines it.
Disclaimer: I have never used AI for anything that might be considered a contribution and I am not planning to.
Note that:
> This policy applies to every way in which you may contribute to, or interact with, the project, including […] discussion forums, […]
There have been some AI-generated forum posts, as you may have noticed.
This is the sort of challenge that would be welcome.
I think we do this in the announcement. Contributors are responsible for the content they produce, however that is produced. If they use GenAI they must disclose this, and there is an added burden on them to confirm and validate what the GenAI produces.
The announcement does set out sanctions, which include an outright ban on contributing to the project. I think this would only apply in the most egregious cases.
I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.
Fully agree with this approach. Will that disclosure be public (for me to see where gen AI is used in Qubes) or just kept internal to the qubes devs ?
> This is the sort of challenge that would be welcome.
Why would danger be welcome?
> If they use GenAI they must disclose this
How will you know if anyone used AI and did not disclose it?
> Why would danger be welcome?
Some people live for danger.
If you tried to understand what I wrote, it would be clear that it is the challenge of handling “a flood of information”.
> How will you know if anyone used AI and did not disclose it?
If it is not clear from the style and content, then there is no issue, is there?
Such a written policy could hurt the reputation of QubesOS as a security-first project. How aware is the team of this?
To be specific, the policy is
- implying AI can generate high-quality contributions
- not outright discouraging the use of AI for generating code
Those points are pretty controversial based on my experience. The tech and tech-adjacent communities I’m a part of resound with AI skepticism, so I see a lot of the controversy. AI is relentlessly blamed for all of the recent security and stability issues of Windows 11.
> Such a written policy could hurt the reputation of QubesOS as a security-first project. How aware is the team of this?
I doubt that this is true. The review process will be unchanged. If you trust the team now, you should trust them in the future.
> To be specific, the policy is
> - implying AI can generate high-quality contributions
> - not outright discouraging the use of AI for generating code
It is possible for people to generate high-quality contributions using generative AI. But, as the policy explicitly says, such contributions are often of low quality.
You are right. The policy does not outright discourage such use: “If these contributions are of high quality, they will be welcome.”
I have no opinion on Windows 11. I share your skepticism on use of generative AI.
> Some people live for danger.
> If you tried to understand what I wrote it would be clear that it is the challenge of handling a flood of information.
I confirm that I do not understand missing answers in any discussion.
I did not ask if anyone bravely rode from Camelot. I asked HOW a potential flood of information would be handled. That was not answered.
> If it is not clear from the style and content, then there is no issue is there?
The published rules say one “must” disclose AI usage. I am asking how you will know. AI-generated content can surpass the quality of human-generated content and can be indistinguishable from it, to the point that even incorrect information may appear correct.
Consider also:
> I did not ask if anyone bravely rode from Camelot. I asked HOW a potential flood of information would be handled. That was not answered.
I will think of you as Brave Sir Robin hereafter.
An actual flood of contributions will be handled as contributions are at the moment: reviewed, tested, and used if appropriate.
> The published rules say one “must” disclose AI usage. I am asking how will you know. AI-generated content can surpass the quality of human-generated one and can be indistinguishable from it to the point that even incorrect information may appear correct.
Yes, we expect contributors to disclose use of AI. If they do not, we will not know, except by the content. I have rarely (never?) seen AI-generated content that “surpasses the quality of human-generated one”. If the information is incorrect, the contributor will be judged accordingly.
> I will think of you as Brave Sir Robin hereafter.
High-quality content for the community.
I am lacking one crucial bit of information here: The reason for introducing this policy. I am 98% sure that using at least a bit of GenAI has become a solid industry standard. In other, oversimplified words: Everybody uses it, be it for good or bad reasons.
Please specify the rationale for introducing this policy, i.e., how will contributions with the GenAI flag be treated differently, if at all?
Not using Qubes OS is also a solid industry standard.
Personally, here on the forum, I am sometimes confronted with posts with a bit (if not a lot) of nonsense. If it looks like a confused user, I’m willing to spend some time trying to help them. But when it comes from AI, I just move to something more important.
(Another case might be @qubist and @unman wearing knight suits and clapping coconuts as they debate, so I let them have fun.)
I think there was a need to define how to deal with such content, and the project decided to accept it as long as the content is good, reviewers are fine with it, and contributors mention the use of AI.
This avoids a grey area in the review process. Before this, it was not clear whether such contributions would be acceptable.
> I am lacking one crucial bit of information here: The reason for introducing this policy. I am 98% sure that using at least a bit of GenAI has become a solid industry standard. In other, oversimplified words: Everybody uses it, be it for good or bad reasons.
You have provided the reason yourself: too many people (fortunately, not everybody) use the brain-rotting tool, so it must be clear how it will be dealt with, as this penetrates society in so many ways and it’s going to get worse.
Ni!
I take on this discussion in more depth not out of personal resentment. I do so because I specifically care about the Qubes OS project.
If AI is a bubble, it will merely shrink and not burst. GenAI for software development is here to stay.
It almost goes without saying that you take full responsibility for all code you publish. I write “almost” here because, well, the keyword is the Chinese Room.
No matter how you think about the cash burning going on at Anthropic and OpenAI, we will always have open-source models. They deliver good performance and can be run on rented hardware. It will only get cheaper.
In short, and to repeat: I am curious why this policy was introduced, since it places an additional reporting burden on contributors.
I believe it is for building trust via transparency within the community.
That is important. Also, there were questions from contributors about what the position was on using generative AI, so we thought it best to set that out for all to see.
The “additional reporting burden” is minimal.