Once they include (RedHat-driven) AI slop, will Fedora still be reliable enough for dom0?
What do you think?
To be honest… after endless threads on the topic of “why Fedora for dom0?”, the final conclusion was that Fedora was someone’s preference, and was never chosen for its suitability. I hope that this is the final straw, though.
Oof?
Fedora was quite a good distro: recent packages, stable.
While the old discussion about the dom0 distribution was more or less about style and infrastructure/features, I see this most recent change as a rather substantial difference.
The proposed permission to use AI for code review seems concerning. The agreed-upon version clarifies that AI must not be the sole or final arbiter, which is good, but vague. I interpret it as meaning that people are allowed to use AI for review as long as it does a decent job.
Yes. As long as it does a decent job. The other major issue was copyright: how the AI was trained and whether the data sources used to train it are compatible from a licensing perspective.
Once they include (RedHat-driven) AI slop, will Fedora still be reliable enough for dom0?
I already questioned how reliable it has been so far, and @barto nailed a perfect summary of that in the current thread.
What do you think?
This subject must attract the attention of Qubes devs.
Can you post a link?
To what?
In my opinion, AI can be very dangerous to use on your personal device: it offers a lot of amazing features, but at the cost of privacy, adding a new “perfect tool” to collect massive amounts of information about you and everything you do. (I have seen a lot of these on the net; just browse around.)
A service like Luma (by Proton), which promises to run only on your device, would be welcome, if that’s true. Otherwise, I would not use it, at least not directly on any of my personal devices.
I’m not worried about Fedora-AI sneaking in. It’s the code quality that I doubt as soon as it’s “AI-driven.”
It’s open source, anyone can submit a PR, and that includes people who write poor-quality code and people who submit malicious code.
What is it that makes AI a significant problem, and a problem that can’t be solved by the current peer review process?
I don’t see an issue here at all. If the quality of the whole project suffers, sure, but I don’t understand why people make a drama out of this when nothing bad has actually happened yet.
Mirai is right, nothing has happened so far, but sometimes it’s not the last step that leads down the drain, it’s the first. I know this might sound like I’m being overly negative, but here’s what I’m worried about:
[1] Developers might get too used to these tools, which could make them lose some skills. If developers/contributors get used to having AI handle tasks, they might start to struggle with problem-solving.
[2] At some point, some software development life cycle tasks are probably going to be fully automated; for example, AI can write documentation or create unit tests on its own. But this could also lead to more job displacement and a drop in expertise in critical areas: as certain tasks become automated, the demand for skilled professionals in those areas might decrease.
[3] Once everything’s automated, people will start worrying about quality assurance, accountability, and trust in AI-generated code. If the development team becomes an afterthought and software is treated as disposable (pun intended!), it could lead to a decline in overall software quality and reliability. If engineers keep “optimizing” this, they might drop some basic practices, and that could cause problems we didn’t expect: if you don’t have the right documentation and clear processes, you might not be able to keep the software sustainable and maintainable.
Once more: This is a bit of a negative take, but it’s something I’ve come to through my own experiences: “If in doubt, don’t go out”.
The Fedora policy seems pretty even-handed to me, because it avoids incentivizing contributors to hide their use of AI. Not all of which results in slop, BTW. As for the actual slop, the policy gives reviewers another lever to reject it: For a reviewer with a high agreeableness personality trait it’s easier to say “your pull request looks like it used an LLM without declaring this fact” than it is to say “you submitted garbage”.
I just don’t understand why there are always people who think every technological advance is going to be the end of the world.
I doubt the negative side effects that come from the invention of the transformer/LLMs are going to be anywhere near the negative side effects that came with the invention of the internet, and the internet didn’t end the world.
As I see it, AI assistance is not going to make people stupid, it just allows you to use methods like the cardboard programmer or rubber duck debugging a lot more efficiently.
I don’t understand why people make a drama out of this when nothing bad has actually happened yet
Because security-minded people understand that prevention of bad things is better than acting post factum.
I’m not team Nostradamus. You misread that. However, technical “advances” can cause harm when they are unclear, unsafe, or not actually an advance.
Yet.
(I doubt the two are comparable, though.)
Who enforces/controls that? If someone has to, is there really an efficiency gain? And going further: how do you measure efficiency in this process?
“Do more with fewer resources in less time with fewer errors”?
Please don’t misunderstand. I admire your optimism. However, I simply doubt that every good thing gets better by “industrialization” in a “corporate” atmosphere. Many prescriptive technologies, when examined in context, are fundamentally anti-quality (and thereby more often than not anti-people). Whether or not that quality is needed (and by whom) is another question. For the security and availability of the services I rely on, however, I’m willing to put in extra time and money. (And I try to avoid enshittified products as much as possible.)
The fact that they allow code review with AI as the sole and final arbiter (they don’t, but it’s not like they can do anything about it apart from banning obvious low-quality AI-review slop accounts). I suspect this might result in some completely unmanned pathways from writing code to getting it merged. The quality of the outcome is for us to observe in the future.
Yes, and humanity doesn’t care, just as it can’t fight enshittification in general. As long as this works, cutting corners is good. Once it stops working, fix it.
Nothing wrong with losing unused skills. If they actually need problem-solving, it won’t be lost.
Which may or may not be wrong. It’s better to push your advantage than to fix all the issues that don’t break the purpose of the act. A reactive (as opposed to proactive) strategy might be objectively better, and I hate it, and I love it.
Beauty is in the eye of the beholder. Most people I know have poor taste. Does that make my taste poor? As in, even if my taste is somehow objectively better, it really isn’t, because alignment with others in the context of time, change, and mortality is more fruitful than absolute quality.
Honestly, serves them right. Screw those guys.
Do you have a source for this fact?
This is taken directly from the proposal.
AI tools may be used to assist human reviewers by providing analysis and suggestions. However, an AI MUST NOT be the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters).