For several days now, I have been feeding questions posted on this forum to ChatGPT, for entertainment and to learn, and at some point the question in the title occurred to me because of how often it produced apparently effective solutions. As an example, I replied to one topic by simply copying and pasting the solution ChatGPT gave. That may count as cheating, but I thought it was an appropriate way to demonstrate my point. Additionally, English is not my native language, and this translation was itself done by ChatGPT. What do you think about the future of these types of tools that have helped us for many years?
Utterly off-topic in General Discussion (anything related to Qubes which does not fit into the Support category). Moved to Forum Feedback (discussion about this forum, its organization, how it works, and how we can improve it).
Adjusted the subject line and added a staff note to make clear that any discussion here must be limited to whether ChatGPT and similar tools can be helpful in supporting Qubes OS users / discussions. General discussions about AI tools should be conducted in places dedicated to that topic (read: NOT HERE).
I vote for an addition to the Code of Conduct banning posts of this sort (AI generated), although I have no idea how to enforce that. Just please don’t do it?
The provided example sounds confident while being utter garbage. Obviously there will be humans doing that too and one needs to always consume content posted here with care, but we hardly need help in generating more confusing content.
You don’t know how to solve the issue, which leads to you not being able to verify the presented solution, which results in you assuming it’s correct when it’s false.
In the land of the blind, the one-eyed man is king, but the reality of using ChatGPT is often just the blind leading the blind.
Well, I think the discussion has drifted from what I said previously. My haste and my poor English have surely combined to keep me from conveying the idea clearly. The title has been changed, and things have come close to the point of insult. But hey, that is the reality I wanted to put on the table… My apologies again for not managing to express things correctly.
@Sven pointed me to this thread.
The answer to your original question, "Can AI tools help answer Qubes OS support questions?", is "No".

This is the second time (that I am aware of) that someone has posted a ChatGPT guide/answer. What is produced is plausible, but absolutely useless.

My experience with ChatGPT generally is similar. It is like a book report produced by someone who has little knowledge of the area, but has picked up some buzzwords. I recognise the words, but the way in which they are put together is senseless.

It is possible to get good results, no doubt, but it takes work, and a good knowledge of the subject area, to be able to iterate over the response.

I think that Stack Overflow have it right in banning its use at the moment:
Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.
What do you think about the future of these types of tools?

I have no doubt that such tools will be incredibly important. Of equal importance will be the ability to use them properly, and to know how to evaluate the results. On the evidence so far, we are a long way off that at the moment, present hype notwithstanding.
I never presume to speak for the Qubes team.
When I comment in the Forum or in the mailing lists I speak for myself.
In a world where AI can generate good information, posting its answers should be accepted and even encouraged. However, we don't live in that world; we have large language models, not AI, and when it comes to things that require precision, LLMs hallucinate more than a stoned chimp on LSD.
The forum already has a noise issue, and LLM posts are likely to leave it brain-dead. In other words, this is an existential issue for the forum, with consequences for the project, so appropriately drastic measures should be considered.
While there’s no way to positively identify those who are using LLM to reply or write guides, it’s not hard to see who’s making a lot of noise and repeatedly posting misinformation or disinformation. Stronger disincentives for posting verifiably wrong information would also make posters at least double-check their work, and a better reputation system would let people know who to (dis)trust.
I think the golden age of the internet (or at least forums) might be over.
Completely. If one asks ChatGPT how to copy between VMs, it will tell you to do Ctrl-C + Ctrl-V, which is wrong (the actual mechanisms are sketched below). It also says Qubes is based on "xenium" (not Xen). For me this has no place in the forum and will only contribute noise.
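For the record, a minimal sketch of the documented Qubes defaults (the shortcut bindings are configurable in dom0, so your setup may differ):

    # Inter-qube clipboard: press Ctrl+Shift+C in the source qube to copy its
    # clipboard to the global clipboard, then Ctrl+Shift+V in the target qube
    # to paste it into that qube's clipboard.
    #
    # File transfer: run this in a terminal inside the source qube; dom0 then
    # prompts you to choose the destination qube. /path/to/file is a placeholder.
    qvm-copy /path/to/file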
Isn't the real problem hijacking users? If I go to ChatGPT for answers to system support questions and it gives bad data back, some percentage of people are probably more likely to blame the "crappy" OS than the malfunctioning AI.
No matter how small that percentage is, it is damaging to the brand. Rushing over to ChatGPT to fix things only aids its growth. That's Google all over again.
I tried ChatGPT for configuring neomutt the other day and it was pretty funny… lots of fantasy… Same for some advanced nginx configuration, for example. I had similar experiences with Qubes, although I rarely need to ask there, so not much experience.
From what I can tell, the better the documentation for a project, the better ChatGPT's answers. In the end, it just scans what it can find and then tries to put it together.
Truth be told, Qubes OS isn't the best-documented project in the world, but there are lots of blog posts that do answer the Mullvad VPN question very well…
Long story short, ChatGPT isn't quite there yet, especially for more advanced topics like Qubes.
If I go to ChatGPT for answers to system support questions and it gives bad data back, some percentage of people are probably more likely to blame the "crappy" OS than the malfunctioning AI.
I'd disagree there: ChatGPT clearly states, "ChatGPT can make mistakes. Check important info."
Fun fact: an employee of mine used ChatGPT today to learn some things about fstrim, and in its answer it cited a link to a blog post of ours about the topic. That's a start, I guess x)