Always check your keys. Be careful of GPTs

You’ll have to follow the steps described in the Qubes documentation exactly, without using any AI functions (they’re too unreliable for such a task), and check that you get exactly the results described there. (To be sure, I just executed these steps, and they work for my gpg implementation on a Windows system.)
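As a sketch of the kind of manual check the documentation asks for: after importing a key, compare the fingerprint that gpg reports character by character against the one published in the official Qubes documentation. The fingerprints below are placeholders, not the real Qubes key; you must copy the expected value from the official documentation yourself.

```shell
# Placeholder values -- replace both with real ones:
#   expected: the fingerprint printed in the official Qubes documentation
#   reported: the fingerprint shown by `gpg --fingerprint` after importing the key
expected="0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567"
reported="0123456789abcdef0123456789abcdef01234567"

# Normalize spacing and case so a copy-pasted fingerprint still compares cleanly
normalize() { printf '%s' "$1" | tr -d ' ' | tr 'a-f' 'A-F'; }

if [ "$(normalize "$expected")" = "$(normalize "$reported")" ]; then
  echo "MATCH"
else
  echo "MISMATCH: do not trust this key"
fi
```

This only automates the string comparison; the trust decision still depends on obtaining the expected fingerprint from a source you trust, which is exactly the point of the documentation's procedure.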

Don’t trust anything that AI tells you - it may be simply nonsense! That’s a natural property of AI systems: like humans, they may hallucinate, or simply mislead you if they have been trained on false or biased data, or if they don’t find the real solution but still want to tell you something. The consequence is that any output of AI systems has to be carefully checked by an intelligent human being. But in many cases, that may make the use of AI uneconomic or at least reduce its savings, and so I personally expect the current hype to die down eventually.

4 Likes

Hi, yes, AI is OK for general simple questions, but it has been somewhat unreliable with other tasks.
It has helped me with installation, but indeed check the output against the official documentation (those keys were just one of several mistakes I’ve seen AI produce on this subject).

2 Likes

I’d like to push back on that a bit. It’s well known that LLMs can hallucinate from time to time, and I don’t dispute that.

However, if I apply the Qubes principle of “never trust the infrastructure” to the use of LLMs and their output, then the responsibility is on me as the user to verify whether a given answer might be a hallucination.

From my own experience, both ChatGPT and Grok have been extremely helpful in installing, configuring, and running Qubes in day-to-day use, especially after I developed my personal threat model with their assistance.

For that reason, I think a blanket condemnation of LLMs is overly simplistic and misses important nuance.

1 Like