Problems with setting up sys-gui-gpu

I have a Novacustom V56 without a Nvidia GPU, so only iGPU.

I ran the following commands as described in the docs to create sys-gui-gpu:

sudo qubesctl top.enable qvm.sys-gui-gpu
sudo qubesctl top.enable qvm.sys-gui-gpu pillar=True
sudo qubesctl --all state.highstate
sudo qubesctl top.disable qvm.sys-gui-gpu
sudo qubesctl state.sls qvm.sys-gui-gpu-attach-gpu

After all states ran successfully, I checked whether default_guivm was set to sys-gui-gpu, and it wasn't. I thought that should have happened automatically, so I changed it to sys-gui-gpu manually.
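
For reference, I checked and then set the property in dom0 with qubes-prefs; as far as I know this is the standard way (the global settings GUI should work too):

qubes-prefs default_guivm                # show the current default GuiVM
qubes-prefs default_guivm sys-gui-gpu    # point it at the new GuiVM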

I restarted my laptop and tried to log in to the sys-gui(-gpu) session by selecting it on the login screen. All that happened was a brief black screen, then I was back at the login screen. I tried it multiple times.

Then I logged in "normally" again without sys-gui and realized that sys-gui-gpu hadn't even been started, so I enabled autostart on boot.
Then I rebooted again, and after the LUKS prompt I just got a black screen; I couldn't even access a terminal.
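
To be precise, "enabled autostart" here just means setting the qube's autostart property in dom0, along these lines:

qvm-prefs sys-gui-gpu autostart          # check the current value
qvm-prefs sys-gui-gpu autostart True     # start the qube automatically on boot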

I had to change the kernel params to be able to boot again (this was a bit of a mess because Heads doesn't make this too easy :slight_smile: )
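
For context, the kernel parameter that hides a PCI device from dom0 is rd.qubes.hide_pci= (described in the GPU passthrough docs); if that entry ends up pointing at your only GPU, dom0 has no console left. I'm not sure whether the attach-gpu state adds it or whether it has to be done by hand, but to see what is currently hidden and which device the iGPU is:

grep -o 'rd.qubes.hide_pci=[^ ]*' /proc/cmdline    # devices currently hidden from dom0, if any
lspci | grep -i vga                                # find the GPU's BDF (00:02.0 is a typical Intel iGPU)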

Any ideas on how I could debug this?


Could somebody answer me this:

  1. Should autostart be enabled by default for sys-gui-gpu after creating it with saltstack?
  2. Should default_guivm be set to sys-gui-gpu by default after creating it with saltstack?

I do not know, because I was never courageous enough to try sys-gui-gpu, but I think I would approach it as listed below.

There are many questions, and few answers, so this post is probably not useful!

  1. No autostart, especially not for the new GuiVM.
  2. Find a way to access the Dom0 console without a console on the main graphics card, which has to be hidden from Dom0 for passthrough. I don't know how to give Dom0 a different console except with another GPU - maybe a USB, serial, or network console? Maybe it is possible to have a full Dom0 GUI with sys-gui-vnc over SSH? I think everything will be easier if Dom0 has a working and accessible X server.
  3. Make GPU passthrough work to a "regular" HVM qube (a rough sketch follows this list). My experience with this was not great, but that was with a low-quality AMD motherboard and GPUs. Having some access to Dom0 was very helpful… it lets you tell whether the whole machine is dead or only the graphics.
  4. Try the same settings for sys-gui-gpu, but with Dom0 still available.
  5. Only switch the Dom0 GuiVM over when everything is working. I do not know what methods are available to do this. Can it be done on the kernel command line in GRUB? Can it only be done at boot time? Maybe it is even better to keep Dom0 on a VNC GuiVM, or to give it another X server (how?).
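
For step 3, a very rough sketch of what I have in mind from a Dom0 terminal - the qube name and PCI address are only examples, and flags can differ between releases, so treat it as a starting point rather than a recipe:

qvm-pci                                                                          # list PCI devices and their dom0:BDF identifiers
qvm-create --class StandaloneVM --property virt_mode=hvm --label red gpu-test    # "gpu-test" is just an example name; PCI passthrough needs HVM, not PVH
qvm-pci attach --persistent gpu-test dom0:00_02.0 -o permissive=true             # example BDF; use your GPU's

Whether permissive mode or no-strict-reset is actually needed (or wise) depends on the GPU; the GPU passthrough docs cover that better than I can here.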

My logic is that "regular" sys-gui was much easier to set up because it was still possible to ctrl-alt-F2 into Dom0. That way the GuiVM is standalone - not a true system GUI, but hardware and software debugging is easier.

It is very probable that my logic, and my steps, are wrong… I hope for enlightenment from the wise.

Hm. I just tried setting up only sys-gui.
In contrast to sys-gui-gpu, autostart is activated automatically on sys-gui. However, I still had to set default_guivm to sys-gui manually - I wonder if it is a bug that this isn't done automatically, as the docs say it should be.

After logging in using sys-gui I saw nothing but my background. There was no Qubes Manager and no panel at the top of the screen, and when I tried to click on something in the background with my mouse, nothing was selectable. I wasn't even able to log out.

I had to reboot to switch back to dom0 as the GuiVM.

You are nearly where I got to. I think the guivm services were enabled in sys-gui by the salt formulae.

I believe I used ctrl-alt-F2 to log in to Dom0, and then ran qvm-run sys-gui -- xterm to test.
Then I must have started an fvwm session or a panel from that xterm.
The graphical Qube Manager showed only the GuiVM and Dom0…
After that, I created a new template and qube in Dom0 and set their guivm to sys-gui. They are partly controllable from Qube Manager in the sys-gui interface.
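
In case it is useful to anyone reproducing this: pointing a qube at sys-gui is just a property, so from a Dom0 terminal it is roughly (template and qube names are only examples):

qvm-create --class AppVM --template fedora-40-xfce --label blue test-gui    # hypothetical names
qvm-prefs test-gui guivm sys-gui

Once guivm is set, the qube's windows (and its entry in Qube Manager) should show up inside sys-gui rather than in Dom0's session.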

For my application - a walled garden - I believe my next steps would probably be:

  • work out how to use policy so that only the qubes visible inside sys-gui can be fully administered from it (a rough sketch follows this list). Maybe it depends on the "created-by-" tag on each qube - all of mine are "created-by-dom0".
    • maybe this requires an admin vm
  • fix networking and updates for the GuiVM and its children.
  • find a way to access Dom0
  • everything else. There is a lot that I do not understand!
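
The rough sketch I mentioned in the first bullet - a guess at the shape of it, not something I have tested. Qubes seems to tag a qube with guivm-<name> when its guivm property is set (qvm-tags <qube> list will show what is really there), so a per-GuiVM admin policy in /etc/qubes/policy.d/ might look something like this, with the file name, the tag, and the choice of admin.vm.* services all being my assumptions:

# hypothetical /etc/qubes/policy.d/30-sys-gui-admin.policy
# let sys-gui use a few Admin API calls, but only against the qubes it displays
admin.vm.List          *  sys-gui  @tag:guivm-sys-gui  allow target=dom0
admin.vm.property.Get  *  sys-gui  @tag:guivm-sys-gui  allow target=dom0
admin.vm.Start         *  sys-gui  @tag:guivm-sys-gui  allow target=dom0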

Is this at a stable level, or is it still something people are trying out without everything fully ironed out? What is the advantage of this over something like attaching a GPU to a qube? Can it be done with only one GPU? Does it isolate GPU processes so that a rootkit in the GPU itself can't be exploited?

TLDR: fairly experimental, and I think we get to keep all the broken pieces.
However, many or all of the components are available, and there is already some Salt magic in 4.2.4, but putting it all together is not yet easy. The unstable Qubes 4.3 is probably a better place to try all this.

Long, rambling version of my level of understanding follows…

It is about setting up a GuiVM, which is part of a longer-term project to enable better isolation of different activities, especially those of Dom0.

There is an early description here: Introducing the Qubes Admin API | Qubes OS

The specific point of a GuiVM is to remove as much graphics code as possible from Dom0 - ideally all of it, with sys-gui-gpu. However, moving the GPU out of Dom0 runs into all the same difficulties as passthrough for any other purpose. A GuiVM is not really about fully isolating an untrusted GPU - we wouldn't run our main user interface on it in that case - but it does move the GPU and the UI code somewhere they have less power: outside of Dom0.

Even with plain sys-gui, all of the window manager is moved out, and only a low level graphical interface runs in Dom0 - it should be robust and easy to secure… hopefully!

My experiments have been trying to work towards multiple GuiVMs, so that the yellow and the blue zones in the first schema have one each, with their own management of their own qubes, and maybe their own hardware components for some applications. That requires an alternative interface for Dom0, which would mostly just sit there granting or denying access. Right now I have a partly working pile of pieces, but a working multi-management-VM system feels "not so far away".

I think @monoxide059 is trying for something more standard, closer to that schema, but they can tell you themselves…