It seems like you’re not using the i3.sh script provided above, is that correct? That invokes i3 as user so you don’t end up running your window manager as root.
@fpoqu Yes! I skipped that section because I wasn’t using i3 and my audio worked fine.
Perhaps it would help others to put it under a heading like “run window manager as user” or something to indicate it is not i3 or audio specific?
Or perhaps it would be better to go with the most traditional approach of using logind and running the entire X server unprivileged?
In any case, thanks a lot for the guide! It was very helpful.
I agree that it would be a good idea to point this out.
Unfortunately, I did not have any success trying to start the X server as an unprivileged user in my Qube. Doesn’t that generally only work from a TTY? Were you able to do it?
@fpoqu If you have sudo privileges, it can absolutely work outside a TTY. I’m just not sure how to use those sudo privileges to open a display and then drop them before starting the X server.
Perhaps something like the following:
#!/bin/bash
binary="${1:?binary required}"
shift
# X requires a relative path for the Xorg config file
cd /
xorgConf="opt/separate-head-xorg.conf"
# Set the driver that matches the passed-through GPU in the Xorg configuration file
lspci | grep "VGA" | grep -E "NVIDIA" && sed -i 's/^Driver .*/Driver "nvidia"/g' "${xorgConf}"
lspci | grep "VGA" | grep -E "AMD/ATI" && sed -i 's/^Driver .*/Driver "amdgpu"/g' "${xorgConf}"
# Find the BusID of the GPU, then set it in the Xorg configuration file
# (the last cut strips a leading zero; this assumes bus 0 and a single-digit
# device number, which is the usual layout inside a Qubes HVM)
pci=$(lspci | grep "VGA" | grep -E "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i 's/"PCI:[^"]*"/"PCI:0:'"$pci"':0"/g' "${xorgConf}"
# Kill any existing X server on display :1
pkill -f "X :1" || true
# Create a proper Xauthority file with a fresh magic cookie
XAUTH_FILE=$(mktemp -p /tmp serverauth.XXXXXXXXXX)
xauth -f "$XAUTH_FILE" add :1 MIT-MAGIC-COOKIE-1 "$(openssl rand -hex 16)"
# Start the Xorg server for X screen number 1 with that authentication
X :1 -auth "$XAUTH_FILE" -config "${xorgConf}" &
sleep 2
DISPLAY=:1 XAUTHORITY="$XAUTH_FILE" "$binary" "$@"
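Invoked as e.g. sudo ./script.sh i3. As for actually dropping privileges (the part I was unsure about): one option, assuming the template’s default "user" account, would be to hand the cookie over and start only the client unprivileged; X itself still runs as root. The last line could instead be:

# make the cookie readable by the unprivileged user...
chown user:user "$XAUTH_FILE"
# ...and launch only the client as that user
DISPLAY=:1 XAUTHORITY="$XAUTH_FILE" runuser -u user -- "$binary" "$@"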
@fpoqu I think a display manager solves this exact issue. I’m guessing the display manager runs as root and enables starting X sessions as an unprivileged user on that display. I haven’t played around with this much yet, as this whole HVM is untrusted and I’m not too concerned about a root privilege escalation.
@zaz Thanks, I’ll look into this. Although I’m also not really too concerned about X running as root in my gpu-Qubes :-).
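For anyone who does want unprivileged startup without a full display manager: on a Debian-based template, one thing that might work (untested here, and it assumes the xserver-xorg-legacy package is installed) is the setuid X wrapper:

# /etc/X11/Xwrapper.config
# let any user start an X server, not only one logged in on a local console
allowed_users=anybody
# keep the setuid-root wrapper; a passed-through GPU in an HVM has no
# logind session to grant truly rootless access
needs_root_rights=yes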
I see that everyone here seems to have managed to play games normally with a GPU in Qubes OS. Please tell me how you dealt with the low FPS caused by the overhead of transferring rendered frames to dom0. To describe the situation simply: I installed the game Nier: Automata for testing. Run in a small window, the FPS is excellent, but the larger the window (and especially in fullscreen), the lower the FPS becomes. I checked, and it depends solely on the size of the window showing the game (or any other actively updating content, like a video on YouTube), so at a normal window size the gaming HVM qube is unusable. Is there any way to overcome this other than using a second monitor?

I tried setting up a local VNC server in the domU and connecting to it from dom0. This significantly increased the FPS but led to frame losses; it was more comfortable than the usual vchan approach, but still not suitable for normal work. The ideas I have left are: Moonlight (not applicable, since Qubes OS seems to strictly block UDP traffic between qubes), xpra (at the moment the most promising, but a terrible crutch, and there are problems with access to an HVM qube; PVH is not affected), and an insecure direct domU->dom0 connection.
P.S. nvidia-smi, PyTorch, CUDA, and other “direct GPU” workloads work very well, and the game can see and use the GPU. It is only the transfer of frames from domU to dom0 for display that is the problem.
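For what it’s worth, the VNC-from-dom0 approach can run over qrexec instead of any direct network link; a rough sketch, assuming socat is available in dom0, a VNC server such as x11vnc is already listening on 127.0.0.1:5901 inside the qube, and the qube name gaming-hvm is a placeholder:

#!/bin/sh
# dom0: /usr/local/bin/vnc-gaming-stdio -- bridge stdio to the qube's VNC port
exec qvm-run --pass-io gaming-hvm "socat STDIO TCP:127.0.0.1:5901"

# dom0: expose the bridge as a local TCP port and point a viewer at it
socat TCP-LISTEN:5901,bind=127.0.0.1,reuseaddr,fork EXEC:/usr/local/bin/vnc-gaming-stdio &
vncviewer 127.0.0.1:5901

This avoids opening any network path between dom0 and the qube, though it keeps the same compression/latency trade-offs you already observed.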
Is it expected that qrexec connections launched from inside the qube no longer work?
I can copy files to the qube, but not from the qube to another one.
It should be a qube like any other; everything is expected to work.
Any idea what could cause this issue? (Maybe I installed something that conflicts?)
I already checked the policy daemon in dom0, and it does not seem to be an issue there (maybe I am blind).
I get a “Data vchan connection failed” with EOF from qrexec-agent-data.c:345:handle_data_client.
I am using Fedora, though, not Debian.
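In case it helps anyone debugging the same thing, the copy can be triggered by hand to see which side fails; this is just the manual form of what qvm-copy does (the file path is a placeholder):

# inside the source qube: call the Filecopy service directly
qrexec-client-vm @default qubes.Filecopy /usr/lib/qubes/qfile-agent /home/user/testfile
# and watch the agent log in the target qube:
sudo journalctl -f -u qubes-qrexec-agent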
No idea; it could be a lot of different things. I haven’t tested with Fedora or Debian in many years.
Okay, after looking through my update log, I suspect that the group @xfce-desktop was the problem. I would really appreciate it if somebody could tell me which packages are the likely culprits. Otherwise I will have to do a reinstall and be more careful this time.
How are you creating your template_gpu?
Honestly, I don’t exactly remember; it will soon be 2 years since I created that standalone. I have upgraded through 3 Fedora versions and, besides that qrexec problem (which I have had since the beginning), no issues at all. (I can’t access a TTY, but I’m not sure whether that’s supposed to be possible.)
I created the xorg.conf like this (I did not remove any sections):
sudo X :1 -configure
sudo mv /root/xorg.conf.new /etc/X11/xorg.conf.d/99-xorg.conf
I replaced the Driver entries with amdgpu, and then I start a display manager like this via rc.local:
pci=$(lspci | grep "VGA" | grep -E "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i 's/"PCI:[^"]*"/"PCI:0:'$pci':0"/g' /etc/X11/xorg.conf.d/99-xorg.conf
sudo systemctl start lightdm.service
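Put together, my guess at the complete /rw/config/rc.local for this (that path is the standard Qubes per-VM startup script, which already runs as root, so the sudo above is redundant there):

#!/bin/sh
# /rw/config/rc.local -- runs as root at every qube start
# (the Driver entries were already replaced with "amdgpu" by hand, per above)
# fix up the BusID, which can change between boots, then start the display manager
pci=$(lspci | grep "VGA" | grep -E "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i 's/"PCI:[^"]*"/"PCI:0:'"$pci"':0"/g' /etc/X11/xorg.conf.d/99-xorg.conf
systemctl start lightdm.service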
I tried to replicate this setup today with a template/AppVM split, but without success.
For others struggling with NVIDIA GPU passthrough FPS performance, this is what solved it for me:
I added the following to the grub config (of the HVM):
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX rd.driver.blacklist=nouveau nvidia-drm.modeset=0"
Now my Factorio FPS went from ~10-12 to the standard 60.
This was inspired by Enabling kernel modesetting (KMS) for nvidia drivers in GPU PCI-passthrough HVM causes horrible performance · Issue #10042 · QubesOS/qubes-issues · GitHub
The distro I tested this with is Fedora 42 XFCE, so your mileage may vary. Just adding this in case others find it useful.
Which file does this line go into?
/etc/default/grub of the HVM
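Worth noting: editing /etc/default/grub only takes effect after regenerating the grub config and rebooting; on a Fedora HVM that is typically:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg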
I mostly followed this guide to create the HVM in the first place: Salt: automating NVIDIA GPU passthrough
I figured out why connections with qrexec failed… It’s because it only works with root and the “user” user. I guess both user IDs need to exist on the two connecting systems?
This seems so obvious, but somehow I did not even consider it.
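So the fix that follows from this is just making sure the account exists inside the qube, e.g.:

# inside the standalone: create the default qrexec user if it is missing
# ("user" is the account name Qubes templates normally ship with)
sudo useradd -m user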
How do I tell a Windows qube that it should be rendered on the dedicated display and not inside the screen used by all the other qubes?
As far as I understand, the Xorg configuration mentioned in the guide can only be done on Linux VMs.