GPU passthrough post

I might be experiencing the same problem as you. I also have two NVIDIA cards, one for dom0 and the other for the HVM. I should add that the display blacks out very often (for a couple of seconds at a time). When checking the logs in the Windows 10 Event Viewer, I found the following message correlating with the time of each blackout:

Display driver nvlddmkm stopped responding and has successfully recovered.

Also, the Windows 10 HVM is very laggy. I’m only facing this on Windows.

Yes, I have this as well. Once or twice per minute I get “blackouts”, which together with the lag makes the VM unusable (I couldn’t even install a game to check the FPS).

Have you figured it out? I still haven’t found a solution. So far I’ve tried downgrading the drivers, installing different versions of Windows such as 7 and 11, and even enabling UEFI. I also tried enabling permissive mode as mentioned in the PCI troubleshooting guide; the result was the same.

Hi, I followed all the steps, but performance in the HVM is not very good. It seems like an issue with disk speed, but I am using an NVMe drive. I am on an 11th-gen i5 with an RTX 3050. Please let me know if there is a fix.

Thanks

-Ryan

Installing QWT has helped SSD performance, but the mouse doesn’t line up and there is screen tearing.

Hi all, is this thread still active? I hope I’m posting in the right spot.

I installed Qubes 4.0.4 in the fall of 2021 and was able to successfully pass through my Nvidia GTX 1050 to a Win10 HVM with well over 3.75GB of memory allocated by following neowutran’s great work (Contents/windows-gaming-hvm.md at master · Qubes-Community/Contents · GitHub).

The Win10 HVM worked perfectly fine with great performance, outputting to a second monitor. The only issue was that, after any dom0 update, booting the HVM resulted in a system freeze and an ensuing reboot… but I found this was easily resolved by re-running “grub2-mkconfig -o /boot/grub2/grub.cfg” after each dom0 update…
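For reference, the workaround amounts to regenerating the GRUB config in a dom0 terminal after each dom0 update (path as in this post; adjust if your system keeps the GRUB config elsewhere, e.g. under /boot/efi on EFI installs):

```shell
# In dom0, after each dom0 update: regenerate the GRUB config so the
# kernel command-line options (e.g. PCI hiding) survive the update
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```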

THEN, several weeks ago, I upgraded the RAM on my Lenovo D30 ThinkStation (it uses DDR3 PC3-12800E). While doing this, I noticed that the RAM slots were not populated correctly for best performance per the manufacturer (i.e., the RAM sticks were all on one side of each CPU, in an order like DIMM slots 6, 2, 5, 1 instead of 1, 2, 3, 4, etc.). So I installed all the sticks, including the additions, in the correct fashion per the manufacturer’s hardware manual…

Since then, I cannot get the Win10 HVM to work with GPU passthrough… NOR will it work with more than 3.6GB of RAM assigned (even when I don’t pass through the GPU). The HVM works fine without any Windows errors if I boot it with less than 3.75GB of RAM and no GPU passthrough.

It is worth noting that I’m also running this HVM off secondary storage I added to Qubes by following the original documentation (Secondary storage | Qubes OS)… not sure if this matters.

So…IN SUMMARY:

  1. Boot the Win HVM with <=3.6GB RAM and NO GPU passthrough = works fine

  2. Boot the Win HVM with >3.6GB RAM: it hangs in the terminal window after “Machine UUID…” for 5 minutes, then reports the error: “Domain WIN10 has failed to start: Cannot connect to qrexec agent for 300 seconds, see /var/log/xen/console/guest-WIN10.log for details”

  3. Boot the Win HVM with <=3.6GB RAM + GPU passthrough (with the no-strict-reset option per neowutran’s guide linked above): the system freezes and reboots

Logs attached:

Thank you for your help!
guest-WIN10.log (34.6 KB)
guest-WIN10-dm.log (146.7 KB)
guid.WIN10.log (1.4 MB)
qrexec.WIN10.log (60 Bytes)


Were you able to solve this?

Nothing yet…
Thinking of backing up all my AppVMs and either upgrading to Qubes 4.1 or trying a fresh install of 4.0 to see if it resolves the issue…

Can’t tell if adding the RAM is the issue, or if running the HVM on secondary storage added to Qubes is the cause…

If you don’t use the secondary storage for VMs other than Windows, you could try passing the entire disk to Windows; if it’s an NVMe drive, you can pass it through like any other PCIe device.
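A sketch of what that could look like with Qubes’ qvm-pci tool (the device address 03_00.0 and the VM name windows-hvm are placeholders; find your NVMe controller’s real address with the list first):

```shell
# In dom0: list PCI devices and note the NVMe controller's address
qvm-pci
# Attach the controller persistently to the Windows HVM
# (placeholder address and VM name; substitute your own)
qvm-pci attach --persistent windows-hvm dom0:03_00.0
```

Note that once the controller is attached to the HVM, dom0 can no longer use that disk, so this only makes sense if nothing else depends on it.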

Hmm… it’s not NVMe… but when I do a fresh install, I plan on using an SSD (eSATA), as the 5400rpm disk I’m using now is super slow…

It was working, you moved/added RAM sticks, and then it stopped working?
If you remove the RAM sticks you added/moved, does it work again?
You haven’t moved the GPU to another PCI slot during the process?
And what about >3.6GB + GPU passthrough?

Hey neowutran!
Basically yes, it was working perfectly (thanks for your original guide, btw!)

  1. After I moved/added the RAM sticks, it stopped working. Moving the sticks back to where they were did not resolve the problem. (Note: I can’t be sure exactly which specific RAM sticks were in which specific DIMM slots… I don’t know if that matters? I have a mix of 8GB and 4GB sticks in pairs.)

  2. The GPU used for passthrough is, and has always been, in PCI slot 1 (closest to the CPU); the BIOS is set to boot off of PCI slot 2 (the weaker GPU that Qubes/dom0 uses)

  3. Not sure what you mean by >3.6GB + GPU passthrough… I’ve run through the steps in your guide exactly, and checked/repeated them several times. The “So… IN SUMMARY” list in my original post describes the behavior: GPU passthrough under any condition causes a sudden system reboot, and without GPU passthrough the Win HVM hangs/fails on the qrexec agent if I set >3.6GB RAM.

NOTEWORTHY: when I plug the secondary monitor into GPU 1 (the passthrough GPU), it does show up in the display settings in dom0, and I am able to use it in dom0… so perhaps hiding it from dom0 is not working?

It usually matters, but it seems you’ve read the mobo manual. With “dual channel”, slots are usually populated like A1 B1 A2 B2, with each channel pair (A and B) holding matching sticks. So in your case, alternating 8GB 4GB 8GB 4GB.
But recheck your mobo manual to be sure.

For point 3, make sure you actually hide the GPU from dom0 via the kernel command line (the rd.qubes.hide_pci option).
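For reference, hiding the GPU from dom0 is typically done by adding a rd.qubes.hide_pci entry to the kernel command line in the GRUB defaults and regenerating the config; a sketch, where the addresses 01:00.0 / 01:00.1 are placeholders for the GPU and its audio function:

```shell
# In dom0, edit /etc/default/grub and append to GRUB_CMDLINE_LINUX
# (placeholder addresses; use your GPU's actual BDFs from lspci):
#   GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=01:00.0,01:00.1"
# Then regenerate the GRUB config and reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

After rebooting, the GPU should no longer appear as usable in dom0’s display settings; if it still does, the hide option didn’t take effect.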

I had the same problem; for me it happened only when I left the GPU unbound to any VM. So always attach the GPU to a domain.
I also found a workaround that works for me, to run between VM reboot cycles. I always shut down the VM (never reboot it), and between OFF and ON I run this:

# remove the GPU (run as root in dom0; xx:xx.0 is your GPU's PCI address)
echo "1" | tee /sys/bus/pci/devices/0000:xx:xx.0/remove
# remove the GPU's audio function
echo "1" | tee /sys/bus/pci/devices/0000:xx:xx.1/remove
# rescan the PCI bus to reconnect the devices
echo "1" | tee /sys/bus/pci/rescan

PS: in any VM config file you can force “shutdown on reboot” (vanilla Xen: on_reboot="destroy"), but from what I know this is not really straightforward in Qubes. The advantage is that even if you forget and use reboot instead of shutdown, the system will force a destroy/shutdown anyway, letting you run the above commands.

Hi,
I have a slightly different question regarding this post.
Well, I would like to occasionally play a simple game that any integrated graphics will handle. The only requirement is OpenGL support.
Is it possible to hook up integrated graphics to Windows HVM?
I’ve seen the answer “no” on GitHub, but it doesn’t quite satisfy me. Is it really not possible to somehow share the graphics, or to simply switch integrated graphics support from dom0 to the Windows HVM and back?