Qubes 4.1 install black screens

Hi, there! New to the forum. :slightly_smiling_face:
I ran into something odd while installing Qubes 4.1.

Ryzen 7 3700X, X570 Gaming X, 32 GB RAM, 1 TB 980 Evo (for the Windows qube), 500 GB 970 Evo (Qubes install). Since the CPU has no onboard GPU, I have two dedicated GPUs: an AMD RX 550 for Qubes and an Nvidia GTX 970 for GPU passthrough. (Still working on that.)

With a single monitor connected to the RX 550 in slot 1, I repeatedly hit black screens while attempting to install Qubes.

When I then connected two additional monitors to the secondary GTX at the same time, the main screen still stayed black, but on one of the other screens Qubes was silently waiting for me to enter the passphrase…

I don’t know how many of my installation attempts were unnecessary simply because Qubes showed up on a different video output…

Did I do something weird? Has anyone experienced something similar? Is this a bug?

Any thoughts on this are very welcome!


I have a similar issue with the Qubes OS 4.1.0 ISO from Download Qubes OS | Qubes OS. When I try to install from it, I get this error:

X startup failed, aborting installation

I have 2 displays:

  • the first is connected to the integrated GPU, an Intel UHD Graphics 770
  • the second is connected to an Nvidia GeForce GT 1030

The integrated GPU is set as the primary video source in my motherboard BIOS.
The Qubes installer’s GRUB menu is displayed on the integrated GPU, but after I select the installation entry in GRUB, all output redirects to the Nvidia card and the integrated GPU shows only a black screen. In the end I get the error above.

But with the latest Qubes 4.1 ISO build with kernel-latest from Index of /qubes/iso/, I don’t get this error; the installation process runs on the integrated GPU and everything works fine.

You can also try hiding the GPU that you’re going to pass through; that may stop Qubes from using its output, as described here:


Thanks for your answer @tzwcfq !
I’m quite familiar with the suggested links… :slight_smile:

I hid the secondary GPU, which is actually in motherboard slot 1 for better Windows performance. With that, video output is routed to the Radeon as it should be.

But when I try to attach the second GPU to the Windows qube, it says:
qvm-pci: error: backend vm ‘dom0’ doesn’t expose device ‘0a:00.0’
xl pci-assignable-list shows the device as assignable, though.
I rebooted to reset the configuration, but no luck. I’ve tried a whole lot to get this working, because this machine is supposed to become my main workstation.

What I tried:

  • with and without rd.qubes.hide_pci=
  • with and without xen-pciback.hide=()
  • with and without xen-pciback.passthrough=1
  • with and without xen-pciback.permissive

and a lot of combinations of those.
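For anyone following along: these options all go on the kernel command line in /etc/default/grub in dom0. A minimal sketch, assuming the passthrough GPU sits at 0a:00.0 / 0a:00.1 (adjust the BDFs to your own hardware):

```shell
# /etc/default/grub fragment (the BDFs 0a:00.0 / 0a:00.1 are assumptions from this thread)
# rd.qubes.hide_pci hides the device from dom0 starting in the initramfs;
# the xen-pciback.* options are the alternative pciback knobs listed above.
GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0,0a:00.1"
# afterwards regenerate the config:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```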

attached with and without the no-strict-reset option

RAM patch applied: allocated 12288 MB for the Windows qube

Kernelopts: applied 16 MB instead of 8 (DMA cache?)

The Windows qube boots now, but only without the GPU attached.
With the GPU attached: black screen and shutdown.

Could someone point me to the relevant log files? I’ll dig through the forum later as well…

Thanks a lot!

I passed through my Nvidia GeForce GT 1030 to a Windows 10 HVM with QWT without any problems using the guide above.
I’m on Qubes 4.1 current-testing.

  1. My GPU consists of two devices: dom0:01_00.0 and dom0:01_00.1

  2. I edited /etc/default/grub and added the PCI hiding:

GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=01:00.0,01:00.1 "

then regenerated the GRUB config (on a UEFI install the file is /boot/efi/EFI/qubes/grub.cfg instead):

grub2-mkconfig -o /boot/grub2/grub.cfg
  3. I downgraded the xen-hvm-stubdom-linux and xen-hvm-stubdom-linux-full packages because of this problem:

sudo qubes-dom0-update xen-hvm-stubdom-linux-1.2.3-1.fc32 xen-hvm-stubdom-linux-full-1.2.3-1.fc32
sudo dnf downgrade xen-hvm-stubdom-linux-1.2.3-1.fc32 xen-hvm-stubdom-linux-full-1.2.3-1.fc32

Then I patched /usr/libexec/xen/boot/qemu-stubdom-linux-rootfs and /usr/libexec/xen/boot/qemu-stubdom-linux-full-rootfs as stated in the guide.
Also note that your VM’s name must start with “gpu_” for this patch to work.

  4. Then I attached the GPU devices to my HVM:
qvm-pci attach gpu_windows-hvm dom0:01_00.0 -o permissive=True -o no-strict-reset=True
qvm-pci attach gpu_windows-hvm dom0:01_00.1 -o permissive=True -o no-strict-reset=True

Thanks for the downgrade link (3.) @tzwcfq
I’ll try later and report back.

I don’t use QWT and I’d rather not, since it’s marked as optional. But I’ll try with QWT, too.

In one log file it said something like: (Windows) system not capable of PCI… Any ideas about that?


I didn’t come across this error. Can you paste the exact text?
Maybe it’s in the dom0 libvirt log:

Hi, the log was from:
dom0 /var/log/console/guest-gpu_windows-dm.log

It doesn’t quite seem to be an error, though:

[2022-05-05 17:44:19] Performance Events: PMU not available due to virtualizati>
[2022-05-05 17:44:19] devtmpfs: initialized
[2022-05-05 17:44:19] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffff>
[2022-05-05 17:44:19] futex hash table entries: 16 (order: -4, 384 bytes, linea>
[2022-05-05 17:44:19] NET: Registered protocol family 16
[2022-05-05 17:44:19] xen:grant_table: Grant tables using version 1 layout
[2022-05-05 17:44:19] Grant table initialized
[2022-05-05 17:44:19] PCI: setting up Xen PCI frontend stub
[2022-05-05 17:44:19] xen:balloon: Initialising balloon driver

[2022-05-05 17:44:19] PCI: System does not support PCI

[2022-05-05 17:44:19] clocksource: Switched to clocksource xen
[2022-05-05 17:44:19] NET: Registered protocol family 2

I have the same message for my system in that log (dom0 /var/log/xen/console/guest-gpu_windows-dm.log), but GPU passthrough works fine.

Just noticed that you should replace ‘:’ with ‘_’:

qvm-pci attach gpu_windows dom0:0a_00.0 -o permissive=True -o no-strict-reset=True

That fixed:
qvm-pci: error: backend vm ‘dom0’ doesn’t expose device ‘0a:00.0’
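The identifier mix-up can be sketched in a couple of shell lines. The BDF 0a:00.0 is the one from this thread; any slot works the same way:

```shell
# qvm-pci expects the PCI identifier with '_' instead of ':' after the
# backend name. Bash parameter substitution converts the lspci-style BDF
# (0a:00.0 is the assumed slot from this thread):
bdf="0a:00.0"
qvm_id="dom0:${bdf//:/_}"   # replace every ':' in the BDF with '_'
echo "$qvm_id"              # -> dom0:0a_00.0
```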

Thanks! But both attach methods, console and GUI, result in a black screen and shutdown… wait, let me try something.

Try attaching some other PCI device (a USB controller or similar) to check whether it’s a problem with your GPU or with PCI passthrough in general.

Will do

With qubes-dom0-update kernel-latest, building the Windows qube with 12 GB RAM was possible without the 3.5 GB patch!

Sorry for the delay @tzwcfq and thanks again for helping out!
I have successfully attached a USB PCI bridge to the Windows HVM.

Now trying with the Nvidia card.
/var/log/libvirt/libxl/libxl-driver.log shows the following:

2022-05-07 18:24:33.982+0000: libxl: libxl_pci.c:1484:libxl__device_pci_reset: write to /sys/bus/pci/devices/0000:0a:00.0/reset returned -1: Inappropriate ioctl for device
2022-05-07 18:24:33.987+0000: libxl: libxl_pci.c:1489:libxl__device_pci_reset: The kernel doesn’t support reset from sysfs for PCI device 0000:0a:00.1
2022-05-07 18:25:06.346+0000: libxl: libxl_pci.c:1484:libxl__device_pci_reset: write to /sys/bus/pci/devices/0000:0a:00.0/reset returned -1: Inappropriate ioctl for device
2022-05-07 18:25:06.463+0000: libxl: libxl_pci.c:1489:libxl__device_pci_reset: The kernel doesn’t support reset from sysfs for PCI device 0000:0a:00.1

I tried with and without “no-strict-reset=True”.
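In case it helps others triage: those reset warnings can be filtered out of the driver log. A sketch using two sample lines from above (in dom0 you would point LOG at the real /var/log/libvirt/libxl/libxl-driver.log):

```shell
# Count libxl PCI-reset warnings in a libxl driver log.
# Sample lines copied from this thread; LOG stands in for the real file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2022-05-07 18:24:33.982+0000: libxl: libxl_pci.c:1484:libxl__device_pci_reset: write to /sys/bus/pci/devices/0000:0a:00.0/reset returned -1: Inappropriate ioctl for device
2022-05-07 18:24:33.987+0000: libxl: libxl_pci.c:1489:libxl__device_pci_reset: The kernel doesn't support reset from sysfs for PCI device 0000:0a:00.1
EOF
grep -c 'libxl__device_pci_reset' "$LOG"   # -> 2
```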

If you’re on current-testing then you need to downgrade the packages and apply the patch from the guide above.
Check this issue: