Devices greyed out - qvm-pci gives "failed"

Hello,
on a fresh install of Qubes OS, I have the problem that under “Settings” for a VM, the “Devices” tab is greyed out - I cannot select it.

Similarly, if I run in dom0

[user@dom0]$ sudo qvm-pci
Failed to list 'pci' devices, this device type either does not exist or you do not have access to it

Nevertheless, I can run

sudo lspci

and I get a list of PCI devices.

For

[user@dom0]$ sudo qvm-usb

I get a list of all the USB devices, so this works.

I suspect this is part of a larger problem, namely making a Thunderbolt hub work with Qubes OS - but I wanted to keep this as a separate thread, since it may not be the same issue.

Appreciate any thought,
Raspyvotan

Same problem here after a clean install of 4.1.
Did you manage to solve this problem?

Dear zithro,
I saw your answers to similar problems here and here

Tried to run exactly what you propose:

sudo xl pci-assignable-list

is empty. Therefore, I try:

sudo xl pci-assignable-add 0000:00:1f.3

which works, as xl pci-assignable-list reports back this device.

Then I try to assign:

sudo xl pci-attach 8 0000:00:1f.3

but this gives me an error message that I do not understand:

libxl: error: libxl_pci.c:1489:libxl__device_pci_reset: The kernel doesn't support reset from sysfs for PCI device 0000:00:1f.3
libxl: error: libxl_pci.c:1444:pci_add_dm_done: Domain 8:xc_assign_device failed: Operation not supported
libxl: error: libxl_pci.c:1774:device_pci_add_done: Domain 8:libxl__device_pci_add  failed for PCI device 0:0:1f.3 (rc -3)
libxl: error: libxl_device.c:1450:device_addrm_aocomplete: unable to add device

I guess the first line says the kernel cannot reset the device via sysfs, but how can I fix this?
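One way to check that first error yourself is to look for the sysfs `reset` file, which is what libxl is complaining about. A minimal check in dom0 (the device address is the one from this thread; adjust as needed):

```shell
# Check whether the kernel exposes a sysfs reset method for the device.
# 0000:00:1f.3 is the address used in this thread; adjust to your device.
DEV=0000:00:1f.3
if [ -e "/sys/bus/pci/devices/$DEV/reset" ]; then
    echo "sysfs reset available for $DEV"
else
    echo "no sysfs reset for $DEV"
fi
```

Note that many on-board devices (audio controllers, LPC bridges, etc.) simply have no reset method at all, in which case the libxl warning is expected rather than fixable.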
Appreciate your help,
Raspyvotan

There could be a lot of reasons.
First

Afaik, PCI passthrough is only available for HVM and PV. Qubes uses PVH by default, which disallows PTing devices.
Also, enmus had success using Xen’s default way of passing devices to pciback via the grub kernel command line, instead of the Qubes way. But I dunno why it made a difference.
Check which kernel driver/module is used by the device you wanna pass through with lspci -k.

You can also try xl info | grep caps to check if directio is supported on your host.

Last thing: I see you’re trying to passthrough a device identified by 00:1f.3. What is that device, and what are the .1 and .2 devices?
I’m not entirely sure about that - read the Xen wiki on it - but IIRC you have to pass all 00:1f.x devices at once. EDIT: this is true ONLY for multi-function devices.
As an example, you cannot PT a GPU without PTing also the audio part.

… because of the possible iommu_groups sharing problem, I guess?

Also, I haven’t found all the answers either

It can be a problem too, but I think recent kernels have worked around it (previously people had to use what’s called the ACS patch). Don’t quote me on that though, it needs verification.

What I meant is that the PCI “BDF notation” stands for Bus-Device-Function, and PCI passthrough can only PT a whole device. From the Xen wiki:
This error usually happens when you're trying to passthru only a single function from a multi-function device (for example a dual-port nic)
That’s why I used the GPU video+audio analogy: you cannot pass one w/o the other.
(PS: I’ve edited my previous post, I wrote something wrong).
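To see whether 00:1f.3 is just one function of a multi-function device, you can list every function in that slot via sysfs (works in dom0 even without extra tools; the message on the right is just a fallback if the slot is absent):

```shell
# List every function of PCI slot 00:1f (domain 0000, bus 00).
# Multi-function devices must be passed through together, all functions at once.
ls -d /sys/bus/pci/devices/0000:00:1f.* 2>/dev/null || echo "slot 00:1f not present"
```

Each entry printed is one function (.0, .3, .4, …) sharing the same slot; `lspci -s 00:1f` gives the same list with human-readable names.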

Solution from the mailing list (qubes-issue):

I de-activated Intel Volume Management Engine in the BIOS, this solved the problem.

Some more background:
With Intel Volume Management Engine turned on, I had some PCI devices whose addresses seem not to match the pattern Qubes OS expects; they looked similar to:

10000:e0:06.0 PCI bridge: Intel Corporation 11th Gen Core Processor PCIe Controller (rev 01)
10000:e0:17.0 SATA controller: Intel Corporation Device a0d3 (rev 20)
10000:e1:00.0 Non-Volatile memory controller: Sandisk Corp WD Blue SN550 NVMe SSD (rev 01)

These caused the problem.

Most of the PCI devices look like the ones below, with the usual 0000: domain prefix:

0000:00:00.0 Host bridge: Intel Corporation 11th Gen Core Processor Host Bridge/DRAM Registers (rev 01)
0000:00:02.0 VGA compatible controller: Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01)
0000:00:04.0 Signal processing controller: Intel Corporation TigerLake-LP Dynamic Tuning Processor Participant (rev 01)
0000:00:06.0 System peripheral: Intel Corporation Device 09ab
0000:00:08.0 System peripheral: Intel Corporation GNA Scoring Accelerator module (rev 01)
0000:00:0a.0 Signal processing controller: Intel Corporation Tigerlake Telemetry Aggregator Driver (rev 01)
0000:00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 USB Controller (rev 01)
0000:00:0e.0 RAID bus controller: Intel Corporation Volume Management Device NVMe RAID Controller
0000:00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20)
0000:00:14.2 RAM memory: Intel Corporation Tiger Lake-LP Shared SRAM (rev 20)
0000:00:14.3 Network controller: Intel Corporation Wi-Fi 6 AX201 (rev 20)
0000:00:15.0 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #0 (rev 20)
0000:00:15.1 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #1 (rev 20)
0000:00:16.0 Communication controller: Intel Corporation Tiger Lake-LP Management Engine Interface (rev 20)
0000:00:17.0 System peripheral: Intel Corporation Device 09ab
0000:00:1f.0 ISA bridge: Intel Corporation Tiger Lake-LP LPC Controller (rev 20)
0000:00:1f.3 Multimedia audio controller: Intel Corporation Tiger Lake-LP Smart Sound Technology Audio Controller (rev 20)
0000:00:1f.4 SMBus: Intel Corporation Tiger Lake-LP SMBus Controller (rev 20)
0000:00:1f.5 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP SPI Controller (rev 20)

Hope this helps.
Raspyvotan

Hello. I have the same problem.
But I can’t disable Intel Volume Management Engine in the BIOS.
My new MSI laptop does not support disabling it.
Are there alternative ways to make the “Devices” tab active?

Try using the rd.qubes.hide_pci grub option for your 10000:xx:xx.x devices.
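A sketch of how that option is usually set in dom0 - the addresses are the example ones from this thread, and whether the option accepts a 5-digit domain prefix at all is an open question:

```shell
# /etc/default/grub in dom0 - append the devices to hide (example addresses):
# GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=10000:e0:06.0,10000:e0:17.0"
#
# Then regenerate the grub config (the output path varies by system, e.g.
# /boot/grub2/grub.cfg on BIOS installs) and reboot:
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```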

UPD:
On second thought, maybe you won’t be able to hide them with rd.qubes.hide_pci, since Qubes OS handles that option and it can’t see these 10000:xx:xx.x devices (they don’t show up in qvm-pci either).
Maybe there’s some Xen option for this as well, but I don’t know of one.

This issue should be fixed in the next Qubes OS release:

Or you can try a weekly build right now:

Thanks, it worked =)