LnkSta dropping to 2.5GT/s in both dom0 and appVM when passed through

I’m passing through two GPUs to a sys-ollama appVM and have been doing so for a while now

Both GPUs are connected to PCIe 4.0 slots (x16 electrical) and running at PCIe 4.0 x8. They’re attached via qvm-pci attach --persistent -o no-strict-reset=True -o permissive=True (I also tried without no-strict-reset=True and saw the same behavior)
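
Concretely, the attach commands look like this for the first GPU and its audio function (dom0 BDFs 05:00.0 / 05:00.1 taken from the device-model log further down; the second GPU follows the same pattern with its own BDF):

$ qvm-pci attach --persistent -o no-strict-reset=True -o permissive=True sys-ollama dom0:05_00.0
$ qvm-pci attach --persistent -o no-strict-reset=True -o permissive=True sys-ollama dom0:05_00.1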

The behavior: When I start up Qubes and look in dom0, I see what I expect for each of the GPUs:

$ for bdf in $(sudo lspci | grep -E 'VGA.*NVIDIA' | cut -d ' ' -f 1); do
sudo lspci -s $bdf -vvv | grep LnkSta: ; done
LnkSta: Speed 16GT/s, Width x8 (downgraded)
LnkSta: Speed 16GT/s, Width x8 (downgraded)

The “downgraded” bit is because of the x8 and is expected
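
For anyone who wants to double-check that reasoning, comparing LnkCap (what the link is capable of) against LnkSta (what it actually negotiated) for one of the cards is just:

$ sudo lspci -s 05:00.0 -vvv | grep -E 'LnkCap:|LnkSta:'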

What is not expected is that when I start up sys-ollama, the LnkSta changes to the following, in BOTH dom0 and the appVM:

LnkSta: Speed 2.5GT/s (downgraded), Width x8 (downgraded)
LnkSta: Speed 2.5GT/s (downgraded), Width x8 (downgraded)

When I detach it, it returns to full speed in dom0’s lspci -vvv output
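
For reference, detaching is just the reverse, once per function:

$ qvm-pci detach sys-ollama dom0:05_00.0
$ qvm-pci detach sys-ollama dom0:05_00.1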

I’m almost certain that I didn’t encounter this previously, and loading a model in the appVM seems to take a very long time, so I think the slow link speed is being reported accurately - maybe I’m imagining it, but I don’t think so…

There are a few differences since the time I last looked closely at this, when it was working as expected:

  1. I’ve made physical changes to the hardware configuration (notably, moving the GPUs around - an obvious variable of interest)
  2. Newer kernels (currently on 6.6.63-1.qubes.fc37.x86_64 in dom0 and the Debian 12 kernel 6.1.0-29-amd64 in the appVM)
  3. Probably at least 1 or 2 Qubes updates (this is Qubes R4.2.3)
  4. A few updates of the appVM, which likely included the NVIDIA drivers (currently on 570.86.15)

Is anyone knowledgeable about the hardware or the software/virtualization side of this able to tell me what might be going on here?

It seems peculiar to me that dom0 reports the expected link speed until I attach the GPU to the appVM, and that it returns to the expected link speed when I detach it

I will attach journalctl -b / dmesg output if necessary - that is, if this isn’t some simple known issue or a silly oversight on my part

EDIT: I have AER enabled, and I checked both dom0 and the appVM; I don’t see any AER messages indicating PCIe issues
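
The check on each side was just a grep over the kernel logs, along the lines of:

$ sudo dmesg | grep -i aer
$ sudo journalctl -b | grep -i aer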

I do see the following in guest-sys-ollama-dm.log, but I’m not sure it would impact the PCIe link speed:

[2025-01-30 10:20:43] [00:06.0] xen_pt_check_bar_overlap: Warning: Overlapped to device [00:02.0] Region: 1 (addr: 0xf4000000, len: 0x1000000)
[2025-01-30 10:20:43] [00:06.0] xen_pt_region_update: Warning: Region: 0 (addr: 0xf4000000, len: 0x1000000) is overlapped.
[2025-01-30 10:20:43] [00:07.0] xen_pt_check_bar_overlap: Warning: Overlapped to device [00:03.0] Region: 0 (addr: 0xf5000000, len: 0x1000000)
[2025-01-30 10:20:43] [00:07.0] xen_pt_region_update: Warning: Region: 0 (addr: 0xf5080000, len: 0x4000) is overlapped.

When I grep for the BDFs of the two GPUs, this is the full output from that log:

[2025-01-30 10:20:42] [00:06.0] xen_pt_realize: Assigning real physical device 05:00.0 to devfn 0x30
[2025-01-30 10:20:42] [00:06.0] xen_pt_register_regions: IO region 0 registered (size=0x01000000 base_addr=0xf4000000 type: 0x0)
[2025-01-30 10:20:42] [00:06.0] xen_pt_register_regions: IO region 1 registered (size=0x800000000 base_addr=0x10800000000 type: 0x4)
[2025-01-30 10:20:42] [00:06.0] xen_pt_register_regions: IO region 3 registered (size=0x02000000 base_addr=0x11000000000 type: 0x4)
[2025-01-30 10:20:42] [00:06.0] xen_pt_register_regions: IO region 5 registered (size=0x00000080 base_addr=0x00001000 type: 0x1)
[2025-01-30 10:20:42] [00:06.0] xen_pt_register_regions: Expansion ROM registered (size=0x00080000 base_addr=0xf5000000)
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x0010 mismatch! Emulated=0x0000, host=0xf4000000, syncing to 0xf4000000.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x0014 mismatch! Emulated=0x0000, host=0x000c, syncing to 0x000c.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x0018 mismatch! Emulated=0x0000, host=0x0108, syncing to 0x0108.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x001c mismatch! Emulated=0x0000, host=0x000c, syncing to 0x000c.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x0020 mismatch! Emulated=0x0000, host=0x0110, syncing to 0x0110.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x0024 mismatch! Emulated=0x0000, host=0x1001, syncing to 0x1001.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x0062 mismatch! Emulated=0x0000, host=0x0003, syncing to 0x0003.
[2025-01-30 10:20:42] [00:06.0] xen_pt_pm_ctrl_reg_init_off: PCI power management control passthrough is off
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x006a mismatch! Emulated=0x0000, host=0x0080, syncing to 0x0080.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x007c mismatch! Emulated=0x0000, host=0x112c8de1, syncing to 0x12c8de1.
[2025-01-30 10:20:42] [00:06.0] xen_pt_config_reg_init: Offset 0x008a mismatch! Emulated=0x0000, host=0x1084, syncing to 0x1084.
[2025-01-30 10:20:42] [00:06.0] xen_pt_pci_intx: intx=1
[2025-01-30 10:20:42] [00:06.0] xen_pt_realize: Real physical device 05:00.0 registered successfully
[2025-01-30 10:20:42] [00:07.0] xen_pt_realize: Assigning real physical device 05:00.1 to devfn 0x38
[2025-01-30 10:20:42] [00:07.0] xen_pt_register_regions: IO region 0 registered (size=0x00004000 base_addr=0xf5080000 type: 0x0)
[2025-01-30 10:20:42] [00:07.0] xen_pt_config_reg_init: Offset 0x0010 mismatch! Emulated=0x0000, host=0xf5080000, syncing to 0xf5080000.
[2025-01-30 10:20:42] [00:07.0] xen_pt_config_reg_init: Offset 0x0062 mismatch! Emulated=0x0000, host=0x0003, syncing to 0x0003.
[2025-01-30 10:20:42] [00:07.0] xen_pt_pm_ctrl_reg_init_off: PCI power management control passthrough is off
[2025-01-30 10:20:42] [00:07.0] xen_pt_config_reg_init: Offset 0x006a mismatch! Emulated=0x0000, host=0x0080, syncing to 0x0080.
[2025-01-30 10:20:42] [00:07.0] xen_pt_config_reg_init: Offset 0x007c mismatch! Emulated=0x0000, host=0x12c8de1, syncing to 0x12c8de1.
[2025-01-30 10:20:42] [00:07.0] xen_pt_config_reg_init: Offset 0x008a mismatch! Emulated=0x0000, host=0x1084, syncing to 0x1084.
[2025-01-30 10:20:42] [00:07.0] xen_pt_pci_intx: intx=2
[2025-01-30 10:20:42] [00:07.0] xen_pt_realize: Real physical device 05:00.1 registered successfully
[2025-01-30 10:20:43] [00:06.0] xen_pt_check_bar_overlap: Warning: Overlapped to device [00:02.0] Region: 1 (addr: 0xf4000000, len: 0x1000000)
[2025-01-30 10:20:43] [00:06.0] xen_pt_region_update: Warning: Region: 0 (addr: 0xf4000000, len: 0x1000000) is overlapped.
[2025-01-30 10:20:43] [00:07.0] xen_pt_check_bar_overlap: Warning: Overlapped to device [00:03.0] Region: 0 (addr: 0xf5000000, len: 0x1000000)
[2025-01-30 10:20:43] [00:07.0] xen_pt_region_update: Warning: Region: 0 (addr: 0xf5080000, len: 0x4000) is overlapped.
[2025-01-30 10:20:46] [00:06.0] xen_pt_msgctrl_reg_write: setup MSI (register: 81).
[2025-01-30 10:20:46] [00:06.0] xen_pt_msi_setup: MSI mapped with pirq 279.
[2025-01-30 10:20:46] [00:06.0] msi_msix_update: Updating MSI with pirq 279 gvec 0x22 gflags 0x2 (entry: 0x0)
[2025-01-30 10:20:46] [00:06.0] msi_msix_update: Updating MSI with pirq 279 gvec 0x22 gflags 0x2 (entry: 0x0)

When the card is idle, it will drop the speed to 2.5GT/s, and when you use the card it goes back to 16GT/s.
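
You can confirm this from inside the VM with nvidia-smi (assuming the NVIDIA driver is loaded there), which reports the current vs. maximum link generation:

$ nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current --format=csv

If pcie.link.gen.current jumps back to 4 while a model is actually running, the 2.5GT/s reading is just idle power management.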


Oops! Yes, I see that you’re correct

I wonder if I previously had PCIe link power management off and it’s now on? I’m certain it didn’t have this behavior before…
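
If I want to verify, the kernel exposes the active PCIe link power management (ASPM) policy, assuming it’s compiled in:

$ cat /sys/module/pcie_aspm/parameters/policy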

Also, the reason the LLM is so slow to load, I guess, is that it’s so much larger than the others I’ve used (~40GB vs the 4-10GB models) and the disk it’s on can only read at ~300MB/sec
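
The math checks out: ~40GB at ~300MB/sec is a bit over two minutes just to read the file off disk. A quick way to sanity-check the sequential read speed (the model path here is just a placeholder):

$ dd if=/path/to/model.gguf of=/dev/null bs=1M iflag=direct status=progress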

Thank you - I was about to take the GPUs out and reseat them, which is a slow, miserable task given the bubble gum and zip ties holding them in place