Understanding the code path for admin.vm.device.pci.Attach

Following the code in qubes-core-admin I can see that:

  • QubesAdminAPI.vm_device_attach calls DeviceCollection.attach
  • … which emits device-attach:pci (see the sketch just below)
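
To make that flow concrete, here is a minimal sketch of what DeviceCollection.attach in qubes/devices.py does. This is a paraphrase with approximate names, not the actual implementation, but it shows how the :pci suffix comes from the collection's bus name:

```python
# Minimal sketch of DeviceCollection.attach (qubes/devices.py);
# names and details are approximate, not the real code.

class DeviceAlreadyAttached(Exception):
    """Raised when the same device is attached twice."""

class DeviceCollection:
    def __init__(self, vm, bus):
        self._vm = vm
        self._bus = bus      # 'pci', 'block', 'usb', ...
        self._set = set()    # recorded assignments

    async def attach(self, assignment):
        # refuse to attach the same device twice
        if assignment.device in {a.device for a in self._set}:
            raise DeviceAlreadyAttached(assignment.device)
        # extensions in qubes/ext/*.py subscribe to these
        # bus-suffixed events and do the actual work
        await self._vm.fire_event_async(
            'device-pre-attach:' + self._bus,
            device=assignment.device, options=assignment.options)
        self._set.add(assignment)
        await self._vm.fire_event_async(
            'device-attach:' + self._bus,
            device=assignment.device, options=assignment.options)
```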

Now qubes/devices.py documents that extensions should provide handlers for several events, e.g. for PCI: device-list:pci and device-attach:pci (the description is a bit confusing, sometimes using :bus and sometimes :class without making it apparent that these are placeholders; at least those two examples both use the same form). We can indeed see several handlers in qubes/ext/pci.py, including one for device-list:pci.
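
For reference, registering such a handler in an extension looks roughly like this (a sketch using the standard qubes.ext.handler decorator; the extension name and handler body are made up):

```python
import qubes.ext

class ExampleExtension(qubes.ext.Extension):
    # ':pci' is the concrete value of the ':bus' (aka ':class')
    # placeholder from the qubes/devices.py documentation
    @qubes.ext.handler('device-list:pci')
    def on_device_list_pci(self, vm, event):
        # a real handler yields one DeviceInfo object per PCI
        # device; this stub yields nothing
        yield from ()
```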

But not only does this extension not appear to provide such a handler for device-attach:pci, I also could not see anywhere else such an event could be handled. I could use a hint :slight_smile:

My quest here is to find out how to provide a ROM file for GPU passthrough (which appears to be necessary for my RENOIR iGPU).

Also, qubes-devices documents that DeviceInfo should handle the device-attach:bus and device-detach:bus events to perform the actual attach/detach. That does not seem to be the case either?

After more digging, I realized that for pci (and others, including block) the work is done in the device-pre-attach handler rather than in device-attach, though nothing seems to be done unless vm.is_running(). If the VM is running, the device is bound to pciback and attached to the domain by inserting XML derived from the libvirt/devices/pci.xml template into libvirt's domain description. That is consistent with the device in self.assignments() check in DeviceCollection.attach, which prevents a device from being attached twice.
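
In outline, that handler looks like this (a paraphrase of qubes/ext/pci.py with error handling and device caching elided; attachDevice is the libvirt-python hot-plug call):

```python
import qubes.ext

# Rough paraphrase of the PCI extension (qubes/ext/pci.py);
# simplified, not a verbatim copy.
class PCIDeviceExtension(qubes.ext.Extension):
    @qubes.ext.handler('device-pre-attach:pci')
    def on_device_pre_attached_pci(self, vm, event, device, options):
        if not vm.is_running():
            return  # nothing done for a halted VM (see the question below)
        # hand the device over to the pciback driver ...
        self.bind_pci_to_pciback(device)  # helper in the real extension
        # ... then hot-plug it into the running domain by inserting
        # XML rendered from the libvirt/devices/pci.xml template
        vm.libvirt_domain.attachDevice(
            vm.app.env.get_template('libvirt/devices/pci.xml').render(
                device=device, vm=vm, options=options))
```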

While this seems enough to get me going, I'm still puzzled by the not vm.is_running() case, which seems to do nothing. Could it just be unrelated to the user's idea of a running VM?

I tried to add traces to dig further, but could not find where vm.log.info() and friends end up. I could find traces in qubes.log that look like they were produced by such calls, but no output has reached this file in several months (a side effect of the upgrade from 4.0.4 to 4.1 beta?). I finally resorted to writing to a separate file, and found out a bit more from there, but I still don't see where the PCI device attachment effectively gets done.
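
For anyone attempting the same: vm.log is a plain Python logging.Logger, so when patching qubesd code you can capture its output independently of dom0's logging configuration by attaching your own handler (a sketch; the log path is arbitrary and 'vm' is the QubesVM instance available at the patched call site):

```python
import logging

# attach our own handler so messages land somewhere we control
debug_handler = logging.FileHandler('/var/log/qubes/pci-debug.log')
debug_handler.setFormatter(
    logging.Formatter('%(asctime)s %(name)s: %(message)s'))
vm.log.addHandler(debug_handler)
vm.log.setLevel(logging.DEBUG)
vm.log.info('tracing device attach for %s', vm.name)
```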

  • template and standalone VMs get their libvirt XML written out to disk under /var/lib/xen/, including any attached PCI devices
  • AppVMs, OTOH, do not, as if theirs were generated each time from qubes.xml (not really sure why, or even whether that's 100% accurate; @marmarek, do you confirm? It would be great to be able to look at the generated libvirt configuration here too!)

For all the VM types, libvirt config is handled the same way: it's generated and sent to the libvirt daemon at VM start time, and remains there until the VM is removed. But it's updated only when the VM is running; if the VM is halted, it may not be there at all, or may be outdated.
You can get it with virsh -c xen dumpxml <vmname> (after the VM has been started).

As for the PCI devices: those assigned before VM start are included in the libvirt template directly: https://github.com/QubesOS/qubes-core-admin/blob/master/templates/libvirt/xen.xml#L157

The device-pre-attach:pci handler (https://github.com/QubesOS/qubes-core-admin/blob/master/qubes/ext/pci.py) is relevant only for dynamic attach (when the VM is already running).
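
So to exercise that handler, the attach has to happen while the VM is running, e.g. through the Admin API client library. A hedged sketch (the VM name and the 03_00.0 ident are made up, and the DeviceAssignment signature is from memory, so double-check against qubesadmin.devices):

```python
import qubesadmin
from qubesadmin.devices import DeviceAssignment

app = qubesadmin.Qubes()
vm = app.domains['work']  # hypothetical, already-running VM

# dom0-backed PCI device 03:00.0; idents use '_' in place of ':'/'.'
assignment = DeviceAssignment(app.domains['dom0'], '03_00.0',
                              persistent=False)

# goes through admin.vm.device.pci.Attach, which makes qubesd fire
# device-pre-attach:pci and hot-plug the device
vm.devices['pci'].attach(assignment)
```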