Setting a VM that uses its own compiled kernel as a gateway prevents other VMs from booting

I’m using my own compiled kernel inside a Debian VM along with pvgrub2-pvh. It runs fine, but when I set this VM to provide networking, other VMs that use it as their gateway will not start. Is anyone willing to help me?

Do you see an error notification with the reason for the qube start failure?

Device drivers? It might help to diagnose further if you tried with fedora-38 as well. I have such an f-38 networking qube and it works well.

I’m using Kicksecure, and I’ve found that if I use my own compiled kernel, other VMs that set this VM as their gateway refuse to boot.

Can you give more details?
What happens when you try to start your qubes connected to the Kicksecure netvm with the custom kernel (let’s call it sys-net-ks-custom-kernel)?
Does nothing happen, as if you didn’t even try to start a qube?
Does the qube start but then fail with some error? Do you see any notifications with the error message?
Are your qubes connected to sys-net-ks-custom-kernel also based on kicksecure template with custom kernel?
What if you try to start the qubes based on some default Qubes OS template (e.g. debian-12 without custom kernel) with sys-net-ks-custom-kernel as netvm? Will they fail?

I used my own compiled kernel for the Kicksecure VM, together with pvgrub2-pvh, and everything works fine except for one error: if other VMs set this VM as their gateway, those VMs get stuck during startup in the Qubes admin panel for about 5 seconds, and then report an error saying that the VM start was unsuccessful. If I use only pvgrub2-pvh without the custom kernel, other VMs that have this VM as their gateway boot normally. I think this is a very serious error; please ask the development team to verify whether there is a bug, thanks.

@Felicia I use the Qubes Mirage firewall, which uses an entirely custom kernel (written in OCaml) as the gateway for all my qubes, and it works perfectly fine. If you boot your qube with your custom kernel, does it boot? Can you get access to that qube through a terminal or a console? Are there any errors?

No errors, it works. But if other VMs use it as a gateway, those VMs can’t start. I’m using a template-based PVH VM, not an HVM.

Check the logs of the qube that failed to start in a dom0 terminal with sudo journalctl -b, and the qube boot log file /var/log/xen/console/guest-qubename.log (you can also see this log in Qube Manager). See if there are more specific errors.
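As a sketch of what that filtering can look like, here is a tiny made-up sample of a guest console log being grepped for failure lines. The log content and path are illustrative only; in dom0 the real file would be /var/log/xen/console/guest-qubename.log, with "qubename" replaced by your qube’s name:

```shell
# Illustrative only: create a small sample guest console log, then filter it
# the same way you would filter the real one in dom0.
cat > /tmp/guest-sample.log <<'EOF'
[2024-01-01 10:00:00] Linux version 6.1.0-custom (builder@qube)
libxl: error: libxl_create.c: domain creation failed
EOF

# Look for libxl/error lines that explain the start failure:
grep -iE 'libxl|error' /tmp/guest-sample.log
```

The same grep pattern applied to the real console log (or to sudo journalctl -b output) should surface the lines worth posting here.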

It’s possible that some of the Qubes daemons in that qube failed to start due to the custom kernel missing some component. If you check the boot log for that qube it should be fairly easy to pinpoint the cause. Worst case scenario compare the logs between the standard kernel and the custom kernel to spot the differences.
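One concrete component worth checking, as a guess rather than a confirmed cause: a qube that provides networking has to carry the Xen network backend driver (CONFIG_XEN_NETDEV_BACKEND), which a minimal custom config can easily omit. The sample config below is made up for illustration; in the real qube you would grep /boot/config-$(uname -r) instead:

```shell
# Illustrative sample kernel config; in a real qube, inspect
# /boot/config-$(uname -r) for the same options.
cat > /tmp/sample-kernel-config <<'EOF'
CONFIG_XEN_NETDEV_BACKEND=m
CONFIG_XEN_BLKDEV_BACKEND=m
CONFIG_XEN_NETDEV_FRONTEND=y
EOF

# A NetVM needs the *backend* network driver, built in (=y) or as a module (=m):
grep 'CONFIG_XEN_NETDEV_BACKEND' /tmp/sample-kernel-config
```

If the option is missing or set to "is not set" in your custom config, that would fit the symptom of downstream qubes failing to start while the NetVM itself runs fine.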

dom0 qubesd[6039]: unhandled exception while calling src=b’dom0’ meth=b’admin.vm.Start’ dest=b’qubename’ arg=b’’ len(untrusted_payload)=0.

It also indicates an internal error: libxenlight was unable to create a new domain.

Can you post the full error log? Maybe there are some clues there. Just "unhandled exception" is too broad a problem.
The full error log with traceback should look like this:

Dec 27 11:55:07 dom0 qubesd[1822]: unhandled exception while calling src=b'dom0' meth=b'admin.vm.device.pci.Available' dest=b'dom0' arg=b'' len(untrusted_payload)=0
Dec 27 11:55:07 dom0 qubesd[1822]: Traceback (most recent call last):
Dec 27 11:55:07 dom0 qubesd[1822]:   File "/usr/lib/python3.8/site-packages/qubes/api/", line 286, in respond
Dec 27 11:55:07 dom0 qubesd[1822]:     response = await self.mgmt.execute(
Dec 27 11:55:07 dom0 qubesd[1822]:   File "/usr/lib/python3.8/site-packages/qubes/api/", line 1217, in vm_device_available
Dec 27 11:55:07 dom0 qubesd[1822]:     devices = self.dest.devices[devclass].available()
Dec 27 11:55:07 dom0 qubesd[1822]:   File "/usr/lib/python3.8/site-packages/qubes/", line 376, in available
Dec 27 11:55:07 dom0 qubesd[1822]:     devices = self._vm.fire_event('device-list:' + self._bus)
Dec 27 11:55:07 dom0 qubesd[1822]:   File "/usr/lib/python3.8/site-packages/qubes/", line 195, in fire_event
Dec 27 11:55:07 dom0 qubesd[1822]:     sync_effects, async_effects = self._fire_event(event, kwargs,
Dec 27 11:55:07 dom0 qubesd[1822]:   File "/usr/lib/python3.8/site-packages/qubes/", line 168, in _fire_event
Dec 27 11:55:07 dom0 qubesd[1822]:     effects.extend(effect)
Dec 27 11:55:07 dom0 qubesd[1822]:   File "/usr/lib/python3.8/site-packages/qubes/ext/", line 191, in on_device_list_pci
Dec 27 11:55:07 dom0 qubesd[1822]:     yield PCIDevice(vm, None, libvirt_name=libvirt_name)
Dec 27 11:55:07 dom0 qubesd[1822]:   File "/usr/lib/python3.8/site-packages/qubes/ext/", line 140, in __init__
Dec 27 11:55:07 dom0 qubesd[1822]:     assert dev_match
Dec 27 11:55:07 dom0 qubesd[1822]: AssertionError

Maybe also some related messages before and after it.
You can also check for libxl errors here:

I can’t understand this tutorial. I packaged the kernel as a .deb and installed it using the dpkg -i command; am I doing something wrong? Does this article mean that VMs can’t install custom kernels, only custom modules built with DKMS?

I think building with DKMS was required because Qubes OS needed its custom u2mfn kernel module.
According to this issue, starting with Qubes OS 4.1 this kernel module is no longer required:

But I didn’t test it so I’m not sure.

I have only ever built kernels with make; can you tell me the instructions for building with DKMS? Also, is it possible to move the kernel to this VM after building it in another VM?

I didn’t try it myself, but my guess would be that you can install the kernel image and headers .deb files with dpkg, then run DKMS to build the kernel modules for this custom kernel, and then update the initramfs following the guide that you linked.
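A dry-run sketch of that sequence; all filenames and the version string are hypothetical examples of what your build might produce, and the run() wrapper just prints each step so you can review the sequence before removing it and executing the commands for real inside the qube:

```shell
# Print each step instead of executing it, so the sequence is easy to review.
run() { echo "+ $*"; }

# 1. Install the custom kernel image and headers built earlier:
run sudo dpkg -i linux-image-6.1.0-custom_amd64.deb linux-headers-6.1.0-custom_amd64.deb
# 2. Build the kernel module(s) Qubes OS needs for that kernel with DKMS:
run sudo dkms autoinstall -k 6.1.0-custom
# 3. Regenerate the initramfs for the new kernel:
run sudo update-initramfs -u -k 6.1.0-custom
```

The version string passed to -k must match exactly what the installed kernel reports as uname -r, otherwise DKMS builds modules for the wrong tree.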

In which directory does the dkms command run? What are the objects? Can you give an example?

DKMS doesn’t need to be run in any specific directory if you just want to build the modules for a specific kernel. You just need to run the commands from the guide:

sudo dkms autoinstall -k 3.16.0-4-amd64
sudo update-initramfs -u

Replace 3.16.0-4-amd64 with the version of your custom kernel.

Does this tutorial mean that it doesn’t support installing self-compiled kernels, only self-compiled modules?

My understanding is that it doesn’t care how you install your kernel; you just need to build and install the kernel module that Qubes OS needs for this kernel with DKMS (or some other way, but DKMS is just easier).
