Originally posted this as a reply elsewhere, but thought it may be broadly useful to folks who would like to try out Qubes in a VM with HVM support (with fully functional L2 nested VMs).
I’ve been able to get Qubes up and running in a VM using the 4.2 weekly build ISO (haven’t tried 4.1 yet).
I used KVM/QEMU configured via VMM (Virtual Machine Manager) running on bare-metal Fedora 37, with some config changes for compatibility and to support vIOMMU usage.
I am using an AMD Zen 2 family CPU, and the bare metal obviously has virtualization and IOMMU enabled in the BIOS. I installed Fedora 37 using the standard ISO+Rufus+USB-key route and fully updated it. I don't recall if VMM was already installed or not; if not, it's pretty easy to find and install from the Fedora repos using the Software application.
I created the VM using the Generic Linux 2022 option, set memory/storage/etc., and then checked the easy-to-miss "Customize configuration before install" box on the page with the Finish button. Note that sometimes when I do that, the ISO I chose in the wizard ends up unset and I have to set it again under the SATA CDROM 1 node.
VMM needed some additional manual configs (via UI clicky or UI XML tab, depending):
_Verified the Generic Linux 2022 baseline. This should default to a virtual Q35 chipset, which is the only way Intel/AMD vIOMMU can be supported, at least from what I have read.
_Changed firmware from BIOS to UEFI, as this may be necessary for vIOMMU support.
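If you prefer editing the XML directly, recent libvirt can select UEFI automatically with the firmware attribute; a sketch of what that clause looks like (machine alias assumed from the Q35 baseline above):

```xml
<!-- Sketch: asks libvirt to auto-select a UEFI firmware image (libvirt >= 5.2) -->
<os firmware="efi">
  <type arch="x86_64" machine="q35">hvm</type>
</os>
```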
_For now I set the CPU to one socket and eight cores (1 thread each); I can experiment up to 32 later with the current hardware. Depending on your processor layout you may want to experiment with using sockets instead of cores. From what I've read, vIOMMU can be sensitive to NUMA or quasi-NUMA issues (though perhaps only if actually passing in real hardware? I'm not there yet). One layout may have better performance but less stability than the other, depending on hardware.
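For reference, the socket/core/thread split lives in the cpu element of the XML; a sketch of the topology I described (host-passthrough mode is my assumption, not something VMM forces):

```xml
<!-- Sketch: 1 socket x 8 cores x 1 thread; adjust to experiment with sockets vs. cores -->
<cpu mode="host-passthrough">
  <topology sockets="1" cores="8" threads="1"/>
</cpu>
```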
_Changed the NIC from the default virtio (which did not work; it locked up sys-net) to e1000.
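In the XML this is just the model type on the interface element; a sketch (the "default" network name is an assumption from a stock VMM setup):

```xml
<interface type="network">
  <source network="default"/>
  <!-- e1000 instead of the default virtio model, which locked up sys-net -->
  <model type="e1000"/>
</interface>
```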
_The initial setup had both a USB tablet as input0 and a PS/2 mouse as input1. When I finally got vIOMMU working, the pointer stopped working (presumably because sys-usb started up). To fix that I removed the tablet device and set the PS/2 mouse as input0.
_Changed video from the default virtio (which did not work) to VGA. Upped vram from 16384 to 65536 in the XML for larger screen options. Video performance is poor; I may circle back to this later.
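The corresponding XML clause is the video model; a sketch of the change described above:

```xml
<video>
  <!-- VGA instead of the default virtio; vram raised from 16384 to 65536 KiB -->
  <model type="vga" vram="65536" heads="1"/>
</video>
```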
_Added an RNG device backed by /dev/urandom.
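VMM adds this clause for you, but for reference it looks like:

```xml
<!-- virtio RNG device fed from the host's /dev/urandom -->
<rng model="virtio">
  <backend model="random">/dev/urandom</backend>
</rng>
```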
_In VMM's Overview, where you can edit the entire XML file, you also need to add a manual clause in the devices section (closer to the bottom of the file).
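The clause itself isn't shown above; based on the libvirt domain XML docs, I believe what's needed is the emulated IOMMU device (QEMU's emulated vIOMMU is the Intel model even on an AMD host), placed just before the closing devices tag, roughly:

```xml
<!-- Sketch: emulated vIOMMU so nested Qubes sees an IOMMU; driver attributes are my assumption -->
<iommu model="intel">
  <driver intremap="on" caching_mode="on"/>
</iommu>
```

Note that per the libvirt docs, intremap="on" also requires the split I/O APIC, i.e. an ioapic element with driver="qemu" under the features section.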
Lastly, in the VM's UEFI setup on first boot, disable Secure Boot so that the ISO can actually be booted!
Qubes's grub config needs an additional line (what I have below works, though it includes some ignored bits), and then grub has to be rebuilt:
#add next line at the end of /etc/default/grub
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX iommu=on iommu=pt amd_iommu=on intel_iommu=on"
grub2-mkconfig -o /etc/grub2.cfg
I also created the file /etc/modprobe.d/vfio.conf, but I'm not sure it was necessary. Its contents:
options vfio_iommu_type1 allow_unsafe_interrupts=1
PS: The above solution integrates some random things I came across along with some of the advice here:
PPS: If you haven't used VMM before, it's really easy to miss the View/Details menu option, which is how you continue to modify the configuration between VM restarts. View/Console brings you back to the VM's UI.