Customizing libvirt XML settings for a single VM

I would like to experiment with different flags/options for block devices and PCI passthrough devices, as well as a few other low-level items that are abstracted away by the convenient interface that Qubes provides.

After looking into how qvm-features works, I determined that there are Jinja2 templates in /usr/share/qubes/templates/libvirt/ containing a bunch of logic to render a libvirt XML file dynamically, based in part on qvm-features settings. This is neat, and because I’m familiar with Jinja2, it’s very accessible to me.

However, before I start tweaking global or semi-global templates and adding my own custom qvm-features to make it a nice clean solution, I would like to experiment on a one-off basis, without worrying about template logic if possible.

Is this as simple as editing the rendered form of the XML files in /etc/libvirt/libxl? Or are those freshly rendered each time a VM boots, so that changes there are lost when stopping/starting a VM? Either way, what’s the recommended way to tweak individual settings within the libvirt XML for a specific VM? Should I just bite the bullet and guard changes made directly to the templates behind a qvm-feature like oneoff_testing?

The documentation for libvirt XML is pretty good, so I don’t think working with the files will be difficult. I’m just looking for the best way to do this in Qubes.

I’m a moron

I see now that the top of each VM file says to use virsh edit <vmname>.
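For reference, that looks something like this in dom0 (I believe the explicit -c xen:/// connection URI is either needed or at least harmless, depending on your libvirt defaults):

# In dom0; -c xen:/// explicitly selects the Xen libvirt driver
virsh -c xen:/// edit personal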

One of the things I would like to do is test performance with different disk I/O settings; Qubes defers most of these to whatever the libvirt default is. I would like to benchmark the different io and iothread settings described here, for example, and any others that may be relevant; see the sketch below.
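For context, this is the kind of disk driver tuning I mean, per the libvirt domain XML documentation (whether the Xen/libxl driver actually honors each attribute is part of what I want to find out):

<!-- io= selects the AIO mode: "threads" or "native" (newer QEMU also has "io_uring") -->
<driver name="phy" io="native" />
<!-- iothread= assigns the disk to a dedicated I/O thread; on QEMU this
     also requires an <iothreads> element elsewhere in the domain XML -->
<driver name="phy" io="native" iothread="2" />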

I was also looking for a way to persistently set CPU pinning, which I think I can also do this way.
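For reference, the standard libvirt domain XML for pinning looks like this (the cpuset values are just an illustration):

<vcpu placement="static">4</vcpu>
<cputune>
    <!-- pin each virtual CPU to a specific physical CPU -->
    <vcpupin vcpu="0" cpuset="2" />
    <vcpupin vcpu="1" cpuset="3" />
    <vcpupin vcpu="2" cpuset="4" />
    <vcpupin vcpu="3" cpuset="5" />
</cputune>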

If anyone else has already looked into disk performance, I would be happy to know what you found. I have PCIe 4.0 and PCIe 5.0 disks for my pools, and I’m determined to squeeze every dollar out of them. They’re expensive :slight_smile:

This may not yield anything useful, but at least I’ll learn something.

Last time I checked, editing with virsh doesn’t persist the change, because the configuration is regenerated at boot time.
This is covered in the Qubes dev documentation if you want to take a look:
https://dev.qubes-os.org/projects/core-admin/en/latest/libvirt.html


Yep, you’re right. Just figured this out :slight_smile:

I ended up making a quick modification directly to the xen.xml template in a hacky way, keyed on a hard-coded VM name (“personal”):

--- /root/xen.xml	2024-11-24 13:30:33.505233927 -0500
+++ xen.xml	2024-11-24 13:41:18.061307463 -0500
@@ -128,7 +128,11 @@
             %}
             {% for device in vm.block_devices %}
                 <disk type="block" device="{{ device.devtype }}">
+                    {% if vm.name == 'personal' and device.name == 'private' %}
+                    <driver name="phy" io="native" iothread="2" />
+                    {% else %}
                     <driver name="phy" />
+                    {% endif %}
                     <source dev="{{ device.path }}" />
                     {% if device.name == 'root' %}
                         <target dev="xvda" />

With io="native" and iothread="2", the read and write benchmarks were ~10% faster for throughput as well as IOPS, which is interesting. It was just a quick test, so I would need to be a little more scientific, but it involved only a single run of the following stolen fio wrapper:

#!/bin/bash
# Credit: https://cloud.google.com/compute/docs/disks/benchmarking-pd-performance

# Fail early with a usage message if no test directory was given
TEST_DIR="${1:?usage: $0 <test-directory>}"
mkdir -p "$TEST_DIR"

# Test write throughput by performing sequential writes with multiple parallel streams (8+), using an I/O block size of 1 MB and an I/O depth of at least 64:
fio \
  --name=write_throughput \
  --directory="$TEST_DIR" \
  --numjobs=4 \
  --size=100M \
  --time_based \
  --runtime=60s \
  --ramp_time=2s \
  --ioengine=libaio \
  --direct=1 \
  --verify=0 \
  --bs=1M \
  --iodepth=64 \
  --rw=write \
  --group_reporting=1
# Clean up
rm -f "$TEST_DIR"/write* "$TEST_DIR"/read*

# Test write IOPS by performing random writes, using an I/O block size of 4 KB and an I/O depth of at least 64:
fio \
  --name=write_iops \
  --directory="$TEST_DIR" \
  --size=100M \
  --time_based \
  --runtime=60s \
  --ramp_time=2s \
  --ioengine=libaio \
  --direct=1 \
  --verify=0 \
  --bs=4K \
  --iodepth=64 \
  --rw=randwrite \
  --group_reporting=1
# Clean up
rm -f "$TEST_DIR"/write* "$TEST_DIR"/read*

# Test read throughput by performing sequential reads with multiple parallel streams (8+), using an I/O block size of 1 MB and an I/O depth of at least 64:
fio \
  --name=read_throughput \
  --directory="$TEST_DIR" \
  --numjobs=4 \
  --size=100M \
  --time_based \
  --runtime=60s \
  --ramp_time=2s \
  --ioengine=libaio \
  --direct=1 \
  --verify=0 \
  --bs=1M \
  --iodepth=64 \
  --rw=read \
  --group_reporting=1
# Clean up
rm -f "$TEST_DIR"/write* "$TEST_DIR"/read*

# Test read IOPS by performing random reads, using an I/O block size of 4 KB and an I/O depth of at least 64:
fio \
  --name=read_iops \
  --directory="$TEST_DIR" \
  --size=100M \
  --time_based \
  --runtime=60s \
  --ramp_time=2s \
  --ioengine=libaio \
  --direct=1 \
  --verify=0 \
  --bs=4K \
  --iodepth=64 \
  --rw=randread \
  --group_reporting=1

# Clean up
rm -f "$TEST_DIR"/write* "$TEST_DIR"/read*
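For what it’s worth, I saved it as disk-bench.sh (my name for it) and ran it inside the VM against a directory on the volume under test:

# run inside the VM; the argument is a directory on the disk being benchmarked
./disk-bench.sh /home/user/fio-test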

It has me a little curious now.

I can’t emphasize enough, though, that this was not a thorough scientific test. Each run was done after a fresh reboot, so caching shouldn’t be an issue, but I also only ran each test once with each setting (default, then with the io and iothread settings tweaked a bit).

Thank you, I don’t know how I missed those docs. I see now that what I’m looking for is this bit:

In order of increasing precedence: the main template, from which the config is generated, is /usr/share/qubes/templates/libvirt/xen.xml. The distributor may put a file at /usr/share/qubes/templates/libvirt/xen-dist.xml to override this file. The user may put a file at either /etc/qubes/templates/libvirt/xen-user.xml or /etc/qubes/templates/libvirt/xen/by-name/<name>.xml, where <name> is the full name of the domain. Wildcards are not supported but symlinks are.
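So the clean replacement for my hard-coded hack should be a per-VM template at /etc/qubes/templates/libvirt/xen/by-name/personal.xml. Since the templates are Jinja2, template inheritance means I only need to override the block I care about. A sketch (the devices block name is taken from the stock xen.xml; to change an existing element like the disk driver line, I would paste in an edited copy of the original block body rather than call super()):

{% extends 'libvirt/xen.xml' %}
{% block devices %}
    {{ super() }}
    <!-- extra device XML appended after the stock devices; to modify
         existing elements, replace super() with an edited copy of the
         original block body from xen.xml -->
{% endblock %}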