Thanks. So if I understand correctly, its top-level definition is in templates.json:
{
  "description": "basic tests on ZFS filesystem (zfs pool)",
  "name": "system_tests_basic_vm_qrexec_gui_zfs",
  "settings": [
    {
      "key": "NUMDISKS",
      "value": "2"
    },
    {
      "key": "PARTITIONING",
      "value": "zfs"
    },
    {
      "key": "START_AFTER_TEST",
      "value": "system_tests_update"
    },
    {
      "key": "SYSTEM_TESTS",
      "value": "qubes.tests.integ.basic qubes.tests.integ.vm_qrexec_gui:14400"
    }
  ]
},
main.pm then directs it to "switch_pool.pm" because we set the PARTITIONING=zfs flag.
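I haven't traced the actual main.pm logic, so this is only a guess at roughly what that dispatch looks like (the condition and the path are assumptions on my part; the real scheduling code may differ):

# rough guess at the main.pm dispatch; main.pm already has testapi's get_var
# and os-autoinst's autotest::loadtest available
if (get_var('PARTITIONING')) {
    # only schedule the pool-switching module when a non-default pool is requested
    autotest::loadtest 'tests/switch_pool.pm';
}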
Then in switch_pool.pm, we set up the partitioning, and as long as no command returns an error, the module counts as passing.
} elsif (get_var('PARTITIONING') eq 'zfs') {
    assert_script_run('qubes-dom0-update -y zfs', timeout => 900);
    assert_script_run('modprobe zfs zfs_arc_max=67108864');
    assert_script_run('printf "label: gpt\n,,L" | sfdisk /dev/sdb');
    assert_script_run('zpool create -f testpool /dev/sdb1');
    assert_script_run('qvm-pool add --option container=testpool pool-test zfs');
Then it moves all the templates to the new pool (see the sketch below).
Then presumably it runs all the other tests after that, with any templates being pulled from the new pool.
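I haven't looked at how that migration step is actually implemented; here is a minimal sketch of what it could be, assuming it simply clones every template into the new pool with qvm-clone -P (the name suffix and the qvm-ls filtering are my own illustration):

# hypothetical sketch of the template migration; the real module may do this
# differently (e.g. by changing the default pool and reinstalling templates)
my @templates = split /\n/, script_output(
    "qvm-ls --raw-data --fields NAME,CLASS | grep TemplateVM | cut -d'|' -f1");
for my $tpl (@templates) {
    # copy each template into the zfs-backed pool
    assert_script_run("qvm-clone -P pool-test $tpl $tpl-zfs", timeout => 3600);
}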
Now, presumably, if we changed NUMDISKS to 5:
"key": "NUMDISKS",
"value": "5"
and changed:
assert_script_run('printf "label: gpt\n,,L" | sfdisk /dev/sdb');
assert_script_run('zpool create -f testpool /dev/sdb1');
to:
assert_script_run('printf "label: gpt\n,,L" | sfdisk /dev/sdb');
assert_script_run('printf "label: gpt\n,,L" | sfdisk /dev/sdc');
assert_script_run('printf "label: gpt\n,,L" | sfdisk /dev/sdd');
assert_script_run('printf "label: gpt\n,,L" | sfdisk /dev/sde');
### pick ONE of the following, depending on what we want to measure:
### for write performance (plain 4-disk stripe) do this:
assert_script_run('zpool create -f testpool /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1');
### for read performance (4-way mirror) do this:
assert_script_run('zpool create -f testpool mirror /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1');
### for 50% redundancy (raidz2, 2 of the 4 disks' worth of parity) do this:
assert_script_run('zpool create -f testpool raidz2 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1');
we could test it with 4 drives.
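If we wanted to switch between those three layouts from templates.json instead of editing switch_pool.pm each time, a minimal sketch could look like this (ZFS_LAYOUT is a setting name I'm inventing here, not something that already exists):

# hypothetical sketch: pick the vdev layout from a made-up ZFS_LAYOUT job setting
my @disks = ('b' .. 'e');
assert_script_run("printf 'label: gpt\\n,,L' | sfdisk /dev/sd$_") for @disks;

my @parts  = map { "/dev/sd${_}1" } @disks;
my $layout = get_var('ZFS_LAYOUT', '');    # '' (stripe), 'mirror' or 'raidz2'
assert_script_run("zpool create -f testpool $layout @parts");

That way the write/read/redundancy variants could just be three separate entries in templates.json with different ZFS_LAYOUT values.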
Now I don't see times/durations getting returned anywhere, but I do see timestamps in some of the logs, so times/durations could be computed manually. For example, the duration of the template migration could serve as a zfs write speed test (with the confounding factor that lvm read speed is being exercised at the same time). Presumably the time it takes a later test to launch and run a VM using a template from zfs could serve as a zfs read speed test.
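If we wanted explicit numbers instead of doing arithmetic on log timestamps, the module could record the duration itself; a sketch, reusing the hypothetical clone step from above (record_info and Time::HiRes are available in openQA test modules):

# hypothetical timing wrapper around one migration step
use Time::HiRes qw(time);

my $tpl   = 'fedora-40';    # example template name, just for illustration
my $start = time;
assert_script_run("qvm-clone -P pool-test $tpl $tpl-zfs", timeout => 3600);
record_info('zfs write timing',
    sprintf('cloning %s into the zfs pool took %.1f s', $tpl, time - $start));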
However, you mentioned using KVM. This implies that none of the performance tests would actually be valid, because all the "parallel disks" that would be giving the performance gain are really just virtual disks backed by the same underlying physical device.
Actually testing for zfs/btrfs performance enhancements would seem to need a specific hardware setup, and I don't know whether they have that.