OK. First of all, it appears to be a SATA drive. The maximum reported read bandwidth is ~433 MB/s for the varlibqubes pool sequential Q1T1 read. And the results certainly vary a lot between runs:
If you were on r4.3, you could have tried the same test with revisions_to_keep set to -1 to disable volume snapshots. CoW might be contributing to the performance degradation.
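For reference, roughly how that would look from dom0 (testvm is a placeholder qube name, not one from this thread):

```
# Disable snapshot revisions for the qube's private volume (r4.3+):
qvm-volume config testvm:private revisions_to_keep -1

# Verify the setting took effect:
qvm-volume info testvm:private
```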
I have an ignorant question here…
Do the Qubes OpenQA tests happen sometimes/always on a bare Xen hypervisor?
I tried reading about it, and it sounded a bit like it was in a nested configuration inside KVM, but I wasn’t sure at all.
Whether that would affect these tests is another question!
Another interesting variable is whether the reader/writer can use hugepages for large reads/writes.
I read somewhere that memory ballooning can prevent it due to fragmentation. Even if you are not interested in using fixed memory, it could skew the benchmarks if it was sometimes active and sometimes not.
I think it would also require a large I/O block size (>2M?) and hugepage support to be enabled in the kernel (the default?), but it seems Xen supports it transparently, according to this page at xenproject.org.
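If anyone wants to check this inside a qube, a quick sketch (these are standard Linux sysfs/procfs paths, nothing Qubes-specific):

```
# Is transparent hugepage support enabled in the guest kernel?
# Output like "[always] madvise never" means THP is active.
cat /sys/kernel/mm/transparent_hugepage/enabled

# How much anonymous memory is currently backed by hugepages?
grep AnonHugePages /proc/meminfo
```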
[Edit: I was thinking about this after the recent “7z benchmark” thread… maybe there is some interest in comparing multiple benchmarking methods]
[Edit2: note to self - does memory ballooning align allocations to hugepage boundaries, and could that make a performance difference? (… probably already taken care of!)]
@alimirjamali
I’m using 4.3; setting revisions_to_keep to -1 doesn’t seem to change anything.
Running kdiskmark uses a lot of CPU in the VM; I think the bottleneck is the fio process. Adding more cores does not seem to improve performance, so it could be limited by single-thread performance.
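For what it’s worth, that theory could be tested by running fio directly; a sketch along these lines (the file path is a placeholder, and the parameters only approximate kdiskmark’s SEQ1M Q1T1 preset):

```
# Sequential 1 MiB reads, queue depth 1, single job (roughly SEQ1M Q1T1):
fio --name=seq-read --filename=/home/user/testfile --size=4G \
    --rw=read --bs=1M --iodepth=1 --numjobs=1 \
    --ioengine=libaio --direct=1 --runtime=30 --time_based

# Same test with 4 jobs; if aggregate bandwidth scales up, a single
# fio thread was the bottleneck rather than the disk itself.
fio --name=seq-read --filename=/home/user/testfile --size=4G \
    --rw=read --bs=1M --iodepth=1 --numjobs=4 --group_reporting \
    --ioengine=libaio --direct=1 --runtime=30 --time_based
```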
@phceac
I have a lot of free memory, so I don’t think Xen is relocating any memory.
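In case it matters, memory balancing could also be ruled out entirely for the test qube; a sketch from dom0 (testvm and the 4000 MB figure are placeholders):

```
# maxmem = 0 disables dynamic memory balancing for this qube,
# so Xen will not balloon its memory during the benchmark:
qvm-prefs testvm maxmem 0

# Give it a fixed allocation instead:
qvm-prefs testvm memory 4000
```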