There seems to be an error running reencrypt without direct I/O: it shows 50 MiB at a 512-byte sector size, probably a hardware issue, but I’m clearly seeing a 500h++ ETA, hmm…
But as you can see in the new benchmark at 4096 without direct I/O, it looks better.
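For reference, here is roughly how I compare buffered vs. direct I/O (a minimal sketch; the test file path is a placeholder and should sit on the filesystem being measured):

```shell
# Sequential write, buffered (page cache) vs. direct I/O.
# ./bench.img is a placeholder; put it on the filesystem under test.
dd if=/dev/zero of=./bench.img bs=1M count=256 conv=fsync 2>&1 | tail -n 1
dd if=/dev/zero of=./bench.img bs=1M count=256 oflag=direct 2>&1 | tail -n 1
rm -f ./bench.img
```

The last line of each dd run shows the throughput; direct I/O bypasses the page cache, so it reflects the device and sector-size behavior more honestly.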
I have tried browsing, watching movies, etc., and everything seems fine.
FYI, loop devices are still using a 512-byte sector size. I have a workaround for that, but I think it’s too complicated for a non-technical person to apply. If you want to open an issue, kindly open one for this too.
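The kind of workaround I mean looks roughly like this (a sketch, assuming util-linux >= 2.30 and root; the backing image path is just an example):

```shell
# Recreate a loop device with a 4096-byte logical sector size.
truncate -s 64M disk.img                                   # example backing file
LOOPDEV=$(losetup --find --show --sector-size 4096 disk.img)
blockdev --getss "$LOOPDEV"                                # should report 4096
losetup -d "$LOOPDEV"
rm -f disk.img
```

Doing this persistently for Qubes-managed volumes is the complicated part, since the loop devices are set up automatically at VM start.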
And let’s see for part 2 on the FireCuda NVMe; maybe I will make a guide for this.
Qubes OS should definitely default to 4096 byte sectors unless it has reason to believe a different sector size is better. My understanding is that 512 byte sectors are almost always emulated nowadays, with the actual sector size being 4096 bytes.
Yes, reformatting the filesystem would automatically use 4096 if the sector size is already 4096. I have at least 4 SSDs with 4K support, but none of them use 4096 by default, perhaps because of compatibility issues; that’s why many vendors don’t enable it by default…
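You can see whether a drive is 512e (4K physical behind 512-byte logical emulation) or true 4Kn from sysfs; this loops over all block devices:

```shell
# A 512e drive reports logical=512 physical=4096; a 4Kn drive reports 4096/4096.
for q in /sys/block/*/queue; do
  dev=$(basename "$(dirname "$q")")
  printf '%s: logical=%s physical=%s\n' "$dev" \
    "$(cat "$q/logical_block_size")" "$(cat "$q/physical_block_size")"
done
```

Vendors shipping 512e by default is exactly the compatibility hedge mentioned above: old tools assume 512-byte logical sectors.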
lvcreate -V20G -T qubes_dom0/root-pool -n root (Why isn’t 20G enough for simple recovery, as long as multiple templates aren’t being installed at the same time from dom0? I am not ready to reserve 40G for dom0 and preferred it when it was in the same vm-pool, growing dynamically with better warnings, but I agree with it being in a distinct pool now.)
lvcreate -VxxxG -T qubes_dom0/vm-pool -n vm ( ex : -V60G / -V360G )
mkfs.ext4 /dev/qubes_dom0/vm (no need to specify the sector size if your disk already uses 4096)
ctrl + alt + f6 (return to installer)
enter disk and rescan, choose drive, custom (not blivet), click unknown, and set:
1 GiB > format > ext4 > mount point /boot > update
40 GiB > format > xfs / ext4 > mount point / > update
(swap) > format > swap > update
leave qubes_dom0/vm alone.
click done. Accept changes, Begin installation.
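To sanity-check the result afterwards, you can confirm which sector size each layer ended up with (device names follow the layout above; adjust to your system):

```shell
# Logical/physical sector size of every block device in the stack:
lsblk -o NAME,LOG-SEC,PHY-SEC
# Logical sector size of the LV backing the vm pool:
blockdev --getss /dev/qubes_dom0/vm
```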
Second stage install: went ahead and configured the system as wanted.
The templates install, but the installer fails at configuring sys-firewall. On reboot, all VMs are there but none starts properly.
When looking at /dev/mapper: the sys-net, sys-firewall, and sys-usb related filesystems were not created, which of course makes those VMs fail to start.
I’ll try to answer based on the questions I found there.
fdisk definitely works if you’re installing using BIOS, and gdisk for UEFI.
Partition tables typically put a 1 MiB offset before the beginning of the first partition, which means 2048 sectors on 512-byte disks or 256 sectors on 4K disks.
So when you run fdisk there, it should show a default first sector of 256, which is fine and good to use.
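The arithmetic behind those defaults, in case anyone wants to check another sector size:

```shell
# 1 MiB alignment offset expressed in sectors:
echo $(( 1024 * 1024 / 512 ))    # 2048 sectors on a 512-byte disk
echo $(( 1024 * 1024 / 4096 ))   # 256 sectors on a 4Kn disk
```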
Actually, 20 GB is enough if we manage what data ends up on the root partition. For example, installing 2-3 templates at once can make the template installs fail because of insufficient space in dom0, and some users here have experienced that (back when everyone was failing to install the Kali template). I think you might fail too, since 20 GB is not enough for installing the 4 default templates, unless you install them one by one and delete the previous data.
If you want to give Btrfs a try, it’s also good; everything works out of the box. For the layout, you can find it here; just ignore 1-2 things in the drive section.
As far as I understand, the reason why 4K dm-crypt breaks some VM volumes on LVM Thin but not on Btrfs is a combination of two things.
LVM Thin uses the same logical sector size as the underlying (dm-crypt) block device. And then a 4K LVM Thin block device in dom0 results in a 4K xen-blkfront block device in the VM, because Xen automatically passes through the logical sector size.
Whereas file-reflink layouts like Btrfs use loop devices, which are currently always configured (by /etc/xen/scripts/block from Xen upstream) with 512 byte logical sectors - again passed through to the VM.
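An easy way to see which sector size actually reached a VM (run inside the VM; xvda is the root volume):

```shell
# 512 on loop-backed file-reflink pools, 4096 on a 4K LVM Thin pool:
blockdev --getss /dev/xvda
# or, without root:
cat /sys/block/xvda/queue/logical_block_size
```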
The “root” xvda and “volatile” xvdc volumes don’t properly work with 4K sectors because they are disk images containing a GPT/MBR partition table, which can only specify sizes and locations in sector units:
The VM initramfs script formatting “volatile” on every VM start currently assumes that a sector is 512 bytes, which should be straightforward to fix (WIP)
It’s going to be more difficult to somehow make the “root” volume sector-size agnostic…
(The “private” xvdb and “kernel” xvdd volumes seem to work fine if /etc/xen/scripts/block is patched to configure them with 4K sectors. They’re just ext4/ext3 filesystem images without a partition table.)
I don’t get the question. What does fine mean? And aren’t you benchmarking 512-byte sectors on XFS/file-reflink varlibqubes, vs. 4K sectors on LVM Thin vm-pool (using IIUC a custom partitioned TemplateVM root volume) - which would be two very different storage drivers? Oh, your vm-pool is XFS/file-reflink on top of LVM Thin? Okay that would be the same Qubes storage driver then, but it’s still a different (and unusual) storage stack.
@rustybird: This is really interesting!!! Please poke me on updates of this. It won’t land in Qubes before the next release for sure, but this is a really pertinent advancement in my opinion, even more so if it applies to the default partition scheme (thin LVM, separated root/vm pools).
I recently installed Qubes exactly as you have described: creating partitions, specifying qvm-pool, manually installing templates, etc. It all worked well, and I’m grateful for your instructions.
However, I couldn’t get any VM to start. In their logs, I saw that they complained about the filesystem of /xvdc, as you have described in that GitHub issue. I think that qvm-pool command line was intended to avoid this (by using an LVM thin pool, as you said on GitHub), but unluckily it didn’t work for me.
Should I reinstall Qubes, or should I build 4Kn templates and find a way to transfer them into dom0 without any VM running? Thank you!
Btw, my self-built 4Kn template also fails to start for the same reason, in Qubes on a 512e SSD.
@rustybird Sorry I was not more specific: I meant the root and private volume creation: was that tested and working?
So if I understand correctly, I could apply your patch and have the volatile volume fixed. But for creating root and private volumes, I would need to build an ISO, or patch the stage 1 and stage 2 install so that the templates are fixed when they are decompressed, in order to create a working system and properly compare performance with/without the fixes.
I was looking for next steps to get the main devs’ attention on the actual performance losses/differences shown in this thread.
Otherwise, people are trying to move away from the LVM thin provisioning model at install time as of now. Some want ZFS, XFS, or Btrfs, since the speed differences are quite significant.
Fixing LUKS+LVM thin provisioning would be great. Otherwise LVM keeps getting blamed for performance losses where other implementations simply don’t suffer from the same flaws that LVM thin provisioning does, per Qubes’ implementation of volatile, private and root volume creation.
Not sure that I understand your question, but standard (i.e. not e.g. in a standalone HVM) private volumes are already sector-size agnostic in their content, so compatibility wise it doesn’t matter whether they are presented to the VM as 512B or 4KiB block devices.
Standard root volumes have sector-size specific content, and I don’t think it’s feasible to dynamically patch that volume content (specifically, the partition table) in dom0, because it contains untrusted and potentially malicious VM controlled data.
Backward compatibility is a real headache here. It seems like the existing root and private volumes should simply be presented to the VM as 512B devices by default for now. In the case of an LVM installation layout, that might even entail forcing 512B sectors for the whole LUKS device - unless there’s a good way to set an independent sector size for the LVM pool or ideally per LVM volume.
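For what it’s worth, LUKS2 does let you pin the sector size explicitly at format time, and you can inspect what an existing mapping uses (a sketch; /dev/sdX and the mapping name are placeholders, and luksFormat is destructive on real devices):

```shell
# Force 512-byte sectors on the LUKS2 layer regardless of what the disk
# reports (cryptsetup >= 2.0). DESTRUCTIVE: wipes the target device.
cryptsetup luksFormat --type luks2 --sector-size 512 /dev/sdX
# Inspect the sector size of an already-opened mapping:
cryptsetup status luks-mapping | grep 'sector size'
```

Per-LVM-volume sector sizes would still need support below/above this layer; pinning it at the LUKS device is the coarse fallback described above.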