I can confirm this behavior. In other words, `qvm-volume extend QUBE:private 20G` is not always idempotent.
When resizing a dispVM's volume, `qvm-volume info disp1234` reports `size: 20000000000`, and accordingly, running `qvm-volume extend disp1234:private 20G` multiple times succeeds.
When resizing an appVM's volume, the resulting size is slightly larger than 20000000000, and re-running the extend command fails.
After playing a little with explicit sizes instead of the shorthands, I noticed that some error messages suggest the explicitly specified sizes must be multiples of 512.
With that clue, I started using the 20Gi shorthand instead of 20G. (That's 20×1024×1024×1024 bytes instead of 20×1000×1000×1000, and is thus guaranteed to always be a multiple of 512; see the source code.) So far, `qvm-volume extend vm:private 20Gi` is idempotent in all the cases I've tested.
Why the dispVM seemed to be fine with 20G, but not the appVM, I don't know. I'm not even convinced it is relevant that one was an appVM and the other a dispVM. However, I'll be using 20Gi until I understand exactly where the quirk comes from.
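For what it's worth, the difference between the two suffixes is just arithmetic and can be checked outside Qubes entirely. A minimal Python sketch (the constant names are mine, not from the Qubes codebase):

```python
GB = 1000 ** 3   # "20G"  = decimal gigabytes
GIB = 1024 ** 3  # "20Gi" = binary gibibytes
MIB = 1024 ** 2

for label, size in (("20G", 20 * GB), ("20Gi", 20 * GIB)):
    print(label, size,
          "512-aligned:", size % 512 == 0,
          "4MiB-aligned:", size % (4 * MIB) == 0)
# 20G  is a multiple of 512 but NOT of 4 MiB;
# 20Gi is a multiple of both.
```

Since 1024 is a power of two, any whole number of Gi is automatically aligned to both 512 bytes and 4 MiB, while a G size only happens to be 512-aligned (10⁹ contains the factor 2⁹ = 512) and is generally not 4 MiB-aligned.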
LVM implicitly rounds up to a multiple of 4 MiB, but the lvm_thin Qubes OS storage driver only becomes aware of this afterwards. That's why, if you resize an lvm_thin volume to a new size that isn't already divisible by 4 MiB and then try to resize to the same size again, the second resize would initially be seen as an attempt to shrink.
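To illustrate, here is what that rounding does to a 20G request (a toy round-up helper of my own, not the actual driver code):

```python
MIB = 1024 ** 2
EXTENT = 4 * MIB  # LVM thin rounds volume sizes up to a 4 MiB boundary

def round_up(size, step=EXTENT):
    # hypothetical helper mimicking LVM's rounding
    return -(-size // step) * step

requested = 20 * 1000 ** 3    # "20G" = 20000000000 bytes
actual = round_up(requested)  # what LVM ends up allocating
print(actual, actual - requested)  # → 20002635776 2635776
# actual > requested, so a second "extend to 20G" asks for a size
# *smaller* than the volume's real size, i.e. it looks like a shrink.

# A "20Gi" request is already 4 MiB-aligned, so it survives unchanged:
assert round_up(20 * 1024 ** 3) == 20 * 1024 ** 3
```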
Oh, that would make sense. Thank you @rustybird! I'll verify that the behavior I observe is consistent with that and report back.
Do you have any idea why the volumes of some VMs (e.g. the dispVM in my case above) might behave differently in that regard? That one seemed just fine with 20G, which is not a multiple of 4 MiB.
Note: since you took the time to reply, I split the topic for clarity!
That's a different bug in lvm_thin: for snap_on_start volumes (e.g. the private volume of a DisposableVM, or the root volume of an AppVM) the driver never looks at the volume's actual size, so it doesn't even become aware that it has diverged from the nominal size. Basically, that problem happens to be masking the other problem.
It has been a long-standing but minor irritation that if I issue the command to set the size to 20G on a VM that has already had that done, it fails (this happens in my salt stuff with the sys-cacher qube). It's harmless because the qube is already at a good size, and all I had to do was change it to 20Gi.
[Meanwhile, I just realized my newer schema wasn't even trying to do it at all because of a typo on my part. Again, no issue as long as I'm just "updating" an existing sys-cacher… but a new one would never have the size set, and I'll bet that has something to do with the behavior I see on my other system, where I did regenerate the VM from scratch a few weeks ago (in essence the cacher doesn't seem to be saving me any time whatsoever; that could be due to a lack of storage, methinks).]