I can confirm this behavior. In other words, qvm-volume extend QUBE:private 20G is not always idempotent.
When resizing a dispVM’s volume, qvm-volume info disp1234 reports size: 20000000000 and, accordingly, running qvm-volume extend disp1234:private 20G multiple times succeeds.
When resizing an appVM’s volume, the resulting size is slightly larger than 20000000000, and re-running the extend command fails.
After experimenting a little with explicit sizes instead of shorthands: some error messages suggest that explicitly specified sizes must be multiples of 512.
With that clue, I started using the 20Gi shorthand instead of 20G. (That’s 20×1024×1024×1024 bytes instead of 20×1000×1000×1000, and is thus guaranteed to always be a multiple of 512 — source code.) So far, qvm-volume extend vm:private 20Gi has been idempotent in all the cases I’ve tested.
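For concreteness, here is the arithmetic behind the two suffixes (a standalone sketch of my own, not Qubes code):

```python
# What "20G" vs "20Gi" actually mean in bytes.
G = 1000 ** 3    # decimal gigabyte, the "G" suffix
Gi = 1024 ** 3   # binary gibibyte, the "Gi" suffix

size_g = 20 * G    # 20_000_000_000 bytes
size_gi = 20 * Gi  # 21_474_836_480 bytes

# Interestingly, both values are multiples of 512 (the sector size the
# error messages hint at), but only the Gi value is also a multiple of
# the 4 MiB granularity that LVM rounds to.
print(size_g % 512, size_gi % 512)                      # 0 0
print(size_g % (4 * 1024**2), size_gi % (4 * 1024**2))  # 1558528 0
```

So the 512-multiple requirement alone doesn’t explain the difference between the two suffixes; the 4 MiB granularity does.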
Why the dispVM seemed to be fine with 20G, but not the appVM, I don’t know. I’m not even convinced it is relevant that one was an appVM and the other a dispVM. However, I’ll be using 20Gi until I understand exactly where the quirk comes from.
LVM implicitly rounds sizes up to a multiple of 4 MiB, but the lvm_thin Qubes OS storage driver only becomes aware of this afterwards. That’s why, if you resize an lvm_thin volume to a new size that isn’t already divisible by 4 MiB and then try to resize it to the same size again, the second resize is initially seen as an attempt to shrink.
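A minimal simulation of that interaction (my own sketch of the logic described above, not the actual lvm_thin driver code):

```python
import math

EXTENT = 4 * 1024 ** 2  # LVM rounds allocations up to a multiple of 4 MiB

def lvm_actual_size(requested: int) -> int:
    """Size LVM actually allocates: the request, rounded up to 4 MiB."""
    return math.ceil(requested / EXTENT) * EXTENT

def second_extend_looks_like_shrink(nominal: int) -> bool:
    """After one resize to `nominal`, does asking for `nominal` again
    look like a shrink (i.e. is the on-disk size already larger)?"""
    return nominal < lvm_actual_size(nominal)

print(second_extend_looks_like_shrink(20 * 1000**3))  # 20G  -> True
print(second_extend_looks_like_shrink(20 * 1024**3))  # 20Gi -> False
```

20G (20,000,000,000 bytes) gets rounded up to 20,002,635,776 bytes, so the repeated extend appears to shrink the volume; 20Gi lands exactly on a 4 MiB boundary, so the repeat is a no-op.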
That’s a different bug in lvm_thin: for snap_on_start volumes (e.g. the private volume of a DisposableVM, or the root volume of an AppVM) the driver never looks at the volume’s actual size, so it doesn’t even become aware that the actual size has diverged from the nominal one. Basically, that bug happens to mask the rounding problem.
It has been a long-standing but minor irritation that if I issue the command to set the size to 20G on a VM that has already been resized that way, it fails (this happens in my Salt setup with the sys-cacher qube). It’s harmless because the qube is already at a good size, and all I had to do was change it to 20Gi.
[Meanwhile, I just realized my newer schema wasn’t even trying to do it at all, because of a typo on my part. Again, no issue as long as I’m just “updating” an existing sys-cacher… but a new one would never have its size set, and I’ll bet that has something to do with the behavior I see on my other system, where I regenerated the VM from scratch a few weeks ago (in essence, the cacher doesn’t seem to be saving me any time whatsoever; that could be due to a lack of storage, methinks).]