More info, as suggested here:
[user@dom0 ~]$ sudo pvs
  PV                   VG         Fmt  Attr PSize   PFree
  /dev/mapper/luks-xxx qubes_dom0 lvm2 a--  117.11g 23.18g
[user@dom0 ~]$ sudo vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  qubes_dom0   1 172   0 wz--n- 117.11g 23.18g
[user@dom0 ~]$ sudo lvs
  LV     VG         Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 qubes_dom0 twi-aotz-- 85.13g               97.31  74.44
  root   qubes_dom0 Vwi-aotz-- 85.13g pool00        25.98
  swap   qubes_dom0 -wi-ao----  7.41g
  # other qubes omitted
[user@dom0 ~]$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/qubes_dom0-root   84G   21G   59G  26% /
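If it helps, the per-volume usage inside the thin pool can be listed as well; this is the invocation I would use, with the field names taken from the lvs man page (I have not included its output here):

[user@dom0 ~]$ sudo lvs -a -o lv_name,lv_size,data_percent,pool_lv qubes_dom0

Note that pool00 reports 97.31 Data% while root itself only uses 25.98, so the remainder is presumably consumed by the other thin volumes omitted above.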
There is a similar issue which suggests running
sudo lvextend -l +100%FREE /dev/mapper/qubes_dom0-pool00
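If I understand lvextend correctly, that command grows pool00 into the 23.18g of free space in the volume group rather than reclaiming anything; a more cautious variant would be to extend by a fixed amount first (the 5G figure is just my example):

[user@dom0 ~]$ sudo lvextend -L +5G qubes_dom0/pool00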
I invoked qvm-backup followed by qvm-backup-restore --verify-only and did not alter any logical volumes myself, so I am not sure this applies to my case.
Does Qubes create the swap logical volume (visible in the lvs output above) by default, or is it a consequence of there not being enough RAM during the backup run?
How can I reclaim the disk space that was available before the backup started?
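For reference, this is what I plan to check for backup leftovers. The fstrim call is standard and returns unused blocks of dom0's root filesystem to the thin pool; the grep pattern is only my guess at how Qubes names snapshot/revision volumes, and the qvm-volume line assumes Qubes 4.x:

[user@dom0 ~]$ sudo lvs qubes_dom0 | grep -E -- '-back|snap'
[user@dom0 ~]$ qvm-volume info <vmname>:private
[user@dom0 ~]$ sudo fstrim -av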