I’ve read through this similar forum thread about VMs that disappeared. That user’s VM logical volumes had somehow been deactivated and thus wouldn’t boot, though dom0 was fully accessible.
I seem to have a similar issue.
Output of `lvs`:

```
LV             VG         Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
root           qubes_dom0 Vwi-aotz--  20.00g root-pool        49.85
root-pool      .          twi-aotz--  20.00g                  49.85  29.62
swap           .          -wi-ao----   3.75g
vm-1-private   .          Vwi---tz--   2.00g vm-pool   ...
vm-1-root      .          Vwi---tz--  10.00g vm-pool   ...
vm-1-date-back .          Vwi---tz--  10.00g vm-pool
vm-2-private   .          Vwi---tz--  10.00g vm-pool   ...
...
vm-pool        .          twi---tz-- 339.55g
...
vm-n-date-back .          Vwi---tz--   2.00g vm-pool
```
Output of `pvs`:

```
PV                   VG         Fmt  Attr PSize   PFree
/dev/mapper/luks-... qubes_dom0 lvm2 a--  455.24g 90.84g
```
Output of `vgs`:

```
VG         #PV #LV #SN Attr   VSize   VFree
qubes_dom0   1 216   0 wz--n- 455.25g 90.84g
```
Output of `vgscan`:

```
Found volume group "qubes_dom0" using metadata type lvm2
```
Output of `lvscan --all`:

```
inactive '/dev/qubes_dom0/vm-pool' [339.55 GiB] inherit
ACTIVE   '/dev/qubes_dom0/root-pool' [20.00 GiB] inherit
ACTIVE   '/dev/qubes_dom0/root' [20.00 GiB] inherit
ACTIVE   '/dev/qubes_dom0/swap' [3.75 GiB] inherit
inactive '/dev/qubes_dom0/vm-...-{private,private-snap,date-back,root,root-snap,volatile,}' [XX.00 GiB] inherit
...
inactive '/dev/qubes_dom0/vm-...-{private,private-snap,date-back,root,root-snap,volatile,}' [XX.00 GiB] inherit
ACTIVE   '/dev/qubes_dom0/root-pool_tmeta' [24.00 MiB] inherit
ACTIVE   '/dev/qubes_dom0/root-pool_tdata' [20.00 GiB] inherit
```
Seeing as my qubes’ thin volumes were inactive but seemingly not missing (as they were in the linked thread), I tried to activate one private volume using the command given in that thread.
Output of `lvchange -a y /dev/qubes_dom0/vm-1-private`:

```
Thin pool qubes_dom0-vm--pool-tpool (253:9) transaction_id is 39358, while expected 39360.
```
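As far as I can tell (happy to be corrected), the first number in that message is the transaction id the kernel’s thin-pool target reports, and the second is the one in LVM’s VG metadata. The kernel-side value can be read with `dmsetup status`: for a thin-pool target, it is the first field after the target name in the status line. A parsing sketch against an illustrative status line (the numbers below are made up, not real output from my system):

```shell
# A thin-pool line from `dmsetup status` has the shape:
#   <start> <length> thin-pool <transaction_id> <meta_used>/<meta_total> <data_used>/<data_total> ...
# On a live system the line would come from something like:
#   dmsetup status qubes_dom0-vm--pool-tpool
# The sample below is illustrative only.
STATUS='0 712093696 thin-pool 39358 1184/51200 667890/2781593 - rw no_discard_passdown queue_if_no_space - 1024'

# Field 4 is the transaction id recorded in the pool's own metadata.
KERNEL_TXID=$(echo "$STATUS" | awk '{print $4}')
echo "kernel transaction_id: $KERNEL_TXID"
```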
I made a backup of my volume group configuration using `vgcfgbackup` and examined the file for transaction IDs. Interestingly, the transaction ID for the `vm-pool` logical volume is `39358`, and the last transaction ID that apparently occurred for a VM (succeeded by transaction IDs for `lvol0_pmspare`, `vm-pool_tmeta`, `vm-pool_tdata`, `root-pool_tmeta`, and `root-pool_tdata`) is `39357`:
```
...
qubes_dom0 {
        id ...
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []

        physical_volumes {
                pv0 { ... }
        }

        logical_volumes {
                vm-pool {
                        id ...
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        creation_time ...
                        creation_host ...
                        segment_count = 1

                        segment1 {
                                ...
                                type = "thin-pool"
                                metadata = "vm-pool_tmeta"
                                pool = "vm-pool_tdata"
                                transaction_id = 39358   # <-- look here
                                ...
                        }
                }
                ...
                vm-1-private-snap {
                        id ...
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        creation_time ...
                        creation_host ...
                        segment_count = 1

                        segment1 {
                                ...
                                type = "thin"
                                thin_pool = "vm-pool"
                                transaction_id = 39357   # <-- look here
                                ...
                        }
                }
        }
}
```
Maybe something is wrong with the order of my transaction IDs? As noted above, `lvchange` expected an ID of `39360`, but the metadata backup only goes as high as `39358` (in the `39xxx` range).
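To double-check that I wasn’t overlooking a higher ID somewhere else in the file, I pulled every `transaction_id` out of the backup mechanically. A sketch of that, run here against a stand-in sample rather than my actual backup:

```shell
# Extract all transaction_id values from a vgcfgbackup-style text file
# and report the highest one. The heredoc is a stand-in sample; in
# practice the input would be the file written by
#   vgcfgbackup -f <file> qubes_dom0
BACKUP=$(mktemp)
cat > "$BACKUP" <<'EOF'
vm-pool { segment1 { transaction_id = 39358 } }
vm-1-private-snap { segment1 { transaction_id = 39357 } }
EOF

# grep isolates the assignments; awk keeps the number; sort -n picks the max.
MAX_TXID=$(grep -o 'transaction_id = [0-9]*' "$BACKUP" | awk '{print $3}' | sort -n | tail -n 1)
echo "highest transaction_id in backup: $MAX_TXID"
rm -f "$BACKUP"
```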
I should add that, before editing my `lvm.conf`, I had tried to free up some space for metadata by following the instructions linked in this forum comment. Based on my `~/.bash_history`, I ran these commands:

```
swapoff -a
lvresize -L -200M qubes_dom0/swap
swapon -a
swapoff -a
swapon -a
mkswap /dev/qubes_dom0/swap
swapon -a
lvextend --poolmetadatasize +200M qubes_dom0/vm-pool
```
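For completeness: the fix I’ve seen suggested elsewhere for this exact error is to edit the `transaction_id` in a fresh `vgcfgbackup` dump so that the two sides agree, then apply it with `vgcfgrestore --force`. I haven’t dared run that yet, and I’m not sure which of the two values (`39358` vs `39360`) is the authoritative one in my case, so the sketch below only exercises the edit step on a throwaway file, with the backup/restore commands shown as comments:

```shell
# Sketch of the edit step only, on a throwaway file. On a live system
# the surrounding steps would be something like:
#   vgcfgbackup -f /root/qubes_dom0.vgcfg qubes_dom0          # dump VG metadata
#   (edit transaction_id as below)
#   vgcfgrestore --force -f /root/qubes_dom0.vgcfg qubes_dom0 # apply edited metadata
# Which ID to keep (39358 vs 39360) is exactly what I'm unsure about,
# so treat the numbers here as placeholders.
F=$(mktemp)
printf 'transaction_id = 39358\n' > "$F"
sed -i 's/transaction_id = 39358/transaction_id = 39360/' "$F"
EDITED_TXID=$(awk '{print $3}' "$F")
echo "edited transaction_id: $EDITED_TXID"
rm -f "$F"
```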