Yup. You’re looking in the wrong place. (edit: that was the right place). More info in the docs:
But most users shouldn’t have to worry about that. Instead, they should use the Backup and Restore tool to back up, import and export their qubes:
I haven’t deleted anything - the only thing that happened is that the notebook fell to the ground once. Is there any technique to recover data in a case like that?
If you don’t see them, it might be that they got lost. But maybe someone here knows how to recover LVM partitions. Otherwise backups (if you did them) may be your only salvation.
Also, what happens if you run qvm-volume info [vmname]:private ?
The most important thing is to secure your data (if you have anything
worth saving). If it’s all unimportant, just reinstall and save yourself
the hassle.
You can do this by mounting the -private volume in dom0 and copying out
any important data.
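For example, something like this (a rough sketch; the volume name and destination path are placeholders, and the volume may first need activating):
$ sudo lvchange -ay qubes_dom0/vm-<some_name>-private   # activate the LV if it is inactive
$ sudo mkdir -p /mnt/recover
$ sudo mount -o ro /dev/qubes_dom0/vm-<some_name>-private /mnt/recover
$ sudo cp -a /mnt/recover/home/user /path/to/safe/location/   # copy out whatever matters
$ sudo umount /mnt/recover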
If you want to carry on, then try reinstalling a template. At the
minimum you want to be able to get a template up and running.
You can do this by copying the relevant rpm into dom0, and using dnf
to remove the existing template and rpm -i XXX to install a new one.
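Roughly along these lines (a sketch only; the source qube and package names are placeholders, not the exact ones you need):
$ qvm-run --pass-io <some-qube> 'cat /home/user/Downloads/qubes-template-fedora-XX.rpm' > qubes-template-fedora-XX.rpm
$ sudo dnf remove qubes-template-fedora-XX
$ sudo rpm -i qubes-template-fedora-XX.rpm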
Try that, and confirm you can get a template up and running. If not,
then take note of the relevant error message and post it back.
If you do, then switch qubes to use the working template, and see if
they will start. (Begin with a qube that has no network access, so it
isn’t triggering sys-net etc.)
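Switching a qube over is something like this (names are placeholders):
$ qvm-shutdown --wait <some-qube>
$ qvm-prefs <some-qube> template <working-template>
$ qvm-start <some-qube>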
You could look for error messages in ‘dmesg’ or ‘journalctl’ output, but
those flag fields ‘Vwi---tzp-’ in the output you posted also indicate a
volume health problem. There should be an ‘a’ in the 5th position to
indicate it’s active (inactive is why the volumes aren’t showing up in
/dev/qubes_dom0). There should also be no ‘p’ in the 9th column. See
‘man lvs’ for details.
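If it helps, you can ask lvs for just those fields by name (should work as-is in dom0, but treat it as a sketch):
$ sudo lvs -o lv_name,lv_attr,lv_active,lv_health_status qubes_dom0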
Since your pool00 volume is showing bad health, that’s a clue the
problem might be fixed by running ‘lvconvert --repair qubes_dom0/pool00’.
You can try to make a volume active (mountable) manually with ‘lvchange
-ay qubes_dom0/volumename’. If it doesn’t work you should see an error
message explaining why.
Another repair option is to use ‘vgcfgrestore’ which is pretty well
documented on various Linux sites.
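Roughly like this (a sketch; the archive file name is illustrative, yours will differ):
$ sudo vgcfgrestore --list qubes_dom0   # list the archived metadata versions LVM has kept
$ sudo vgcfgrestore -f /etc/lvm/archive/qubes_dom0_00042-1234567890.vg qubes_dom0   # thin volumes may need --force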
$ sudo lvconvert --repair qubes_dom0/pool00
Using default stripesize 64.00 KiB.
Insufficient free space: 17 extents needed, but only 0 available
And:
$ sudo lvscan -all
inactive '/dev/qubes_dom0/pool00' [2.03 TiB] inherit
ACTIVE '/dev/qubes_dom0/root' [2.03 TiB] inherit
ACTIVE '/dev/qubes_dom0/swap' [7.53 GiB] inherit
inactive '/dev/qubes_dom0/vm-whonix-ws-dvm-private' [2.00 GiB] inherit
inactive '/dev/qubes_dom0/vm-<various_other>' [X.XX GiB] inherit
[... over 100 other inactive items later ....]
ACTIVE '/dev/qubes_dom0/pool00_tmeta' [68.00 MiB] inherit
ACTIVE '/dev/qubes_dom0/pool00_tdata' [2.03 TiB] inherit
And:
$ sudo lvchange -ay qubes_dom0/vm-<some_name>-private
Refusing activation of partial LV qubes_dom0/vm-<some_name>-private. Use '--activationmode partial' to override.
$ sudo vgcfgrestore qubes_dom0
Consider using option --force to restore Volume Group qubes_dom0 with thin volumes.
Restore failed.
$ sudo vgcfgrestore qubes_dom0 --force
WARNING: Forced restore of Volume Group qubes_dom0 with thin volumes.
Cannot restore Volume Group qubes_dom0 with 1 PVs marked as missing.
Restore failed.
Could it be that no PV is actually missing, but one PV is erroneously marked as missing? (I know that the SSD works because I have already removed it and cloned it on another machine.) And if so, how would one undo this “mark”?
Qubes should offer installation on various file systems (Btrfs, XFS, HAMMER?) to get rid of problems such as this. It does happen, and the good news is that it’s not your boot that’s affected. I keep various other backup options because I always have this type of problem, and at one time there was a bug (vulnerability) in the restore option.
Suddenly, none of my qubes were able to start after booting QubesOS. Only dom0 was left working.
This was apparently because these qubes had been set to “inactive” by LVM (see $ sudo lvscan -all). Activating them manually (using $ sudo lvchange -ay qubes_dom0) failed.
This in turn had happened because LVM had added the flag “m” (“MISSING”) to one of the PVs (LVM’s Physical Volumes). (See the “Attr” column after $ sudo pvs.) But the PV was not really “missing”, it was just marked as such.
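For example (sketch):
$ sudo pvs -o pv_name,vg_name,pv_attr   # an 'm' as the third Attr character means MISSING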
This might have happened due to a physical shock to the machine (I don’t see any other possible reason).
SOLUTION:
The PV had been wrongly marked as “missing” in the LVM metadata of the VG (Volume Group).
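For reference, when the device is in fact present, LVM has an option meant for clearing that state (a sketch with an illustrative PV path; not necessarily the exact fix applied here):
$ sudo vgextend --restoremissing qubes_dom0 /dev/nvme0n1p3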