Where is the qubes' data stored?

AFAIK, all qubes are stored at:

$ ls /var/lib/qubes/appvms

Indeed, this brings up the complete list of my Qubes. However…

$ ls /var/lib/qubes/appvms/some-important-qube

…shows only two items: firewall.xml and icon.png -> /usr/share/icons/hicolor/… (and it’s like this for every qube in the list)

But where is the data?! Am I looking in the wrong place?


Yup. You’re looking in the wrong place. (Edit: that was the right place.) More info in the docs:

But most users shouldn’t have to worry about that. Instead, they should use the Backup and Restoration tool to back up, import, and export their qubes:

Thanks, but it seems I WAS looking in the right place, since the documentation you mention says:

“Currently, the default driver is xen which is the default way of storing volume images as files in a directory tree like /var/lib/qubes/.”

…which I did (see previous post). I don’t see any image files in that tree…

So this means I’m fully F’ed now?


So all my data is gone now?

I haven’t deleted anything. The only thing that happened is that the notebook fell to the floor once. Is there any technique to recover data in a case like that?

Sorry. My bad.

You can’t open AppVMs? Do you see all your VMs’ volumes with lsblk (in dom0)?


No Qubes will boot.

$ lsblk shows

sdb
└─sdb1
  └─luks-07450b7c-…
    ├─qubes_dom0-swap
    └─qubes_dom0-pool00_tdata
      └─qubes_dom0-pool00-tpool
        └─qubes_dom0-root
sr0
sda
├─sda2
│ └─luks-991b4d66-…
│   ├─qubes_dom0-pool00_tmeta
│   │ └─qubes_dom0-pool00-tpool
│   │   └─qubes_dom0-root
│   └─qubes_dom0-pool00_tdata
│     └─qubes_dom0-pool00-tpool
│       └─qubes_dom0-root
└─sda1

Should there be anything else?

Normally you would see all your VMs’ volumes there, like this:

    │   ├─qubes_dom0-vm--fedora--32--private             253:147  0     2G  0 lvm   
    │   ├─qubes_dom0-vm--debian--10--root--1598520654--back

If you don’t see them, they may have been lost. But maybe someone here knows how to recover LVM volumes. Otherwise, backups (if you made them) may be your only salvation.

Also, what happens if you run qvm-volume info [vmname]:private?


I indeed do not see any such VM volumes in that list.

I don’t have any backups available.

$ qvm-volume info [vmname]:private

…returns:

pool               lvm
vid                qubes_dom0/vm-[vmname]-private
rw                 True
source
save_on_stop       True
save_on_start      False
size               57982058496
usage              0
revisions_to_keep  1
is_outdated        False
Available revisions (for revert): none

I checked a few VMs and all of them had a nonzero usage value there, as in they have stuff in them, whereas yours doesn’t. :frowning:

:cry:

I don’t know how else to help. Sorry.

I appreciate your help, thanks!


The important question is: what happened before you lost your qubes?

Qubes used to store qube data in /var/lib/qubes/, but the default install now uses LVM.

What do you get from these three commands?

sudo pvs
sudo vgs
sudo lvs

Well, the only “exceptional” thing that happened was my notebook dropping to the floor (I tripped over the power cable).

$ sudo pvs returns:

PV                                                    VG         Fmt  Attr PSize   PFree
/dev/mapper/luks-07450b7c-1957-4133-bb5e-aaa34566f29e qubes_dom0 lvm2 a-m  238.47g 15.87g
/dev/mapper/luks-991b4d66-8a68-490c-bc47-2f47dcd5eba8 qubes_dom0 lvm2 a--    1.82t      0

$ sudo vgs returns:

VG         #PV #LV #SN Attr   VSize VFree
qubes_dom0   2 124   0 wz-pn- 2.05t 15.87g

$ sudo lvs returns:

LV                                     VG         Attr        LSize Pool   Origin                                 Data%  Meta%
pool00                                 qubes_dom0 twi---tzp-  2.03t                                               21.98  25.72
pool00_meta0                           qubes_dom0 -wi-----p- 68.00m
root                                   qubes_dom0 Vwi-aotzp-  2.03t pool00                                         2.45
swap                                   qubes_dom0 -wi-ao--p-  7.53g
vm-anon-whonix-private                 qubes_dom0 Vwi---tzp-  2.00g pool00 vm-anon-whonix-private-1596320759-back
vm-anon-whonix-private-1596320759-back qubes_dom0 Vwi---tzp-  2.00g pool00
vm-bionic-desktop-private              qubes_dom0 Vwi---tzp-  2.00g pool00
...
(and a long list of other -private, -back, -root, -snap, or -volatile items)

Do you have any recommended next steps?


The most important thing is to secure your data (if you have anything
worth saving). If it’s all unimportant, just reinstall and save yourself
the hassle.
You can do this by mounting the -private volume in dom0 and copying out
any important data.

If you want to carry on, then try reinstalling a template. At the
minimum you want to be able to get a template up and running.
You can do this by copying the relevant rpm into dom0, using dnf
to remove the existing template, and rpm -i XXX to install the new one.
Try that, and confirm you can get a template up and running. If not,
take note of the relevant error message and post it back.
If you do, then switch qubes to use the working template, and see if
they will start. (Begin with a qube that has no network access, so it
isn’t triggering sys-net etc.)

Keep us posted.


You could look for error messages in ‘dmesg’ or ‘journalctl’ output, but
those flag fields (‘Vwi---tzp-’) in the output you posted also indicate a
volume health problem. There should be an ‘a’ in the 5th position to
indicate it’s active (being inactive is why the volumes aren’t showing up
in /dev/qubes_dom0). There should also be no ‘p’ in the 9th position. See
‘man lvs’ for details.
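
Those two positions can be checked mechanically. A minimal sketch, using the attr string from the lvs output above (lv_attr_check is a helper made up for illustration, not an LVM tool):

```shell
# Hypothetical helper (not part of LVM): decode the two lv_attr positions
# discussed above. Position 5 is the activation state ('a' = active);
# position 9 is the health state ('p' = partial, i.e. a PV is missing).
lv_attr_check() {
  attr="$1"
  state=$(printf '%s' "$attr" | cut -c5)    # 5th character
  health=$(printf '%s' "$attr" | cut -c9)   # 9th character
  [ "$state" = "a" ] && echo "active" || echo "inactive"
  [ "$health" = "p" ] && echo "partial (missing PV)" || echo "health ok"
}

lv_attr_check "Vwi---tzp-"   # the attr string from the lvs output above
```

Run against ‘Vwi---tzp-’ it reports inactive and partial, which is exactly the unhealthy combination described above.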

Since your pool00 volume is showing bad health, that’s a clue the
problem might be fixed by running:

pvck /dev/mapper/luks-
pvscan /dev/mapper/luks-
vgck qubes_dom0
vgscan --mknodes qubes_dom0
lvconvert --repair qubes_dom0/pool00
lvscan --all

You can try to make a volume active (mountable) manually with ‘lvchange
-ay qubes_dom0/volumename’. If it doesn’t work you should see an error
message explaining why.

Another repair option is to use ‘vgcfgrestore’ which is pretty well
documented on various Linux sites.


Thank you. Here’s the gist of the commands with notable results:

$ sudo vgck qubes_dom0
The volume group is missing 1 physical volumes.

Physical volumes:

$ sudo pvscan
PV /dev/mapper/luks-<UUID_1>   VG qubes_dom0     lvm2 [1.82 TiB / 0     free]
PV /dev/mapper/luks-<UUID_2>   VG qubes_dom0     lvm2 [238.47 GiB / 15.87 GiB free]

$ sudo pvs
PV                                                    VG         Fmt  Attr PSize   PFree
/dev/mapper/luks-07450b7c-1957-4133-bb5e-aaa34566f29e qubes_dom0 lvm2 a-m  238.47g 15.87g
/dev/mapper/luks-991b4d66-8a68-490c-bc47-2f47dcd5eba8 qubes_dom0 lvm2 a--  1.82t   0

(Note the first PV has the “m” flag, for “missing”.)
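
The pvs “Attr” field is only three characters wide: (a)llocatable, e(x)ported, and (m)issing. A tiny sketch of reading the third position (pv_missing is a hypothetical helper, not an LVM command):

```shell
# Hypothetical helper: report whether a pvs "Attr" string
# (allocatable / exported / missing) carries the 'm' (missing) flag
# in its third position.
pv_missing() {
  case "$1" in
    ??m) echo "marked missing" ;;
    *)   echo "present" ;;
  esac
}

pv_missing "a-m"   # the first PV above
pv_missing "a--"   # the second PV above
```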

Then:

$ sudo lvconvert --repair qubes_dom0/pool00
Using default stripesize 64.00 KiB.
Insufficient free space: 17 extents needed, but only 0 available

And:

$ sudo lvscan --all
inactive          '/dev/qubes_dom0/pool00' [2.03 TiB] inherit
ACTIVE            '/dev/qubes_dom0/root' [2.03 TiB] inherit
ACTIVE            '/dev/qubes_dom0/swap' [7.53 GiB] inherit
inactive          '/dev/qubes_dom0/vm-whonix-ws-dvm-private' [2.00 GiB] inherit
inactive          '/dev/qubes_dom0/vm-<various_other>' [X.XX GiB] inherit

[... over 100 other inactive items later ....]

ACTIVE            '/dev/qubes_dom0/pool00_tmeta' [68.00 MiB] inherit
ACTIVE            '/dev/qubes_dom0/pool00_tdata' [2.03 TiB] inherit

And:

$ sudo lvchange -ay qubes_dom0/vm-<some_name>-private
Refusing activation of partial LV qubes_dom0/vm-<some_name>-private.  Use '--activationmode partial' to override.

$ sudo vgcfgrestore qubes_dom0
Consider using option --force to restore Volume Group qubes_dom0 with thin volumes.
Restore failed.

$ sudo vgcfgrestore qubes_dom0 --force
WARNING: Forced restore of Volume Group qubes_dom0 with thin volumes.
Cannot restore Volume Group qubes_dom0 with 1 PVs marked as missing.
Restore failed.

Could it be that no PV is actually missing, but one PV is erroneously marked as missing? (I know that the SSD works, because I have removed it and cloned it on another machine already.) And if so, how would one undo this “mark”?

Qubes should offer other file systems (Btrfs, XFS, HAMMER?) to avoid problems like this. It does happen, and the good news is that it’s not your boot drive. I keep various other options for backups because I always have this type of problem, and at one time there was a bug (vulnerability) in the restore option.

SOLVED!

Here’s the summary of what has happened:

  1. Suddenly, none of my qubes were able to start after booting QubesOS. Only dom0 was left working.

  2. This apparently was the case because these qubes’ volumes had been set to “inactive” by LVM (see $ sudo lvscan --all). Activating them manually (using $ sudo lvchange -ay qubes_dom0) failed.

  3. This apparently had happened because LVM had added the flag “m” (“MISSING”) to 1 of the PVs (LVM’s Physical Volumes). (See column “Attr” after $ sudo pvs.) But the PV was not really “missing”, it was just marked as such.

  4. This might have happened due to a physical shock to the machine (I don’t see any other possible reason).

SOLUTION:

  1. The PV had been wrongly marked as “missing” in the LVM metadata file of the VG (Volume Group).

  2. Manually create a backup with vgcfgbackup to ‘/etc/lvm/backup/’ (by default), open it with a text editor and delete the erroneous “MISSING” flag, save it. (See page 19 of the PDF at: https://mbroz.fedorapeople.org/talks/LinuxAlt2009_2/lvmrecovery.pdf)

  3. Load the corrected metadata file using ‘vgcfgrestore’.

  4. Use $ sudo lvchange -ay qubes_dom0 to re-activate all logical volumes.

  5. Reboot and find your qubes working again.
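
For future readers, the metadata edit in steps 2 and 3 can be sketched as follows. This is a sketch only: the pv0 fragment below is an illustrative assumption (the real backup file in /etc/lvm/backup/ is much larger, and its exact layout varies by LVM version), so inspect your own file in a text editor rather than blindly running sed:

```shell
# Illustrative fragment of an LVM VG backup file, as 'vgcfgbackup qubes_dom0'
# would write to /etc/lvm/backup/qubes_dom0. The content is an assumption
# for demonstration, not a real file (the PV id is a placeholder).
cat > /tmp/qubes_dom0.backup <<'EOF'
pv0 {
    id = "AAAAAA-AAAA-AAAA-AAAA-AAAA-AAAA-AAAAAA"
    device = "/dev/mapper/luks-07450b7c-1957-4133-bb5e-aaa34566f29e"
    status = ["ALLOCATABLE"]
    flags = ["MISSING"]
}
EOF

# Step 2: delete the erroneous MISSING flag (leaving an empty flags list).
sed -i 's/\["MISSING"\]/[]/' /tmp/qubes_dom0.backup

# On the real system, steps 3 and 4 would then be:
#   sudo vgcfgrestore -f /tmp/qubes_dom0.backup qubes_dom0
#   sudo lvchange -ay qubes_dom0
```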

Thanks to all the kind souls for helping, giving pointers and advice!
