How to run VMs that don't appear in Qubes Manager?

  • Have some big VMs on secondary storage.
  • Can see them in lvs.
  • Can see their pool in GUI under Disk Space Monitor.
  • Don’t see them in Qubes Manager.
  • Don’t see them under qvm-ls.
  • Can’t qvm-run them: get error “no such domain”.

How can I run them manually and make them appear in Qubes Manager again?

Did you have them in Qubes Manager before?
What did you do before they disappeared from there?

Yes, before I upgraded to 4.1.

The ones on secondary storage are too big to back up and restore. It would take days.

Even with brand-new NVMe M.2 storage on both sides, going through a USB 3 port, it took more than a day just to back up and restore the 500 GB I have on the main pool, which I can see.

It’s a 32 GB RAM i7 machine, but Qubes Backup is so slow that I could probably upload and download that much data to the internet and back faster, if I knew how.

It could be that the Qubes 4.0 to Qubes 4.1 upgrade tool doesn’t support VMs on a secondary storage pool, but that’s just a guess.
Maybe someone with more knowledge will comment on this.

Maybe it’ll be faster if you disable backup compression.

Thanks. I didn’t use the upgrade tool; it’s a new install.

So you didn’t actually back up and restore them, and just added the old pool from the secondary storage to the new Qubes?
In that case I don’t know how to manually add them to your new Qubes.
My guess is that you could try to create a dummy qube with the same settings that your old qube had on the secondary pool, then remove its logical volumes and rename your old VM’s logical volumes with lvrename to the names that the newly created qube’s logical volumes had.
All your qube settings were stored in dom0 in your old Qubes system; they’re not in the qube’s logical volume. So you need to remember what settings they had.
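
Something like this, hypothetically (names in angle brackets are placeholders; I haven’t tested it):

qvm-create --standalone --label red -P <your_pool> dummy_qube
sudo lvremove <your_vg>/vm-dummy_qube-private
sudo lvremove <your_vg>/vm-dummy_qube-root
sudo lvrename <your_vg> vm-<your_old_qube>-private vm-dummy_qube-private
sudo lvrename <your_vg> vm-<your_old_qube>-root vm-dummy_qube-root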

Yes, that’s how I did it.
I backed up what they said in this section:

There is no information about your qube settings in /etc/qubes/ or in /etc/qubes-rpc/.
The guide assumed that you’d back up and restore all your qubes with the Qubes Backup/Restore tools.

Yes, I see that. Next is to figure out where those settings are.
I still have the old installation on the old NVMe card. Maybe I can mount it somehow to find out.
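
A rough sketch of how I imagine mounting it from dom0 (the device and volume group names are assumptions, and the old volume group may have the same name as the running one):

sudo cryptsetup open /dev/nvme1n1p2 old_luks
sudo vgscan
sudo vgchange -ay <old_vg>
sudo mount /dev/<old_vg>/root /mnt

As far as I know, dom0 keeps the qube settings in /var/lib/qubes/qubes.xml, so I’d look there first.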

If I can’t figure out which file of the old dom0 has these listed, or if it’s in some crazy database like the Windows registry, I think I’ll have to do something like this:

As @tzwcfq suggested, lvrename turned out to be the way to do it. Thank you!
I’ll write a little guide about that here when I finish this upgrade.

So, here are my lessons learned on this:

  • BEFORE you migrate to the new OS version:
    • save your pvs output to record the UUID of the secondary SSD.
    • record which qubes on your secondary storage are standalone and which are template-based.
    • record their virtualization type (PVH, HVM, etc.).
  • Do this whole procedure when you are not tired, are sober, and can focus on it from start to finish without distractions.
  • It will save you a lot of time if you know Linux CLI editing shortcuts, such as Ctrl-K, Alt-D, Ctrl-A, Ctrl-E, and Ctrl-arrows.
  • Use a CLI terminal in which you can copy/paste, such as xfce4-terminal. The default xterm doesn’t support it.
  • If you are unsure, create a test qube in the same pool and practice renaming it (see the sketch after this list).
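
A minimal practice sketch, with the test qube shut down (all names are examples):

qvm-create --standalone --label gray -P SSD2 test_qube
sudo lvs | grep test_qube
sudo lvrename VG_SSD2 vm-test_qube-private vm-test_qube2-private
sudo lvrename VG_SSD2 vm-test_qube2-private vm-test_qube-private
qvm-remove test_qube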

It’s a stressful procedure, but with a guide it is still a whole day faster than backing up and restoring large secondary-storage qubes.
Detailed steps are in the next post.

Here, I will use the following names:

pool_SSD2 - the thin pool on the secondary SSD, where your invisible logical volumes with existing data are. In the Qubes guide example, they call it “poolhd0”.
SSD2 - the pool name as it will appear in the Disk Space Monitor, under the hard-drive icon in the upper-right corner of your Qubes desktop. In the guide, they call it “poolhd0_qubes”.
VG_SSD2 - the volume group on the secondary SSD.

So, before the OS migration, record the UUID of your secondary SSD by sending the pvs output to a file that you save outside the old dom0:
sudo pvs > pvs_backup.txt
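
To get that file out of dom0, you can copy it to a qube you will keep (the qube name here is just an example):

qvm-copy-to-vm my-backup-qube pvs_backup.txt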

The file will contain something like this, with numbers instead of the XXXs:

PV VG Fmt Attr PSize PFree
/dev/mapper/luks-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX qubes_dom0 lvm2 a-- 952.85g <95.99g
/dev/mapper/sda2-luks-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX VG_SSD2 lvm2 a-- <930.51g 15.77g

Once in the new OS, add the UUID of the secondary SSD back into the /etc/crypttab file on the last line, in the same format as the existing entry for the primary SSD. It will look something like this, with numbers instead of the XXXs:

luks-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none discard
sda2-luks-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none
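
If you didn’t save the pvs output, you should still be able to read the LUKS UUID of the secondary SSD directly; the device path here is an example:

sudo blkid /dev/sda2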

Reboot to unlock the secondary SSD encryption.

Then add the pool on the secondary SSD using the command from the guide:
qvm-pool --add SSD2 lvm_thin -o volume_group=VG_SSD2,thin_pool=pool_SSD2,revisions_to_keep=2
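
List the pools to confirm that SSD2 appears (if this form isn’t available on your version, the legacy qvm-pool -l does the same):

qvm-pool list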

Now your secondary SSD logical volumes will be visible to Qubes OS.

Next, get them properly registered with qubesd in dom0 again:

First, grep to see exactly which domains on the secondary SSD you will need to rename. Grep for the volume group:
sudo lvs | grep VG_SSD2

You will see a listing for the whole pool. The logical volumes will be listed along with their revision backups in the leftmost column. Something like this:

vm-My_Standalone_qube-private
vm-My_Standalone_qube-private-1601010101-back
vm-My_Standalone_qube-private-1601010100-back
vm-My_Standalone_qube-root
vm-My_Standalone_qube-root-1601010101-back
vm-My_Standalone_qube-root-1601010100-back
vm-My_TemplateBased_qube-private
vm-My_TemplateBased_qube-private-1601010101-back
vm-My_TemplateBased_qube-private-1601010100-back

Now you are ready to create new LVs and rename them.
For template-based qubes, the root LV is created by default in the dom0 pool on the primary SSD, so it is not visible in the secondary SSD pool.

Create a new blank qube for the standalone:
qvm-create --standalone --label red -P SSD2 My_NEW_Standalone_qube

For a template-based qube, it would be:
qvm-create --template <your_template_name> --label red -P SSD2 My_NEW_TemplateBased_qube
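
If the old qube used a non-default virtualization mode (which you hopefully recorded earlier), set it on the new qube to match; hvm here is just an example:

qvm-prefs My_NEW_Standalone_qube virt_mode hvm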

Check to make sure which LVs you have to remove:

sudo lvs | grep VG_SSD2 | grep NEW

Next, delete these NEW blank qubes’ logical volumes and their backups.

!!!Be careful!!! Don’t delete your existing data volumes by mistake.

sudo lvremove VG_SSD2/vm-My_NEW_Standalone_qube-private
sudo lvremove VG_SSD2/vm-My_NEW_Standalone_qube-private-1601010101-back
sudo lvremove VG_SSD2/vm-My_NEW_Standalone_qube-root
sudo lvremove VG_SSD2/vm-My_NEW_Standalone_qube-root-1601010101-back

Don’t try to start the new qube at this point just to see what happens. It will lead to the qube hanging and being difficult to shut down.
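
If you started it anyway and it hangs, you can usually force it off from dom0:

qvm-kill My_NEW_Standalone_qube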

Check to make sure you removed all of them. You should not see any listed:

sudo lvs | grep VG_SSD2 | grep NEW

Now, rename your old LVs to new:

sudo lvrename VG_SSD2 vm-My_Standalone_qube-private vm-My_NEW_Standalone_qube-private
sudo lvrename VG_SSD2 vm-My_Standalone_qube-private-1601010101-back vm-My_NEW_Standalone_qube-private-1601010101-back
sudo lvrename VG_SSD2 vm-My_Standalone_qube-private-1601010100-back vm-My_NEW_Standalone_qube-private-1601010100-back
sudo lvrename VG_SSD2 vm-My_Standalone_qube-root vm-My_NEW_Standalone_qube-root
sudo lvrename VG_SSD2 vm-My_Standalone_qube-root-1601010101-back vm-My_NEW_Standalone_qube-root-1601010101-back
sudo lvrename VG_SSD2 vm-My_Standalone_qube-root-1601010100-back vm-My_NEW_Standalone_qube-root-1601010100-back

Check to make sure you renamed all of them. You should not see any listed:

sudo lvs | grep VG_SSD2 | grep My_Stand

Now you are ready to start the qube from Qube Manager, rename it back to its original name if you wish, and adjust whatever else is needed.

Repeat the procedure for the template-based qubes, but you will not need to remove or rename their root logical volumes, because those are in the dom0 pool on the primary SSD.
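
For example, a template-based qube with one backup revision would need only something like this (the timestamp is an example):

sudo lvremove VG_SSD2/vm-My_NEW_TemplateBased_qube-private
sudo lvremove VG_SSD2/vm-My_NEW_TemplateBased_qube-private-1601010101-back
sudo lvrename VG_SSD2 vm-My_TemplateBased_qube-private vm-My_NEW_TemplateBased_qube-private
sudo lvrename VG_SSD2 vm-My_TemplateBased_qube-private-1601010101-back vm-My_NEW_TemplateBased_qube-private-1601010101-back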