Secondary storage issues for maximum redundancy

On my old laptop with a 120 GB SSD and a 2 TB HDD, I had to replace the SSD at one point and the HDD at another.
I selected my new laptop with redundancy in mind: it has 3 SSDs. I installed Qubes 4.1 on all three, and share the pools between them.

What works:
[user@dom0 ~]$ sudo lvs

  LV                                                         VG        Attr       LSize    Pool       Origin                                           Data%  Meta%  Move Log Cpy%Sync Convert
  Dom0-0                                                     VolGroup0 Vwi-aotz--   20.00g root-pool0                                                  20.28                                  
  root-pool0                                                 VolGroup0 twi-aotz--   20.00g                                                             20.28  18.70                           
  swap                                                       VolGroup0 -wi-ao----    4.00g
  vm-pool0                                                   VolGroup0 twi-aotz--   <3.30t                                                             3.86   2.93                            
# with all VMs located in vm-pool0
  Dom2-0                                                     VolGroup2 Vwi-a-tz--   20.00g root-pool2                                                  19.62                                  
  root-pool2                                                 VolGroup2 twi-aotz--   20.00g                                                             19.62  16.41                           
  swap                                                       VolGroup2 -wi-a-----    4.00g                                                                                                    
  vm-pool2                                                   VolGroup2 twi-aotz--    6.52t                                                             2.98   2.89                            
# with some of the VMs located in vm-pool2

The idea is that when SSD1 fails, I boot from SSD2 (not plugged in yet), and when both SSD1 and SSD2 fail, I boot from SSD3. I then restore the backups that were on the failed disk(s) onto the current SSD, so I can continue with minimal downtime.
What also works: booted from Dom0-0 (SSD1), I can create VMs on vm-pool2 and run them without any problems.
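The restore step of that plan could be sketched as follows. This is a hedged sketch: the backup path and the name of the qube holding the USB disk (sys-usb) are illustrative, not taken from my actual setup:

```shell
# Guard: this is only meaningful inside a Qubes dom0.
command -v qvm-backup-restore >/dev/null 2>&1 || { echo "not a Qubes dom0"; exit 0; }
# Restore from a backup file stored inside the qube that has the external
# USB disk attached (-d names the qube holding the backup):
qvm-backup-restore -d sys-usb /mnt/backups/qubes-backup.bin
```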
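For reference, attaching the second VG's thin pool to a Qubes installation and creating a qube there looks roughly like this. The Qubes pool name "pool2" and the label are assumptions; VolGroup2/vm-pool2 are the names from the lvs output above:

```shell
# Guard so this does nothing outside a Qubes dom0:
command -v qvm-pool >/dev/null 2>&1 || { echo "not a Qubes dom0"; exit 0; }
# Register the existing LVM thin pool on the second SSD as a Qubes storage pool:
qvm-pool add pool2 lvm_thin -o volume_group=VolGroup2,thin_pool=vm-pool2
# Create a qube whose volumes land in that pool:
qvm-create -P pool2 --class StandaloneVM --label red standalone-testqube
```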

What does not work:
The VMs that already live on vm-pool2, created while booted into Dom2-0, do not run when booted into Dom0-0.
And vice versa: VMs created on vm-pool2 while booted into Dom0-0 do not work when booted into Dom2-0.

Neither Qube Manager nor the start menu lists the qubes created from the other dom0.
However, if I run sudo lvs booted from either dom0, I can see all the VMs created from both.
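That gap can be made visible directly. A sketch (VolGroup2 is the VG name from the lvs output above; the temp-file names are arbitrary): it compares the qubes registered with the running dom0's qubesd against the thin volumes actually present in LVM. A volume that lvs shows but qvm-ls does not exists on disk yet is unknown to this installation, so Qube Manager cannot show it:

```shell
# Guard so this does nothing outside a Qubes dom0:
command -v qvm-ls >/dev/null 2>&1 || { echo "not a Qubes dom0"; exit 0; }
# Qubes registered with this installation's qubesd:
qvm-ls --raw-list | sort > /tmp/registered
# Private volumes actually present in the shared thin pool, reduced to qube names:
sudo lvs --noheadings -o lv_name VolGroup2 \
    | sed -n 's/^ *vm-\(.*\)-private$/\1/p' | sort > /tmp/on-disk
# Names present in the pool but unknown to this dom0:
comm -13 /tmp/registered /tmp/on-disk
```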

vm-pool2 has:

  vm-standalone-SSD2-private			# created from Dom2-0
  vm-standalone-testqube-private		# created from Dom0-0
  vm-test-private						# created from Dom2-0
  vm-z3-sys-net-fw-usb-dvm-private		# created from Dom2-0
  vm-z3-templ-sys-net-fw-usb-private	# created from Dom2-0

The idea is that vm-pool2 has z3-sys-usb (DispVM), z3-sys-net-fw-usb-dvm and z3-templ-sys-net-fw-usb, so that I will be able to mount an external USB HDD with backups. (I added the 'z3-' prefix to make sure the qubes in pool0, pool1 and pool2 would not share the same names.)
Booted from Dom2-0, Qube Manager/start menu can see standalone-SSD2, test, z3-sys-usb, z3-sys-net-fw-usb-dvm and z3-templ-sys-net-fw-usb,
but not standalone-testqube.

Booted from Dom0-0, Qube Manager/start menu can see standalone-testqube,
but not standalone-SSD2, test, z3-sys-usb, z3-sys-net-fw-usb-dvm or z3-templ-sys-net-fw-usb.

Booted from Dom0-0:

[user@dom0 ~]$ qvm-run test xterm &
[2] 13141
[user@dom0 ~]$ usage: qvm-run [--verbose] [--quiet] [--help] [--user USER] [--autostart] [--no-autostart] [--pass-io] [--localcmd COMMAND] [--gui] [--no-gui] [--colour-output COLOUR] [--colour-stderr COLOUR] [--no-colour-output] [--no-colour-stderr] [--filter-escape-chars] [--no-filter-escape-chars] [--service] [--no-shell] [--dispvm [BASE_APPVM]] [--all] [--exclude EXCLUDE] [VMNAME] COMMAND [ARG [ARG ...]]
qvm-run: error: no such domain: 'test'
[2]+  Exit 2                  qvm-run test xterm
[user@dom0 ~]$ qvm-run standalone-testqube xterm &
[2] 13153
[user@dom0 ~]$ Running 'xterm' on standalone-testqube
[2]+  Done                    qvm-run standalone-testqube xterm
[user@dom0 ~]$

Booted from Dom2-0:

[user@dom0 ~]$ qvm-run test xterm
Running 'xterm' on test

[user@dom0 ~]$ qvm-run standalone-testqube xterm
usage: qvm-run [--verbose] [--quiet] [--help] [--user USER] [--autostart] [--no-autostart] [--pass-io] [--localcmd COMMAND] [--gui] [--no-gui] [--colour-output COLOUR] [--colour-stderr COLOUR] [--no-colour-output] [--no-colour-stderr] [--filter-escape-chars] [--no-filter-escape-chars] [--service] [--no-shell] [--dispvm [BASE_APPVM]] [--all] [--exclude EXCLUDE] [VMNAME] COMMAND [ARG [ARG ...]]
qvm-run: error: no such domain: 'standalone-testqube'
[user@dom0 ~]$

I like the idea of being able to hide some VMs from the start menu, but not all of them.

How do I make sure all qubes on pool2 are accessible and usable when booted from any of the dom0s (Dom0-0, Dom2-0, and later also Dom1-0)?


My opinion is that you are overthinking this. The time put into achieving your goal (both setting it up and finding solutions) by far exceeds the time needed to restore a backup to a secondary SSD when SSD1 fails. If I were you, I'd use this very moment and time to fully enjoy Qubes in action as a daily driver.
But others will have different opinions, for sure.