Upgrade to Qubes 4.2 - Clean Installation - Failed Restoration of VMs

Context: Trying to migrate:

  • from my old PC (Qubes R 4.1.2)
  • to my new PC (Qubes R 4.2.4 - Clean Installation)

Steps on my old PC:
I followed https://www.qubes-os.org/doc/upgrade/4.2/, meaning I:

  1. ran the updater

  2. installed the qubes-dist-upgrade tool (sudo qubes-dom0-update -y qubes-dist-upgrade)

  3. typed this command in a dom0 terminal: qubes-dist-upgrade --template-standalone-upgrade and got:

    [user@dom0 ~]$ sudo qubes-dist-upgrade --template-standalone-upgrade
    INFO: Please wait while running pre-checks...
    usage: qvm-check [--verbose] [--quiet] [--help] [--all] [--exclude EXCLUDE] [--running] [--paused] [--template] [--networked] [VMNAME [VMNAME ...]]
    qvm-check: error: no such domain: 'CUST0001'
    WARNING: /!\ MAKE SURE YOU HAVE MADE A BACKUP OF ALL YOUR VMs AND dom0 DATA /!\
    -> Launch upgrade process? [y/N] y
    ERROR: Cannot continue to STAGE 4, dom0 is not R4.2 yet. Any error happened in previous stages?
    [user@dom0 ~]$ 
    

    NB: I have no idea what ‘CUST0001’ is (I sketch a way to hunt for it after this list).

    I did nothing more here.

  4. backed up my VMs
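
Side note on the ‘CUST0001’ mystery from step 3: the only idea I have is to search dom0’s configuration for the name. A sketch of what I mean (the locations searched are my guesses, not a documented procedure):

    # Search dom0's VM store and qrexec policies for the unknown name
    sudo grep -ri "CUST0001" /var/lib/qubes/qubes.xml /etc/qubes/policy.d/

    # Double-check that no such domain exists
    qvm-ls | grep -i cust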

Steps on my new PC:
I:

  • installed Qubes R 4.2.4

  • hit the Restore button in the Qube Manager window and selected the VMs to be restored

  • got this log on the restoration window:

    • Names of the VMs to be restored:

      -> Restoring <VM-name-1>...
      -> Restoring <VM-name-2>...
      -> Restoring <VM-name-3>...
      ...
      
    • Then, error messages because the Ethernet card is not at pci:03_00.0 on the new PC, which is expected:

      Error attaching device pci:03_00.0 to <VM-on-old-PC-where-eth0-was-attached>: Invalid PCI device: 03_00.0
      Error attaching device pci:03_00.0 to <other-VM-on-old-PC-where-eth0-was-attached>: Invalid PCI device: 03_00.0
      
    • Then, the amount of data to be restored:

      Extracting data: 289.2 GiB to restore
      
    • Then, after a bit of time, the same sequence of error messages for many of the VMs being restored (see the note on “reached threshold” after this log excerpt):

      Failed to restore volume root (size 21474836480) of VM <VM-NAME-X>: Cannot create new thin volume, free space in thin pool qubes_dom0/vm-pool reached threshold.
      Error while processing '/home/user/QubesIncoming/backup#restore-wmmuzuqu/vm51/root.img.000': 
      ERROR: unable to extract files for /home/user/QubesIncoming/backup#restore-wmmuzuqu/vm51/root.img.000, tar output: 
      Failed to restore volume private (size 5368709120) of VM <VM-NAME-X>: Cannot create new thin volume, free space in thin pool qubes_dom0/vm-pool reached threshold.
      Error while processing '/home/user/QubesIncoming/backup#restore-wmmuzuqu/vm51/private.img.000': 
      ERROR: unable to extract files for /home/user/QubesIncoming/backup#restore-wmmuzuqu/vm51/private.img.000, tar output: 
      
    • Then, the closing:

      Restoring home of user 'user' to 'home-restore-2025-06-17-191920' directory...
      -> Done.
      -> Please install updates for all the restored templates.
      Finished successfully!
      

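For context on the “reached threshold” message (my understanding, not authoritative): in the default layout, every VM volume is an LVM thin volume allocated from the qubes_dom0/vm-pool thin pool, and LVM refuses to create new thin volumes once the pool’s fill level passes its configured threshold. A sketch for watching the pool while a restore runs (the 2-second interval is arbitrary):

    # Refresh the thin-pool fill levels (Data%/Meta% columns) every 2 seconds
    sudo watch -n 2 lvs qubes_dom0
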
Additional info:

  1. On my new PC, I dedicated 900 GiB to Qubes, which is more than on my old PC

  2. df -h on dom0, after the (unsuccessful) restoration:

    Filesystem                   Size  Used Avail Use% Mounted on
    devtmpfs                     4.0M     0  4.0M   0% /dev
    tmpfs                        2.0G   24K  2.0G   1% /dev/shm
    tmpfs                        779M  2.3M  777M   1% /run
    efivarfs                      64K   34K   25K  58% /sys/firmware/efi/efivars
    /dev/mapper/qubes_dom0-root  885G  6.8G  834G   1% /
    tmpfs                        2.0G   44K  2.0G   1% /tmp
    /dev/nvme1n1p2               974M  138M  769M  16% /boot
    /dev/nvme1n1p1               599M   32M  568M   6% /boot/efi
    tmpfs                        390M  112K  390M   1% /run/user/1000
    
  3. Afterwards, I installed Qubes R 4.2.4 again from scratch and tried to restore only 2 VMs (a VM and its template). I did not get any error message, and the restoration of both VMs looks OK at first sight.

  4. Afterwards, I went on and tried to restore 15 VMs at the same time, and got the error messages again.

    During that restoration, every time I looked, /dev/mapper/qubes_dom0-root’s use was always <= 5% (obviously, I was not constantly watching; see the df caveat sketched after this list).

  5. New PC is Qubes certified
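
A caveat on the df figures in points 2 and 4 above (my understanding, not authoritative): df only reports mounted filesystems, so /dev/mapper/qubes_dom0-root covers dom0’s root LV and says nothing about the thin pool that holds the VM volumes. The pool’s fill level has to come from LVM itself, for example:

    # Thin-pool usage is invisible to df; ask LVM for it directly
    sudo lvs -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0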

Questions:

  • Why all those error messages when I try to restore all of the VMs at the same time?
  • What should I do to be able to restore all of my VMs in one batch?

I started over with a new R4.2.4 installation on the new laptop.

Just after a brand new installation, I have on dom0:

[user@dom0 ~]$ sudo lvs | grep "twi"
  root-pool           qubes_dom0 twi-aotz-- 900.00g      2.30   1.48
  vm-pool             qubes_dom0 twi-aotz-- <89.81g      18.30  16.77
[user@dom0 ~]$ 

On my old laptop (with many VMs), I currently have:

[user@dom0 ~]$ sudo lvs | grep "twi"
  root-pool           qubes_dom0 twi-aotz--  49.77g      18.44  17.69 
  vm-pool             qubes_dom0 twi-aotz-- 664.62g      72.15  55.21                         
[user@dom0 ~]$ 

When I restore one VM at a time on the new laptop, I have no problem with the first ones. But after 10 VMs or so, I get the same error message as always: vm-pool reached threshold. Indeed, when I click on the hard drive icon in the system tray, I see that vm-pool is at >90%.
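
To see where the missing space went, I suppose one can ask LVM whether the volume group still has unallocated extents (the VFree column; qubes_dom0 is the default volume group name):

    # Total, allocated, and free space in the volume group (see VFree)
    sudo vgs qubes_dom0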

Questions:

  • Is it vm-pool that should contain all VMs? If so, how come vm-pool is so small after a new install on my new laptop?

  • During the install, I requested 900 GiB on /, so I understand where that value comes from. But where does “<89.81g” come from?

  • During the install, there is an “Advanced Configuration” section that proposes:

    • “Use existing LVM thin pool” instead of the default
    • “Create ‘vm-pool’ LVM thin pool”.

    Is this part of my problem? Should I go for an advanced non-default configuration? Looks odd.

  • What should I do to restore more than ten (or so) VMs on the new laptop? (A possible direction is sketched just below.)
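
In case it helps others, one direction I am considering, sketched under the assumption that 'sudo vgs' reports enough unallocated space in the volume group (growing a thin pool is supported, shrinking one is not; the +500G below is a placeholder, not a recommendation):

    # Grow vm-pool in place, using free volume-group space (VFree)
    sudo lvextend -L +500G qubes_dom0/vm-pool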

Thanks!


Very late reply:
The default install on 4.2.4 creates a small root-pool, with the remainder of the space given over to vm-pool. This is what you had on the old laptop.
I can't account for what you see on the new installation; this is not the default, so it must be the result of some options that you selected at install. Without knowing how the disk was partitioned before, and what options you selected, I can't help further.

But perhaps you have solved this for yourself.