Backup restore gone wrong - TemplateVM kernel panics - empty LV

Hi,

I backed up my qubes (AppVMs and TemplateVMs) from my machine running 4.2 and restored them on the machine after installing 4.3, as the migration article in the docs describes.

The restore appeared to go well, as the UI reported “Success”.

Booting the restored StandaloneVMs worked with no issues.

Next, I tried to boot my TemplateVMs, and they kernel panic.
The full log is below.

/var/log/xen/console/guest-debian13-minimal.log
[2025-12-18 16:27:37] [    0.714624] Run /init as init process
[2025-12-18 16:27:37] Qubes initramfs script here:
[2025-12-18 16:27:37] [    0.719735] Invalid max_queues (4), will use default max: 1.
[2025-12-18 16:27:37] [    0.725005] blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[2025-12-18 16:27:37] [    0.727566] blkfront: xvdb: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[2025-12-18 16:27:37] [    0.729467] blkfront: xvdc: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[2025-12-18 16:27:37] [    0.733248] blkfront: xvdd: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[2025-12-18 16:27:37] Waiting for /dev/xvda* devices...
[2025-12-18 16:27:37] gptfix: Invalid MBR signature: expected 0x55 0xAA, got 0x00 0x00
[2025-12-18 16:27:37] Qubes: Doing R/W setup for TemplateVM...
[2025-12-18 16:27:38] [    1.117078]  xvdc: xvdc1 xvdc3
[2025-12-18 16:27:38] Setting up swapspace version 1, size = 1073737728 bytes
[2025-12-18 16:27:38] UUID=f201f5b8-c1d1-4063-a36a-2564240590a8
[2025-12-18 16:27:38] Qubes: done.
[2025-12-18 16:27:38] mount: mounting /dev/mapper/dmroot on /sysroot failed: Invalid argument
[2025-12-18 16:27:38] Waiting for /dev/xvdd device...
[2025-12-18 16:27:38] [    1.126710] EXT4-fs (xvdd): mounting ext3 file system using the ext4 subsystem
[2025-12-18 16:27:38] [    1.128216] EXT4-fs (xvdd): mounted filesystem 307d2357-6997-479d-96ce-e59c9ab99190 ro with ordered data mode. Quota mode: none.
[2025-12-18 16:27:38] mount: mounting none on /sysroot/lib/modules failed: No such file or directory
[2025-12-18 16:27:38] [    1.139335] EXT4-fs (xvdd): unmounting filesystem 307d2357-6997-479d-96ce-e59c9ab99190.
[2025-12-18 16:27:38] mount: can't read '/proc/mounts': No such file or directory
[2025-12-18 16:27:38] BusyBox v1.36.1 (2024-07-17 00:00:00 UTC) multi-call binary.
[2025-12-18 16:27:38] 
[2025-12-18 16:27:38] Usage: switch_root [-c CONSOLE_DEV] NEW_ROOT NEW_INIT [ARGS]
[2025-12-18 16:27:38] 
[2025-12-18 16:27:38] Free initramfs and switch to another root fs:
[2025-12-18 16:27:38] chroot to NEW_ROOT, delete all in /, move NEW_ROOT to /,
[2025-12-18 16:27:38] execute NEW_INIT. PID must be 1. NEW_ROOT must be a mountpoint.
[2025-12-18 16:27:38] 
[2025-12-18 16:27:38] 	-c DEV	Reopen stdio to DEV after switch
[2025-12-18 16:27:38] [    1.146093] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
[2025-12-18 16:27:38] [    1.146116] CPU: 0 UID: 0 PID: 1 Comm: switch_root Not tainted 6.12.59-1.qubes.fc41.x86_64 #1
[2025-12-18 16:27:38] [    1.146141] Call Trace:
[2025-12-18 16:27:38] [    1.146151]  <TASK>
[2025-12-18 16:27:38] [    1.146160]  dump_stack_lvl+0x5d/0x80
[2025-12-18 16:27:38] [    1.146174]  panic+0x157/0x313
[2025-12-18 16:27:38] [    1.146189]  do_exit.cold+0x15/0x15
[2025-12-18 16:27:38] [    1.146202]  do_group_exit+0x30/0x80
[2025-12-18 16:27:38] [    1.146214]  __x64_sys_exit_group+0x18/0x20
[2025-12-18 16:27:38] [    1.146226]  x64_sys_call+0x14b4/0x14c0
[2025-12-18 16:27:38] [    1.146239]  do_syscall_64+0x82/0x160
[2025-12-18 16:27:38] [    1.146252]  ? arch_exit_to_user_mode_prepare.isra.0+0x7d/0x90
[2025-12-18 16:27:38] [    1.146271]  ? syscall_exit_to_user_mode+0x37/0x1d0
[2025-12-18 16:27:38] [    1.146286]  ? do_syscall_64+0x8e/0x160
[2025-12-18 16:27:38] [    1.146298]  ? file_tty_write.isra.0+0x8e/0xa0
[2025-12-18 16:27:38] [    1.146314]  ^C
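The `gptfix: Invalid MBR signature` line in the log already hints that sector 0 of the root volume is all zeros. A minimal sketch of the same check, run against a placeholder image file (`disk.img` is an assumption for illustration; in dom0 the device would be the template's root LV, read as root):

```shell
# Demo: create a zero-filled 512-byte "sector 0" stand-in.
# In dom0, DEV would instead be the template's root LV, e.g.
# /dev/qubes_dom0/vm-debian-13-minimal-vpn-root (read as root).
DEV=disk.img
dd if=/dev/zero of="$DEV" bs=512 count=1 2>/dev/null

# Read the two-byte MBR boot signature at offset 510 (should be 55 aa).
sig=$(dd if="$DEV" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
if [ "$sig" = "55aa" ]; then
    echo "MBR signature OK"
else
    echo "MBR signature missing (got: $sig)"
fi
```

On a properly restored volume the signature check should pass; an all-zero sector 0 matches the `gptfix` complaint above.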

Following another similar post (which had no fix), there was a suggestion to check the LVs.

  LV                                               VG         Attr       LSize   Pool      Origin                                           Data%  Meta%  Move Log Cpy%Sync Convert
  vm-debian-13-minimal-vpn-private                 qubes_dom0 Vwi-a-tz--   2.00g vm-pool   vm-debian-13-minimal-vpn-private-1766071658-back 0.00                                   
  vm-debian-13-minimal-vpn-private-1766071643-back qubes_dom0 Vwi-a-tz--   2.00g vm-pool                                                    0.00                                   
  vm-debian-13-minimal-vpn-private-1766071658-back qubes_dom0 Vwi-a-tz--   2.00g vm-pool   vm-debian-13-minimal-vpn-private-1766071643-back 0.00                                   
  vm-debian-13-minimal-vpn-root                    qubes_dom0 Vwi-a-tz--  10.00g vm-pool   vm-debian-13-minimal-vpn-root-1766071658-back    0.00                                   
  vm-debian-13-minimal-vpn-root-1766071643-back    qubes_dom0 Vwi-a-tz--  10.00g vm-pool                                                    0.00                                   
  vm-debian-13-minimal-vpn-root-1766071658-back    qubes_dom0 Vwi-a-tz--  10.00g vm-pool   vm-debian-13-minimal-vpn-root-1766071643-back    0.00  

As can be seen, the LVs of this TemplateVM don’t contain any data (Data% is 0.00)…
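A quick way to confirm that a volume really contains no data is to compare it byte-for-byte against zeros. This is a sketch using a placeholder image file (`lv.img` is an assumption for illustration; in dom0 the path would be the LV device node and the commands would need root):

```shell
# Demo: a 1 MiB zero-filled image standing in for the LV.
# In dom0, DEV would be e.g. /dev/qubes_dom0/vm-debian-13-minimal-vpn-root.
DEV=lv.img
dd if=/dev/zero of="$DEV" bs=1M count=1 2>/dev/null

# Compare the first $size bytes of DEV against /dev/zero.
# (For a real block device, get the size with: blockdev --getsize64 "$DEV")
size=$(stat -c%s "$DEV")
if cmp -s -n "$size" "$DEV" /dev/zero; then
    echo "volume is all zeros"
else
    echo "volume contains data"
fi
```

Note that for a thin LV, LVM's `Data%` column already reports how much of the volume has ever been written, so 0.00 on a volume that should hold a 10 GiB root filesystem is itself strong evidence the restore wrote nothing.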

How can I solve this? Are there any logs from the backup restore tool?

Thanks for the input!

At @fepitre’s suggestion, I tried to mount some of the “null” LVs in dom0, and I got this error:

pierre@dom0:~$ sudo mount /dev/qubes_dom0/vm-debian-13-minimal-vpn-root /mnt/
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/mapper/qubes_dom0-vm--debian--13--minimal--vpn--root, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.

I also tried running fsck on the LV, without success:

pierre@dom0:~$ sudo fsck -fy /dev/qubes_dom0/vm-debian-13-minimal-vpn-root 
fsck from util-linux 2.40.4
e2fsck 1.47.1 (20-May-2024)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/mapper/qubes_dom0-vm--debian--13--minimal--vpn--root

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.
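The “Bad magic number in super-block” error can be confirmed by hand, since the ext2/3/4 magic lives at a fixed offset on disk. A sketch against a placeholder file (`fs.img` is an assumption for illustration; on the real system you would read the LV device node as root):

```shell
# Demo image; in dom0, DEV would be the failing LV device node.
DEV=fs.img
dd if=/dev/zero of="$DEV" bs=1M count=2 2>/dev/null

# The ext2/3/4 superblock starts at byte offset 1024, and its magic
# field (0xEF53, stored little-endian as "53 ef") is 56 bytes in,
# i.e. at absolute offset 1080.
magic=$(dd if="$DEV" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
if [ "$magic" = "53ef" ]; then
    echo "ext superblock magic present"
else
    echo "no ext magic (got: $magic)"
fi
```

An all-zero result here is consistent with both the mount failure and the fsck output: there is simply no filesystem on the volume to repair.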

I deleted the faulty TemplateVMs and re-imported them from the backup I had. So far, this has worked for 2 TemplateVMs:

Do you want to proceed? [y/N] y
qubesadmin.backup: -> Restoring debian-13-minimal-vpn...
qubesadmin.backup: -> Restoring dom0...
qubesadmin.backup: Extracting data: 3.5 GiB to restore
qubesadmin.backup: Restoring home of user 'pierre' to 'home-restore-2025-12-19-091257' directory...
qubesadmin.backup: -> Done.
qubesadmin.backup: -> Please install updates for all the restored templates.

@marmarek it’s odd that the GUI Backup Restore tool reported the restore as successful while some LVs were not properly restored.

Maybe add a filesystem check on the restored volumes?

Can you check whether qvm-backup-restore --verbose gives you more info, especially if something goes wrong?

I will try this on a new machine (I was upgrading my daily driver and needed it back up ASAP).

Does the GUI tool write logs anywhere?

Unfortunately, no.