Qube restored from backup fails to start (Connection to the VM failed)

Message given in the pop-up:

domain info has failed to start: qrexec-daemon startup failed: Connection to the VM failed

Then the VM "info" halts.

Based on a suggestion from here: Cannot Start VMs · Issue #2882 · QubesOS/qubes-issues · GitHub

I checked /var/log/xen/console/guest-info.log, which ends with:

Waiting for /dev/xvda* devices...
Qubes: Doing R/W setup for TemplateVM...
[    2.164669] random: sfdisk: uninitialized urandom read (4 bytes read)
[    2.171661]  xvdc: xvdc1 xvdc3
[    2.175723] random: mkswap: uninitialized urandom read (16 bytes read)
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=68eff9f7-277d-4d04-a169-e1d44465a9c2
Qubes: done.
mount: wrong fs type, bad option, bad superblock on /dev/xvda3,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
Waiting for /dev/xvdd device...
mount: /dev/xvdd is write-protected, mounting read-only
[    2.238160] EXT4-fs (xvdd): mounting ext3 file system using the ext4 subsystem
[    2.251747] EXT4-fs (xvdd): mounted filesystem with ordered data mode. Opts: (null)
mount: mount point /sysroot/lib/modules does not exist
mount: /sysroot not mounted or bad option

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
switch_root: failed to mount moving /sysroot to /: Invalid argument
switch_root: failed. Sorry.
[    2.287101] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
[    2.287181] CPU: 0 PID: 1 Comm: switch_root Not tainted 5.4.98-1.fc25.qubes.x86_64 #1
[    2.287244] Call Trace:
[    2.287276]  dump_stack+0x66/0x81
[    2.287308]  panic+0x109/0x2eb
[    2.287341]  do_exit+0xa2b/0xc30
[    2.287372]  ? __do_page_fault+0x2e6/0x520
[    2.287420]  do_group_exit+0x3a/0xa0
[    2.287474]  __x64_sys_exit_group+0x14/0x20
[    2.287508]  do_syscall_64+0x5b/0x1c0
[    2.287539]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[    2.287577] RIP: 0033:0x72140a87cbd8
[    2.287615] Code: 00 00 be 3c 00 00 00 eb 19 66 0f 1f 84 00 00 00 00 00 48 89 d7 89 f0 0f 05 48 3d 00 f0 ff ff 77 21 f4 48 89 d7 44 89 c0 0f 05 <48> 3d 00 f0 ff ff 76 e0 f7 d8 64 41 89 01 eb d8 0f 1f 84 00 00 00
[    2.287736] RSP: 002b:00007ffc03498e68 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[    2.287791] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 000072140a87cbd8
[    2.287843] RDX: 0000000000000001 RSI: 000000000000003c RDI: 0000000000000001
[    2.287895] RBP: 000072140ab6c860 R08: 00000000000000e7 R09: ffffffffffffff98
[    2.287951] R10: 0000000000000003 R11: 0000000000000246 R12: 000072140ab6c860
[    2.288006] R13: 000072140ab71c00 R14: 0000000000000000 R15: 0000000000000000
[    2.288265] Kernel Offset: disabled

Unfortunately, this seems to be my only backup of this qube. It is also the only qube from this backup that I've had trouble with so far. I am restoring my system onto a new install from the latest 4.0.4 installer.

Is this qube based on a template (PVM/HVM)?

No, it's a StandaloneVM.

You are using LVM, right? Could you check whether your qube was actually restored? Try "sudo lvs" in dom0 and check the Data% column for your VM.
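For example, something like this in dom0 (a sketch; qubes_dom0 is the default volume group, and the qube name here is just illustrative):

sudo lvs qubes_dom0 | grep vm-info
# A Data% close to zero would mean the thin volume exists at its full
# nominal size, but the restore wrote little or nothing into it.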

Maybe the restore failed but the tool reported success, like in https://github.com/QubesOS/qubes-issues/issues/6236

You can also check the size reported by Qube Manager after reopening it.

sudo lvs → shows 15.67g for vm-info-private and 21.72g for vm-info-root

So it seems the data is there, if I'm understanding that correctly.

The errors point to some problem mounting the partition:

mount: wrong fs type, bad option, bad superblock on /dev/xvda3,
missing codepage or helper program, or other error

Try mounting the private image in another qube (How to mount LVM images | Qubes OS) so you can at least recover your data.
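Roughly, following that guide from dom0 (a sketch; the LV path, qube name, and loop/xvdi device names are illustrative):

dev=$(sudo losetup -f --show /dev/qubes_dom0/vm-info-private)  # prints e.g. /dev/loop0
qvm-block attach some-qube dom0:${dev##*/}
# then, inside some-qube:
sudo mount /dev/xvdi /mnt
# and when finished, back in dom0:
qvm-block detach some-qube dom0:${dev##*/}
sudo losetup -d $dev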


Trying to mount it in a dispVM gives:

mount: /media: wrong fs type, bad option, bad superblock on /dev/xvdi, missing codepage or helper program, or other error.

Seems like the same thing as above?

I tried both the 'private' and the 'root' image using those instructions, and both gave me that error when I tried to mount them.

Edit: I tried this with another VM that I can boot and got the same error, so I'm not sure what I'm doing wrong.

I suppose your image's filesystem is corrupted. Since there is some data there, maybe you can try an ext4 repair tool, e.g. "fsck.ext4 /dev/xvdiX". Check whether, after attaching the image, you get partitions ("xvdi1, xvdi2, xvdi3…").
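Something along these lines, inside the qube where the image is attached (device names are examples; use -n first so the initial pass changes nothing):

lsblk /dev/xvdi               # do partitions xvdi1, xvdi2, xvdi3... show up?
sudo fsck.ext4 -n /dev/xvdi3  # dry run: report problems without fixing anything
sudo fsck.ext4 /dev/xvdi3     # only afterwards, let it attempt repairs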

Maybe you can try restoring from the backup again and check the logs. Be sure you have enough free space before starting the restoration.
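For the free-space check, in dom0 (pool00 is the default thin pool name, so adjust if yours differs):

sudo lvs qubes_dom0/pool00  # the Data% column shows how full the thin pool already is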

Seems like it is corrupt. I used e2fsck on both the private image and the main/third partition of the root image. Both went through a long series of fixes with e2fsck. Then, when I mounted them, they were both empty.

@Donoban: Thanks for your tips. They definitely helped me investigate this further and determine that the qube is lost. I did try restoring one more time but got the same results. The plus side is that no irreplaceable data was lost; the down side is that it'll take a bit of work to recreate the qube.

I am curious about one thing. When I do:

sudo lvs

I get vm-qubename-private and vm-qubename-root, but I also get a duplicate of each of those with a series of numbers and '-back', like vm-qubename-private-1615521528-back and vm-qubename-root-1615521528-back, and they are the same sizes respectively. Is that the same data stored twice, or just a different reference to the same data?

So you get a successful result but an unmountable image. I would post an issue at Issues · QubesOS/qubes-issues · GitHub; this is a dangerous problem.

I guess the data should be recoverable. Maybe this method works: Emergency backup recovery (v4) | Qubes OS. It would be strange for it to restore many gigabytes of unreadable data without complaint.

It is a snapshot; it only stores differences. Each time you start a domain, a new snapshot is created, up to a maximum of 'revisions_to_keep' in its Qubes pool. (I think the default is three.)
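You can inspect (and change) this per volume with qvm-volume in dom0; the qube name here is illustrative:

qvm-volume info qubename:private    # shows revisions_to_keep and the available revisions
qvm-volume config qubename:private revisions_to_keep 2  # example: keep two old snapshots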

I could post a bug report there; however, here is the order of what happened.

  1. I backed up most of my qubes before trying to get a RAID working.
  2. I inadvertently messed up my install while working on the RAID setup. The system would usually boot, but information was clearly missing once I got in there, as lots of things were not working.
  3. I think this is when I backed up the qubes in question.
  4. I restored the qubes from step 1 with no problem, and restored the qubes from step 3 with no noticeable problems except for this one qube.

So I wonder if the problem was caused by me, and this just happened to be the one qube that got corrupted in the process? That makes me hesitant to report it as a bug. But I will try the Emergency Backup Recovery; that is something I tried in the past without success, so I will be interested to try it again.


@Donoban: I tried the emergency restore. Everything went fine until I tried to mount private.img; same result as trying to mount the private image restored by the Qubes backup restore tool: "wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error."
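For anyone following along, the checks involved here look roughly like this (a sketch; the loop device number will vary, and the 32768 backup-superblock offset assumes a 4k-block ext4 filesystem):

sudo losetup -f --show -r private.img  # read-only loop device, prints e.g. /dev/loop0
sudo blkid /dev/loop0                  # does it still report an ext4 signature?
sudo dumpe2fs -h /dev/loop0            # dump the primary superblock, if one is found
sudo e2fsck -b 32768 /dev/loop0        # fall back to a backup superblock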

One more piece of info on this.

I discovered that another qube I had restored from that same backup is also not working. I thought it was, because it booted and launched apps, but now I realize none of the data is there. This second one is an AppVM instead of a standalone, hence the difference in behavior.

I made the same attempts to access the private image, and it says it can't be mounted. But it shows as though there is over 100 GB of data in the image.

I have no idea. Could the 100 GB be real data, or is that totally crazy?

Yes, the sizes reflect the sizes of the original qubes, so I was hopeful the real data might be in there. I'm close to having rebuilt these two qubes from scratch now, though.