Qubes doesn't boot after update :( please help!

Dear community,
I'm looking for a kind soul who can help me.

In short:

  • There was an update (I think it was for dom0, not sure)

  • I worked as usual, then I shut down the system (not sure if I suspended it instead)

  • Today it doesn’t boot :frowning:

  • I enter my disk password

  • After a long wait, this is what I get:

Warning: /dev/mapper/qubes_dom0-root does not exist
Warning: /dev/qubes_dom0/root does not exist

Generating “/run/initramfs/rdsosreport.txt”

Entering emergency mode. Exit the shell to continue.
Type “journalctl” to view system logs.
You might want to save “/run/initramfs/rdsosreport.txt” to a USB stick or /boot
after mounting them and attach it to a bug report.

Press ENTER for maintenance
(or press Control-D to continue):

— [after Control-D]

Warning: Not all disks have been found.
Warning: You might want to regenerate your initramfs.

This is what I tried:

(1)
$ lvm lvscan
I get the list of my VMs; everything is inactive except /dev/qubes_dom0/swap

(2)
I tried the recovery procedure using a live USB installation.
It says that the Linux partition was not found.
I tried fdisk -l from the console, and I do see the partitions (phew).

(3) I tried several suggestions from other posts, but it seems those refer to older versions of Qubes. I use v4.1.

I tried everything I could with my very limited knowledge of Linux systems, and I don't know what to do.

My last backup was 2 weeks ago, but I have new data I really need to recover.

My priority is to access my user data from several of those VMs.
My best-case scenario is to have my system up and running again…

Please help :frowning:

------[ UPDATE 1 ]-------
I recovered my data, it was easier than expected :sweat_smile:
I used a live Ubuntu USB to access the encrypted partition.

Now for the most difficult part: I need to understand how to solve that boot error…
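For anyone else landing here, this is roughly what I did from the live Ubuntu session. The partition and VM names below are just examples from my machine, so check yours with lsblk and lvs first, and mount read-only to be safe:

sudo apt install lvm2 thin-provisioning-tools     # only if activation complains about missing thin tools
sudo cryptsetup open /dev/nvme0n1p3 qubes-crypt   # unlock the LUKS partition (use your own partition)
sudo vgchange -a y qubes_dom0                     # activate the LVM volumes inside it
sudo lvs qubes_dom0                               # the VM data lives in the vm-<name>-private volumes
sudo mkdir -p /mnt/vmdata
sudo mount -o ro /dev/qubes_dom0/vm-work-private /mnt/vmdata
cp -a /mnt/vmdata/home/user/Documents ~/recovered/

At least on my system, the VM's home directory shows up under home/user inside the mounted private volume.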


Thanks, the data recovery part was easier than expected :sweat_smile:
I used a live Ubuntu USB to access the encrypted partition.
Thank you for your kind reply; it helped me understand how user data storage works.

Now I need to understand how I can recover the entire system.

Did you copy /run/initramfs/rdsosreport.txt somewhere?

… and did you see qubes_dom0/root when you ran lvm lvscan?

I just tried to introduce an error in my grub config, to see how Qubes reacted (and how much useful information is saved in rdsosreport.txt).

Can you check that your kernel line looks like:

module2 /vmlinuz... root=/dev/mapper/qubes_dom0-root ro rd.luks.uuid=luks... rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap ...

?
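If editing the grub menu is fiddly, I think you can also print the line the kernel was actually booted with from the dracut emergency prompt; it should show the same rd.lvm.lv= parameters:

cat /proc/cmdline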

  1. Yes, I copied /run/initramfs/rdsosreport.txt
  2. Yes, I see qubes_dom0/root. It is "inactive"
  3. I really would like to answer your question about the kernel, but I'm afraid I need a command to run, since my Linux knowledge is really superficial :sweat_smile:

I found this, maybe it could be useful?

I'm not sure about that -ay parameter in the vgchange command, since I have v4.1.1.

What do you think?

When you get the boot-menu, press e to edit the menu and scroll down to the line that starts with:

module2 /vmlinuz..

and check that it has rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap in the line. If so, press ctrl+x to boot, and when you get the prompt for the password, press escape to get a text console. Enter the disk password and check that it says (something like):

Finished Cryptography Setup for luks-....

Ok, I found rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap
Pressed ctrl+x and followed your instructions
I confirm I see “Finished Cryptography Setup for luks-…” in the console

Then after "Reached target Basic System" there is a long pause
And a long list of "Warning: dracut-initqueue timeout - starting timeout scripts"
And then the errors from my first post

In the prompt you get, can you try (from memory):

lvm lvscan
  • is qubes_dom0/root set as ACTIVE? If not, can you try:
lvm lvchange -a y /dev/qubes_dom0/root
lvm lvscan

to see if it's now set as ACTIVE? If it's active, try typing exit


qubes_dom0/root is inactive

I tried

lvm lvchange -a y /dev/qubes_dom0/root

I get

Check of pool qubes_dom0/root-pool failed (status:1). Manual repair required!

after

lvm lvscan

qubes_dom0/root is still inactive

The only ACTIVE item in that list is /dev/qubes_dom0/swap

Looks like a good text to ask Google about … :slight_smile:

There have been a few posts in this forum lately about running out of space or similar - resulting in Qubes not booting. I haven't paid enough attention to recommend any of them off the top of my head. :frowning:

Since you could boot the machine and recover your data, I assume you can boot a live-ISO again and use the tools from the live-ISO to try and fix qubes_dom0/root (?)
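If you try that, the rough sequence I'd expect from an Ubuntu/Debian live session is something like this (untested on my side, and the partition below is only an example):

sudo apt install lvm2 thin-provisioning-tools   # lvconvert --repair needs thin_check/thin_repair from this package
sudo cryptsetup open /dev/sda2 qubes-crypt      # unlock the LUKS partition (pick your real one)
sudo vgscan
sudo lvconvert --repair qubes_dom0/root-pool    # attempt the thin pool metadata repair
sudo vgchange -a y qubes_dom0                   # then check whether root activates
sudo lvscan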

The most similar post I found after some research is the one I posted before.
I tried again and now I see some results:

After

lvm
vgscan
vgchange -a y

I see every item ACTIVE (!!) except

qubes_dom0/root
qubes_dom0/root-pool

Now I tried:

lvconvert --repair qubes_dom0/root-pool

and I get

Repair of thin metadata volume of thin pool qubes_dom0/root-pool failed (status:-1). Manual repair required!

Then I tried the same on root

lvconvert --repair qubes_dom0/root

and I get

Command not permitted on LV qubes_dom0/root

(it seems I tried a stupid thing :sweat_smile: )

I also tried the recovery tools of the live USB… it says the Linux boot partition was not found :face_with_raised_eyebrow:

dang – I had missed your link to GitHub :-/

I think the

lvconvert --repair qubes_dom0/root-pool
lvconvert --repair qubes_dom0/root

is the right direction … but I don’t know how to make it work. :frowning:
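One thing that might be worth checking first - just a guess on my side, but a missing helper could explain that status:-1 - is whether the environment you run the repair from even has the thin tools that lvconvert shells out to. The manual procedure itself is described in the LVM thin provisioning man page:

which thin_check thin_repair   # provided by thin-provisioning-tools; lvconvert --repair depends on them
man 7 lvmthin                  # see the metadata check and repair section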

From the recovery shell, can you check for locking_type in /etc/lvm/*?

grep locking_ /etc/lvm/*

Already did that; the type is 1.
Unfortunately I don't get the "Read-only locking type set" error that post describes…

I don't know what it means when it says "Manual repair"…

DISCLAIMER: I'm guessing and trying random suggestions from the web - so do sanity checks before running!

What do you get with:

lvconvert --repair -v qubes_dom0/root

and

lvchange -a y -v /dev/qubes_dom0/root

and

lvs -a

(assuming you run a live-ISO)?

Searching the forum, I found this:

It looks like the same issue … and without a solution … :frowning:

Yeah, I saw that and other similar posts, but I cannot find a solution… :frowning:

I also tried your other suggestions, but I get the same error

Check of pool qubes_dom0/root-pool failed (status:1). Manual repair required!

Hey community, it seems there is no solution to this problem; I searched everywhere, and I'm not the only one with this scenario.

I gave up, took another disk and restored my backups.
I'll keep this HD around for a while in case anyone wants to help understand what happened…

I ran into this. I succeeded with a Debian 12 live CD and the instructions from How to mount QUBES LUKS Disks · GitHub to mount it.
With most other systems I got errors about thin provisioning not being available, and all volumes stayed inactive no matter what I tried. With the Debian system it just worked.
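My guess is the difference was simply whether the thin provisioning pieces were present in the live environment. If another live system complains, something along these lines should show or fix it (Debian/Ubuntu package names):

sudo modprobe dm_thin_pool                        # kernel module needed to activate thin pools
lsmod | grep dm_thin_pool                         # check it is loaded
sudo apt install lvm2 thin-provisioning-tools     # userspace tools lvm calls when activating thin volumes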