Critical Boot Issue Please Help

This issue is critical as one of my backups is corrupted, and the other is lost.

I have been using Qubes for a little while now, and am dealing with a critical issue. After entering my FDE login info, the boot progress bar loads very slowly before dropping to a dracut emergency prompt and displaying:

Warning: dracut-initqueue timeout - starting timeout scripts

Warning: dracut-initqueue timeout - starting timeout scripts

Warning: dracut-initqueue timeout - starting timeout scripts

Warning: Could not boot.

Warning: /dev/mapper/qubes_dom0-root does not exist

Warning: /dev/qubes_dom0/root does not exist

I have fixed this issue once in the past with the following steps:

  1. At the dracut emergency prompt, I ran:

lvm

vgscan

vgchange -ay

This informed me that there was an issue with activating qubes_dom0/pool00.

At this point, I tried:

lvconvert --repair qubes_dom0/pool00

I received the following error:

Read-only locking type set. Write locks are prohibited.

Can’t get lock for qubes_dom0

cannot process volume group qubes_dom0

I exited back to the dracut prompt, and temporarily edited the /etc/lvm/lvm.conf file.

I found the following line:

locking_type=4

and changed it to:

locking_type=1
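(If no editor is available in the dracut shell, the same change can be made non-interactively. A minimal sketch, assuming sed is present in the initramfs and the line is written exactly as above; some lvm.conf files use spaces around the =, in which case adjust the pattern:

sed -i 's/locking_type=4/locking_type=1/' /etc/lvm/lvm.conf

LVM tools also accept a one-off --config override, e.g. lvm lvconvert --config 'global { locking_type = 1 }' --repair qubes_dom0/pool00, which avoids touching the file at all, though I have not verified that in the dracut environment.)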

  2. I then retried the lvconvert command:

lvm

lvconvert --repair qubes_dom0/pool00

It worked! I received a couple of warnings.

I then re-ran the following:

vgscan

vgchange -ay
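Before switching the locking type back, it is worth confirming that the pool actually activated. A quick read-only check (lvs is standard LVM; the attr column shows an "a" when a volume is active):

lvm lvs -a qubes_dom0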

I then exited back to the dracut prompt, edited the /etc/lvm/lvm.conf file again, and reverted the locking_type option to its original value, changing:

locking_type=1

back to:

locking_type=4

I then rebooted.

QubesOS came up.

For this second occurrence of the issue, I attempted the repair method above, but ran into problems at step 2. The errors displayed:

lvm> lvconvert --repair qubes_dom0/pool00

Using default stripesize 64.00 KiB.

WARNING: Sum of all thin volume sizes (1.10 TiB) exceeds the size of thin pools and the size of whole volume group (930.51 GiB)!

For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.

Transaction id 4764 from pool “qubes_dom0/pool00” does not match repaired transaction id 4763 from /dev/mapper/qubes_dom0-lvol0_pmspare.

WARNING: recovery of pools without pool metadata spare LV is not automated.

WARNING: If everything works, remove qubes_dom0/pool00_meta1 volume.

WARNING: Use pvmove command to move qubes_dom0/pool00_tmeta on the best fitting PV.

lvm>
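If it helps anyone advising: my (possibly wrong) understanding is that lvconvert --repair writes repaired metadata via the pool metadata spare LV (lvol0_pmspare) and then swaps it in, which seems to be what these messages are about. The following should be read-only and safe to run from the emergency shell; the first saves a text copy of the volume group metadata (the filename is arbitrary), and the second shows the transaction ids the error mentions (transaction_id is a standard lvs reporting field for thin pools):

lvm vgcfgbackup -f /tmp/qubes_dom0.vg qubes_dom0

lvm lvs -a -o lv_name,lv_attr,lv_size,transaction_id qubes_dom0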

(Side note… after completing the repair method the first time this error occurred, a black icon/connection appeared in sys-usb and has been present in every session since. The connection option was named something like: “qubes_dom0/pool00_metadata”)

I have already looked through plenty of GitHub discussions and have not found anything pertaining to the volume error I am experiencing.

I have limited knowledge of and experience with Linux; however, I can’t imagine I have 1.10 TiB worth of VMs/data in this installation of Qubes. I think there’s something up with the metadata pool that got exposed as a USB option, but I could easily be off.
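From what I have read since, the 1.10 TiB warning by itself may be normal: thin pools are over-provisioned by design, and Qubes keeps snapshot volumes, so the sum of all virtual sizes can legitimately exceed the physical disk. Actual usage should be visible with something like the following (data_percent is a standard lvs field):

lvm lvs -o lv_name,lv_size,data_percent qubes_dom0/pool00

So I suspect the real problem is the transaction id mismatch, not the size warning.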

Please help me out if you can; I can’t lose the progress I have made, and I am nervous about messing around in the emergency prompt.

If this has happened before, I would be suspicious of a failing disk.

I can’t help with the repair, but you may want to try recovering your files using the process outlined here:

Unfortunately this will not help me.

What volumes related to pool00 do you see in lvm lvs -a? There should be pool00_tmeta, pool00_tdata, and possibly lvol0_pmspare. If there is an extra one like pool00_meta1, especially a non-hidden one (without square brackets), remove it before attempting the repair.
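Something along these lines (just a sketch; double-check the exact name in the lvs output first, and note that lvremove is destructive, so be sure to pick the leftover pool00_meta1 and not pool00 itself):

lvm lvs -a qubes_dom0

lvm lvremove qubes_dom0/pool00_meta1

Then retry lvconvert --repair qubes_dom0/pool00.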

I had a series of events which lead to the same error.

/dev/mapper/qubes_dom0-root does not exist

Stuck at the #dracut prompt and unable to boot into Qubes

I tried mounting the disk using the link below; that failed with errors.

In the midst of mounting (via the above) I saw the names of all my VMs until…
“Check of pool qubes_dom0/pool100 failed (status:1) Manual repair required!”

I am a long-time BTC donor to Qubes, but I am stuck. My last backup was 3 months ago, so I am facing some serious losses. I need to know the risks of typing in all the commands that DDev1 did.
Getting this disk back is important.

Where can I post the details of my particular crash or get some one on one to save the disk?
New to asking for help after 5+ years of flawless use of Qubes-OS - command-line illiterate here.

Thanks in advance.

Hi DDev1

I was trying to repeat the steps you outline in order to get back into Qubes.

You wrote:

Read-only locking type set. Write locks are prohibited.
Can’t get lock for qubes_dom0
cannot process volume group qubes_dom0
I exited back to the dracut prompt, and temporarily edited the /etc/lvm/lvm.conf file.

I got the same “Can’t get lock” error and would like to edit the file you mentioned, but I don’t have nano, vim, or any other editor while stuck at the /#dracut prompt. Could you lay out how you edited the lvm.conf file?

I would also be curious if you ever got back in and what the problem ended up being.

Glad I ran into your post.
Thanks