This issue is critical, as one of my backups is corrupted and the other is lost.
I have been using Qubes for a little while now and am dealing with a critical issue. After entering my FDE passphrase, the boot progress bar loads very slowly before dropping to a dracut emergency prompt and displaying:
Warning: dracut-initqueue timeout - starting timeout scripts
Warning: dracut-initqueue timeout - starting timeout scripts
Warning: dracut-initqueue timeout - starting timeout scripts
Warning: Could not boot.
Warning: /dev/mapper/qubes_dom0-root does not exist
Warning: /dev/qubes_dom0/root does not exist
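For what it's worth, I believe the missing device can be confirmed from the emergency shell with read-only checks like these (a listing of /dev/mapper plus the LVM scans), though I'm going from memory on the exact prompt environment:
ls /dev/mapper
lvm pvscan
lvm lvscan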
I have fixed this issue once in the past with the following steps:
- At the Dracut emergency prompt, I ran:
lvm
vgscan
vgchange -ay
This informed me that there was an issue with activating qubes_dom0/pool00.
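For context, I believe the state of the individual volumes can also be listed read-only from the same lvm shell with something like the following, though I'm not certain every report column is available in the dracut build:
lvs -a -o lv_name,lv_attr,lv_size qubes_dom0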
At this point, I tried:
lvconvert --repair qubes_dom0/pool00
I received the following error:
Read-only locking type set. Write locks are prohibited.
Can't get lock for qubes_dom0
cannot process volume group qubes_dom0
I exited back to the dracut prompt, and temporarily edited the /etc/lvm/lvm.conf file.
I found the following line:
locking_type=4
and changed it to:
locking_type=1
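I have since read that the same override can apparently be passed per-command instead of editing the file, roughly like the line below, but I stuck with editing lvm.conf and have not tested this myself:
lvm lvconvert --config 'global {locking_type = 1}' --repair qubes_dom0/pool00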
- I then retried the lvconvert command:
lvm
lvconvert --repair qubes_dom0/pool00
It worked, though I received a couple of warnings.
I then re-ran the following:
vgscan
vgchange -ay
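Before rebooting, I believe a read-only listing like this from the lvm shell should show an "a" (active) in the fifth character of the attribute column for root and pool00, if I am reading the lvs documentation correctly:
lvs -a -o lv_name,lv_attr qubes_dom0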
I then exited back to the dracut prompt, edited the /etc/lvm/lvm.conf file again, and changed the locking_type option back to its original value, from:
locking_type=1
back to:
locking_type=4
I then rebooted.
QubesOS came up.
For this second occurrence of the issue, I attempted the previously used repair method, but ran into trouble at step 2 (the lvconvert repair); the errors displayed:
lvm> lvconvert --repair qubes_dom0/pool00
Using default stripesize 64.00 KiB.
WARNING: Sum of all thin volume sizes (1.10 TiB) exceeds the size of thin pools and the size of whole volume group (930.51 GiB)!
For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Transaction id 4764 from pool "qubes_dom0/pool00" does not match repaired transaction id 4763 from /dev/mapper/qubes_dom0-lvol0_pmspare.
WARNING: recovery of pools without pool metadata spare LV is not automated.
WARNING: If everything works, remove qubes_dom0/pool00_meta1 volume.
WARNING: Use pvmove command to move qubes_dom0/pool00_tmeta on the best fitting PV.
lvm>
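In case the exact on-disk state helps with diagnosing this, I believe the transaction IDs and the leftover metadata LVs can be listed read-only from the lvm shell with something like the following (assuming the transaction_id report column exists in this LVM version):
lvs -a -o lv_name,lv_attr,lv_size,transaction_id qubes_dom0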
(Side note: after completing the repair method the first time the error occurred, a black icon/connection appeared in sys-usb and has been present in every session since. The connection option was named something like "qubes_dom0/pool00_metadata".)
I have already looked through plenty of GitHub discussions and have not found anything pertaining to the volume error I am experiencing.
I have limited knowledge and experience with Linux, but I can't imagine I have 1.10 TiB worth of VMs/data in this installation of Qubes. I think there's something up with the metadata pool that got split off into a USB option, but I could easily be off.
Please help me out if you can. I can't afford to lose the progress I have made, and I'm nervous about messing around in the emergency prompt.