Yesterday I started a larger download overnight; today I noticed the laptop was off because I forgot to plug in the charging cable. After booting the device I end up in a dracut emergency shell, similar to "Qubes_dom0-root does not exist".
`lvm lvscan` just shows no output. `ls /dev/mapper` only shows "control". /dev/nvme0n1 exists with 3 partitions.
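In case it helps anyone who lands here, this is roughly what I poked at from the dracut shell to see how far LVM gets. Only the `lvm` wrapper binary is available there, and the volume group name below (qubes_dom0) is just the Qubes default, so adjust if yours differs:

```bash
# From the dracut emergency shell; subcommands go through the "lvm" wrapper
lvm pvs                       # does LVM see a physical volume at all?
lvm vgs                       # is the qubes_dom0 volume group detected?
lvm lvs -a                    # list all LVs, including the hidden *_tmeta/*_tdata volumes
lvm vgchange -ay qubes_dom0   # try to activate the VG; errors here usually name the broken pool
ls /dev/mapper                # should show more than just "control" after activation
```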
Maybe the same issue as here:
As requested [here](https://github.com/QubesOS/qubes-issues/issues/5973#issuecomment-1806922191), I am opening a new issue for this bug.
### Qubes OS release
R4.1.2
### Affected component(s) or functionality
Entire system
### Brief summary
After months of using Qubes, the system is unable to start after disk decryption.
### Steps to reproduce
Unknown
Uneducated guess at reproducing the bug (based on related, previously reported issues): running out of disk space
### Expected behavior
Normal system startup, with the login screen appearing after disk decryption.
### Actual behavior
The following error screen appears after I enter the password to decrypt the disk:

Here is the generated [rdsosreport.txt](https://github.com/QubesOS/qubes-issues/files/13329223/rdsosreport.txt)
If I press Control-D to continue, this is what I get:

### Additional context
- My keyboard strangely stopped working out of nowhere in the session before the bug happened;
- Still in the session before, and on other recent occasions, the system showed a warning message stating that I was running out of space. I don't recall exactly, but at the time I assumed the space it referred to belonged to one of my VMs, since the warning only appeared after starting that VM, so I simply increased the space allocated to it in its settings;
- After this bug started happening, the decryption process takes much longer than before, only to end at that error screen. One thing I have certainly noticed is that the time to decrypt the disk has gradually grown longer since the fresh install of Qubes.
- I managed to recover the files that were in the VMs to an external drive by booting from a Linux Mint live USB image. After starting Mint, the drive on which Qubes is installed appeared as available. I just double-clicked it, entered the password to decrypt the disk, and voila, the VMs showed up.
### Solutions you've tried
After inspecting rdsosreport.txt (journalctl), I noticed a familiar “Manual repair required!” warning, something common in the other reported cases:
> [ 40.070355] dom0 dracut-initqueue[1111]: Check of pool qubes_dom0/root-pool failed (status:1). Manual repair required!
> [ 181.207297] dom0 dracut-initqueue[554]: Warning: dracut-initqueue timeout - starting timeout scripts
So I tried to follow the solutions suggested in these other cases.
In the maintenance/emergency shell, I tried running the command `lvm lvconvert --repair qubes_dom0/root-pool`, which resulted in the same "Manual repair required!" error:

Following the assumption that this problem was related to a lack of disk space, I deleted one of my VMs that had a large amount of space allocated, using the `lvremove` command. I was successful in removing the VM, but the problem persisted.
One of the helpful community members theorized that I had run out of space in dom0 (root-pool) and suggested that I free up space in it by extending the volume. So I tried to extend it, but I was unsuccessful.

I also misinterpreted one of his instructions and ended up trying to extend the vm-pool as well, even though it is unrelated to the root-pool. Still, I came across this rather questionable warning message that might be worth mentioning:

### Related, [non-duplicate](https://www.qubes-os.org/doc/reporting-bugs/#new-issues-should-not-be-duplicates-of-existing-issues) issues
https://github.com/QubesOS/qubes-issues/issues/5372
https://github.com/QubesOS/qubes-issues/issues/5973
https://www.reddit.com/r/Qubes/comments/fszsu7/qubes_suddenly_broken_warning_devqubes_dom0root/?rdt=61290
https://forum.qubes-os.org/t/qubes-dom0-root-does-not-exist/8600/23
https://forum.qubes-os.org/t/emergency-dracut-shell-qubes-not-working-for-no-reason/6717/3
Possible LVM corruption because there was not enough free space?
Doesn't Qubes freeze a VM before the underlying LVM runs out of space?
When I set up the download VM, I did not get a warning.
Anyway, if that was the reason, how do I fix it?
You can check the rdsosreport.txt generated by dracut when it fails and see if you have the same error:
Check of pool qubes_dom0/root-pool failed (status:1). Manual repair required!
Then I guess the issue shouldn't be specific to Qubes OS, and you can try to search for a way to fix your LVM that works for a general Linux system using LVM.
Try searching for "Check of pool" "Manual repair required!" in a search engine.
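For reference, on a generic LVM setup the thin-pool repair usually looks something like the sketch below. The names assume the default Qubes layout (VG qubes_dom0, pool root-pool), the pool has to be inactive, and `lvconvert --repair` needs some free space in the VG to build the new metadata, so treat this as a starting point rather than a guaranteed fix:

```bash
# From the dracut shell, via the "lvm" wrapper
lvm lvchange -an qubes_dom0/root-pool         # the thin pool must be inactive before repair
lvm lvconvert --repair qubes_dom0/root-pool   # rebuild the pool metadata onto a fresh LV
lvm lvs -a qubes_dom0                         # the damaged metadata is usually kept as root-pool_meta0
lvm vgchange -ay qubes_dom0                   # reactivate the VG and try booting again
```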
I don't understand why, but after a few more reboots the system suddenly boots up as normal.
vgdisplay in dom0 shows that VG qubes_dom0 has more than 80 GiB free
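To rule out the thin pools themselves having filled up (which the free space reported by vgdisplay would not show), I also checked their data and metadata usage from dom0 once it was up again; something like this, assuming the default pool names:

```bash
# In dom0: how full are the thin pools, independent of free space in the VG?
sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0
# Free/unallocated space in the volume group is a separate number:
sudo vgs qubes_dom0
```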
Maybe there is a hardware issue with your disk?
Check its SMART.
If I do
`sudo smartctl -t short /dev/nvme0`
and later try
`sudo smartctl -a /dev/nvme0`
I get an input/output error for sudo. I have never tested a system NVMe drive before; is that an expected error, or does it mean the disk is broken?
If I do
`sudo smartctl -a /dev/nvme0`
after the next boot, it shows "Aborted: Controller reset" - however, I don't know whether it was reset because of an error or because of the forced reboot.
Your disk name is /dev/nvme0n1, not /dev/nvme0.
If I run smartctl -a on nvme0, the self-test log is shown. If I do the same for nvme0n1, I get "Read self-test log failed".
If I run `sudo smartctl -t short /dev/nvme0n1` I also get "Read self-test log failed" instead of "Test started".
Yes, my bad, /dev/nvme0 seems to be the correct one. nvme0n1 is just the namespace block device of the NVMe drive, and nvme0 is the raw controller device that smartctl needs.
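Putting it together, the sequence I would expect to work is roughly the one below, run against the controller device. This assumes smartmontools 7.x and a drive that supports the (optional) NVMe self-test command; the short test runs in the background, so give it a minute or two before reading the results:

```bash
sudo smartctl -t short /dev/nvme0   # start the short self-test on the NVMe controller
sleep 120                           # the short test typically takes a minute or two
sudo smartctl -a /dev/nvme0         # SMART/health data plus the self-test log, if supported
sudo smartctl -H /dev/nvme0         # quick overall health verdict
```

If smartctl itself keeps returning input/output errors on the controller device, that alone would make me suspicious of the drive or its connection.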