Hi!
I just wanted to point out that booting Qubes OS 4.1 still hangs even with today's updates applied to dom0. I still have to add "systemd.mask=lvm2-monitor" to the kernel command line.
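For the record, the workaround can be made persistent roughly like this (a sketch: the sample kernel options are made up, and the grub.cfg path shown is for legacy BIOS; EFI installs keep grub.cfg under /boot/efi/EFI/qubes/ instead):

```shell
# Sketch: append "systemd.mask=lvm2-monitor" to GRUB_CMDLINE_LINUX.
# Demonstrated on a sample file; the sample kernel options are invented.
printf 'GRUB_CMDLINE_LINUX="rd.lvm.lv=qubes_dom0/root"\n' > /tmp/grub-demo
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 systemd.mask=lvm2-monitor"/' /tmp/grub-demo
cat /tmp/grub-demo
# On the real system, run the same sed on /etc/default/grub and then:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg   (legacy BIOS path)
```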
Regards,
Ulrich
Having recently learned that I can start a debug shell on tty9 using the boot parameter "systemd.debug_shell=1", I inspected what's going on when things seem to hang.
It seems that a "vgchange --monitor y" is running for an extended time. However, when I run that command manually, I get a syntax error!
When I run the command "vgchange --monitor y qubes_dom0" it exits immediately.
So it looks to me as if systemd is running an incorrect command.
I'm attaching a photo of my debug session.
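In case the photo is hard to read, the inspection can be repeated with commands along these lines (a sketch: the unit name lvm2-monitor.service assumes the stock lvm2 packaging, and everything is guarded so it simply prints nothing on systems without systemd):

```shell
# From the tty9 debug shell: find out which LVM command systemd is running.
systemctl list-jobs 2>/dev/null | grep -i lvm || true                  # which units are still starting?
systemctl cat lvm2-monitor.service 2>/dev/null | grep -i exec || true  # the exact ExecStart line
ps ax 2>/dev/null | grep '[v]gchange' || true                          # the vgchange actually in flight
```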
See also:
I updated dom0 yesterday after a month away from my laptop. After shutting down, I can't get my computer to start again; it always hangs after seemingly fully loading Qubes (loading bar at 100%).
I’ve tried the different kernels available in the advanced startup without any change to this issue.
Pressing Esc for the text startup shows that LVM might have crashed after the update.
The log reads:
```
[FAILED] Failed to start LVM event activation on device 253:0.
See 'systemctl status lvm2-pvscan@253:0.serv…
```
(Issue opened 09:48 PM, 10 Mar 22 UTC; labels: T: bug, C: kernel, C: storage, P: default, needs diagnosis, affects-4.1)
## Qubes OS release
4.1.0
### Brief summary
Booted the installer, prepared the disks, ran the installation.
All fine without any error.
Then, on boot from the hard disk, nothing happens, i.e. the initrd never "switches root".
### Steps to reproduce
- Boot 4.1.0 installation medium (verified OK, BTW)
- Install Qubes OS (I used custom partitioning as per guide, but I had done that before, too)
- Boot newly installed OS
### Expected behavior
Installation continues
### Actual behavior
Boot never finishes (see attached screen photo).

Examining the journal of the failed boots, I found this:
```
Feb 24 23:52:19 dom0 lvm[1326]: Device open /dev/sdd1 8:49 failed errno 2
Feb 24 23:52:20 dom0 kernel: md124: p1
Feb 24 23:52:20 dom0 lvm[1326]: WARNING: Scan ignoring device 8:1 with no paths.
Feb 24 23:52:20 dom0 lvm[1326]: WARNING: Scan ignoring device 8:17 with no paths.
Feb 24 23:52:20 dom0 lvm[1326]: WARNING: Scan ignoring device 8:33 with no paths.
Feb 24 23:52:20 dom0 lvm[1326]: WARNING: Scan ignoring device 8:49 with no paths.
Feb 24 23:52:20 dom0 dmeventd[3705]: dmeventd ready for processing.
Feb 24 23:52:20 dom0 kernel: lvm[1326]: segfault at 801 ip 0000777003fcfdde sp 00007ffd4db1c028 error 4 in libc-2.31.so[777003e91000+150000]
Feb 24 23:52:20 dom0 kernel: Code: fd d7 c9 0f bc d1 c5 fe 7f 27 c5 fe 7f 6f 20 c5 fe 7f 77 40 c5 fe 7f 7f 60 49 83 c0 1f 49 29 d0 48 8d 7c 17 61 e9 c2 04 00 00 <c5> fe 6f 1e c5 fe 6f 56 20 c5 fd 74 cb c5 fd d7 d1 49 83 f8 21>
Feb 24 23:52:20 dom0 kernel: audit: type=1701 audit(1645743140.034:101): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=1326 comm="lvm" exe="/usr/sbin/lvm" sig=11 res=1
Feb 24 23:52:20 dom0 audit[1326]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 pid=1326 comm="lvm" exe="/usr/sbin/lvm" sig=11 res=1
Feb 24 23:52:20 dom0 lvm[3705]: Monitoring thin pool qubes_dom0-pool00-tpool.
Feb 24 23:52:20 dom0 lvm[2561]: 3 logical volume(s) in volume group "qubes_dom0" now active
Feb 24 23:52:20 dom0 systemd[1]: Finished LVM event activation on device 253:0.
```
That segfault doesn't look good!
The last things that seem to happen on boot are:
```
Feb 24 23:52:22 dom0 systemd[1]: Finished udev Wait for Complete Device Initialization.
Feb 24 23:52:22 dom0 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 24 23:52:22 dom0 kernel: audit: type=1130 audit(1645743142.001:103): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=>
Feb 24 23:52:22 dom0 systemd[1]: Starting Activation of DM RAID sets...
Feb 24 23:52:22 dom0 systemd[1]: dmraid-activation.service: Succeeded.
Feb 24 23:52:22 dom0 systemd[1]: Finished Activation of DM RAID sets.
Feb 24 23:52:22 dom0 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=dmraid-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 24 23:52:22 dom0 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=dmraid-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 24 23:52:22 dom0 kernel: audit: type=1130 audit(1645743142.797:104): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=dmraid-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=su>
Feb 24 23:52:22 dom0 kernel: audit: type=1131 audit(1645743142.797:105): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=dmraid-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=su>
```
A key indicator might be "kernel: md124: p1", which could mean mdadm was built without IRST support:
Usually there are four md devices (two per IRST RAID: one real RAID plus one pseudo RAID (IRST)). Maybe LVM chokes on the pseudo RAID.
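That theory could be checked by listing what mdadm actually assembled (a sketch: /dev/sdd1 is taken from the log above, and the commands are guarded so they degrade to no output on machines without mdraid):

```shell
# Sketch: with IMSM/IRST support, each Intel RAID appears twice, as a container
# with metadata=imsm plus the RAID volume inside it, so two IRST RAID1 sets
# should yield four md devices in total.
cat /proc/mdstat 2>/dev/null || true
mdadm --detail --scan 2>/dev/null || true      # containers show metadata=imsm
mdadm --examine /dev/sdd1 2>/dev/null || true  # /dev/sdd1 is the device from the log
```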
### Additional info (that might be important)
The system has two IRST software RAID1 arrays (one for Windows, one for Linux), but none for Qubes OS (which is on a separate non-RAID disk).
Regards,
Ulrich