What is R4.2/Fedora 38 adding since R4.0/Fedora 33 that makes VM boot significantly longer?

I recently made a complete backup of my R4.0 system and reinstalled everything as R4.2. I found that R4.2 dislikes the R4.0 qrexec agent (some Fedora 30 qrexec agents do not work at all); the f33 qrexec agent works, but only marginally.

However, I compared the boot times on the same machine (though not many times): the Fedora 33-based VM, inherited from years ago, boots FASTER than the brand-new Fedora 38-based VM. The faster one is also more stable, as the Fedora 38 VMs frequently suffer from the 60s startup timeout (I was using an HDD then; please do not suggest an SSD, that is not the topic here).

I am surprised, and this may affect my plan to reinstall my production machine as R4.2. I wonder whether anyone has looked at this, or can reproduce it reliably and somehow bisect it.

I also wonder about tricks to diagnose the problem: for example, how to make /init in the initramfs more verbose (I saw a time spike there), or how to gather statistics on how many disk reads happen during VM boot and when.
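Something along these lines is what I have in mind (just a sketch: testvm is a placeholder name, I am assuming that appending rd.debug and systemd.log_level=debug to the VM's kernelopts is the right way to get a verbose dracut /init, and that the VM's root volume shows up as xvda in /proc/diskstats):

$ # in dom0, keep whatever kernel options the VM already has (testvm = placeholder)
$ qvm-prefs testvm kernelopts "$(qvm-prefs testvm kernelopts) rd.debug systemd.log_level=debug"

$ # inside the VM after boot: cumulative read counters for the root volume since boot
$ awk '$3 == "xvda" {print "reads completed:", $4, "sectors read:", $6}' /proc/diskstats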

If Fedora uses systemd, you can use systemd-analyze blame to compare the startup of the fast and slow systems; the newer system also ships a newer version of systemd with more analysis features.
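For example, run these inside both the f33-based and the f38-based VM and compare the output (critical-chain only gives a rough ordering, since units start in parallel):

$ sudo systemd-analyze                  # overall startup time
$ sudo systemd-analyze blame            # time spent per unit
$ sudo systemd-analyze critical-chain   # units gating the default target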

Unfortunately, that’s the same experience I had when upgrading from R4.0 to R4.1, and again from R4.1 to R4.2.
An SSD helps a lot. BTRFS helped too on R4.1, but not anymore on R4.2.
Might be interesting for you too:

edit:
Sorry, in hindsight the system does not feel as smooth as on R4.0, but VM boot times are fine.

I’m not sure whether Fedora is slower at boot for me, since I use Debian most of the time. However, I can share my systemd-analyze blame output; maybe it is of some help to someone.

$ sudo systemd-analyze blame
17.520s unbound-anchor.service
 1.180s dev-mapper-dmroot.device
 1.110s dev-xvdc1.device
  825ms qubes-sync-time.service
  718ms systemd-journal-flush.service
  712ms qubes-mount-dirs.service
  585ms qubes-rootfs-resize.service
  523ms systemd-udevd.service
  450ms user@1000.service
  401ms polkit.service
  393ms qubes-misc-post.service
  374ms abrtd.service
  320ms systemd-homed.service
  263ms qubes-antispoof.service
  263ms systemd-modules-load.service
  245ms dbus-broker.service
  208ms qubes-iptables.service
  183ms systemd-resolved.service
  172ms systemd-udev-trigger.service
  168ms systemd-tmpfiles-setup.service
  159ms xendriverdomain.service
  158ms systemd-random-seed.service
  146ms logrotate.service
  138ms qubes-sysinit.service
  134ms systemd-logind.service
  126ms systemd-tmpfiles-setup-dev.service
  113ms qubes-db.service
   97ms systemd-oomd.service
   80ms dracut-shutdown.service
   79ms modprobe@fuse.service
   76ms user-runtime-dir@1000.service
   71ms systemd-journald.service
   69ms modprobe@loop.service
   68ms systemd-sysctl.service
   67ms qubes-early-vm-config.service
   61ms dev-hugepages.mount
   61ms systemd-network-generator.service
   58ms dev-mqueue.mount
   57ms sys-kernel-debug.mount
   54ms sys-kernel-tracing.mount
   52ms qubes-meminfo-writer.service
   51ms systemd-userdbd.service
   51ms dev-xvdc1-swap.service
   48ms qubes-network-uplink.service
   48ms kmod-static-nodes.service
   44ms modprobe@configfs.service
   43ms raid-check.service
   42ms modprobe@dm_mod.service
   39ms modprobe@drm.service
   36ms modprobe@efi_pstore.service
   30ms systemd-remount-fs.service
   29ms qubes-qrexec-agent.service
   27ms qubes-gui-agent.service
   27ms rtkit-daemon.service
   21ms sys-kernel-config.mount
   20ms systemd-update-utmp.service
   19ms systemd-update-utmp-runlevel.service
   11ms systemd-user-sessions.service
    9ms tmp.mount
    3ms sys-fs-fuse-connections.mount
  101us systemd-homed-activate.service

Need to figure out why unbound-anchor is taking forever. Networking related?

unbound-anchor is also slow in the new fedora-39-xfce template.

However, leaving the fedora-39-xfce template running and idle for about 5 minutes after boot fixes the issue for the following boots, so unbound-anchor then takes less than a second. I suppose manually executing unbound-anchor once would also fix this for subsequent boots.
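Something like this, run once in the fedora-39-xfce template and followed by shutting the template down, is what I mean (just a guess that triggering the existing service once is enough; the service name is the one from the blame output above):

$ sudo systemctl start unbound-anchor.service
$ sudo systemctl status unbound-anchor.service   # confirm it finished without errors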
