I recently made a complete backup of my R4.0 system and reinstalled everything as R4.2. I found that R4.2 dislikes the R4.0 qrexec agent (some Fedora 30 qrexec agents do not work at all); the f33 qrexec agent works, but only marginally.
However, I compared boot times on the same machine (though not many times). The Fedora 33 templated VM, inherited from years ago, boots FASTER than the brand-new Fedora 38 templated VM. The faster one is also more stable, as the Fedora 38 VMs frequently suffer from the 60s startup timeout. (I was using an HDD at the time; please don't suggest an SSD, that is not the topic here.)
I am surprised, and this may affect my progress on reinstalling my production machine as R4.2. I wonder if anyone has looked at this, or can reproduce it reliably and bisect it somehow.
I also wonder if there are any tricks to diagnose the problem. For example, how can I make /init in the initramfs more verbose? I saw a time spike there. And how can I gather statistics on how many disk reads happen during VM boot, and when?
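For the two diagnostic questions, here is a sketch of what I'd try (the VM name `fedora-38-vm` is a placeholder). A dracut-built initramfs honors the `rd.debug` kernel parameter, which turns on `set -x` tracing in /init, and in Qubes kernel parameters can be passed through the VM's `kernelopts` preference. For disk-read counts, `/proc/diskstats` inside the VM reports cumulative sectors read per device, so sampling it right after boot gives a rough total.

```shell
# In dom0: append rd.debug to the VM's kernel command line so the dracut
# /init traces every shell command it runs (visible on the VM console and
# in the journal). "fedora-38-vm" is a placeholder VM name.
qvm-prefs fedora-38-vm kernelopts "$(qvm-prefs fedora-38-vm kernelopts) rd.debug"

# Inside the VM, right after boot: field 6 of /proc/diskstats is the
# cumulative number of sectors read (512 bytes each) for the device.
awk '$3 == "xvda" { printf "%s: %d sectors read (%.1f MiB)\n", $3, $6, $6 * 512 / 1048576 }' /proc/diskstats
```

Sampling the awk line a few times during boot (e.g. from a systemd unit or a background loop) would also show *when* the reads happen, not just how many.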
If Fedora uses systemd, you can use systemd-analyze blame to compare the startup of the fast and slow systems. The new system may also ship a newer version of systemd with more features.
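To make the comparison concrete, the blame output from each VM can be captured to a file and compared side by side; a minimal sketch (the file names are my own):

```shell
# Run inside each VM, then copy the files somewhere common for comparison:
systemd-analyze blame > blame-f33.txt    # in the fedora-33 based VM
systemd-analyze blame > blame-f38.txt    # in the fedora-38 based VM

# The ten slowest units from each file:
head -n 10 blame-f33.txt blame-f38.txt

# critical-chain shows the dependency path that actually gates boot time,
# which blame alone can hide (a slow unit may run in parallel harmlessly).
systemd-analyze critical-chain
```

Since blame sorts units slowest-first, the `head` output is usually enough to spot which unit regressed between the two templates.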
Unfortunately, that’s the same experience I had when upgrading from R4.0 to R4.1, and again from R4.1 to R4.2.
An SSD helps a lot. BTRFS helped on R4.1 too, but not anymore on R4.2.
Might be interesting for you too:
edit:
Sorry, in hindsight the system does not feel as smooth as on R4.0, but VM boot times are fine.
I’m not entirely sure whether Fedora is slower at boot for me, since I use Debian most of the time. However, I can share my systemd-analyze blame output; maybe it will be of some help to someone.
unbound-anchor is also slow for the new fedora-39-xfce template.
However, leaving the fedora-39-xfce template running idle for about 5 minutes after boot fixes the issue for subsequent boots, so unbound-anchor then takes less than a second. I suppose manually executing unbound-anchor once would solve this for the following boots as well.
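If that is indeed the cause, a one-time manual run in the template should pre-seed the trust anchor. A sketch, assuming `/var/lib/unbound/root.key` is the root-key path Fedora's unbound-anchor.service uses (check the unit file if yours differs):

```shell
# In dom0: run unbound-anchor once inside the template, then shut the
# template down so the refreshed root.key persists in the template image.
qvm-run -u root fedora-39-xfce 'unbound-anchor -a /var/lib/unbound/root.key'
qvm-shutdown --wait fedora-39-xfce
```

Note that unbound-anchor deliberately exits non-zero when it has just updated the key, so a non-zero exit from the first command is not necessarily an error.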