Qube shuts down immediately after trying to start

A qube that I use every day suddenly stopped working. It starts, then shuts down with an error that points me to the log. Starting it with qvm-run in dom0 makes no difference. The startup sequence in the log looks fine until the final lines below. Can anyone see what the problem is, or how to fix it? Thank you.

[2026-02-15 07:27:34] [  OK  ] Created slice system-qubes\x2dnetw…Slice /system/qubes-network-uplink.
[2026-02-15 07:27:34] [  OK  ] Finished modprobe@loop.service - Load Kernel Module loop.
[2026-02-15 07:27:34] [  OK  ] Finished dev-xvdc1-swap.service - Enable swap on /dev/xvdc1 early.
[2026-02-15 07:27:34] [  OK  ] Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
[2026-02-15 07:27:34] [FAILED] Failed to start qubes-db.service - Qubes DB agent.
[2026-02-15 07:27:34] See 'systemctl status qubes-db.service' for details.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for qubes-sysini…ice - Init Qubes Services settings.
[2026-02-15 07:27:34]          Starting systemd-random-seed.service - Load/Save OS Random Seed...
[2026-02-15 07:27:34] [FAILED] Failed to start systemd-random-see…service - Load/Save OS Random Seed.
[2026-02-15 07:27:34] See 'systemctl status systemd-random-seed.service' for details.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for sysinit.target - System Initialization.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for dracut-shutd…Restore /run/initramfs on shutdown.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for qubes-antisp…Qubes anti-spoofing firewall rules.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for network-pre.target - Preparation for Network.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for qubes-networ… Qubes network uplink (eth0) setup.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for qubes-iptabl…ice - Qubes base firewall settings.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for systemd-pcrp…e.service - TPM PCR Barrier (User).
[2026-02-15 07:27:34]          Starting dracut-shutdown-onfailure…tdown failure to perform cleanup...
[2026-02-15 07:27:34] [  OK  ] Finished dracut-shutdown-onfailure…hutdown failure to perform cleanup.
[2026-02-15 07:27:34] [  OK  ] Finished systemd-vconsole-setup.service - Virtual Console Setup.

This seems to be the problem, but the logs aren’t very verbose. Could you manage to run systemctl status qubes-db.service inside the qube before it shuts down?

There might be a better way, but this should work. Start this loop in a dom0 terminal and then launch your problem qube.

while :; do qvm-run -u root --no-autostart --pass-io QUBENAME -- systemctl status qubes-db.service; sleep 0.5; done
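
If the loop prints nothing at all, one possible cause (an assumption on my part, not something I’ve confirmed) is that qvm-run itself fails before any output from the qube arrives, and its error messages go to stderr rather than stdout. Merging stderr into stdout and echoing the exit status makes such failures visible. A minimal sketch of that pattern, using a deliberately failing stand-in command in place of the real qvm-run call:

```shell
# "probe" stands in for the qvm-run invocation above; it fails on purpose
# so the redirection and the exit-status report have something to show.
probe() { ls /no-such-path-for-demo; }

# Merge stderr into stdout, and report a nonzero exit status explicitly.
probe 2>&1 || echo "command failed with status $?"
```

In dom0 you would replace the body of probe with the qvm-run command from the loop; everything else stays the same.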

Clever, but it didn’t work. The qube shut down the same as always, with the same error. There was no output on stdout in the dom0 terminal where I ran the loop. Should I be looking somewhere else for the results of systemctl status qubes-db.service from the qube?

I’m curious to know the answer to the above, but it appears that the real problem is that this qube is based on a fedora-42 template that has been having problems since the dom0 upgrade a few days ago. So I’m closing this as "solved", because it’s the same problem as Did 11/02/26 dom0 updates break qvm-run? - #4 by dhimh and therefore possibly Root access fails on templates, causing qubes-dist-upgrade error.

(meaning the qube does start when I change its template to one that came with 4.3)

Thank you for chiming in to help.

I don’t think internal qube journals are forwarded to dom0, and they get cleared anyway because the root volume is non-persistent, unless you’re using a StandaloneVM. The console logs do get forwarded, though, and they may give you some information. The command to view them is cat /var/log/xen/console/guest-QUBENAME.log.
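
For what it’s worth, the failed units are easy to pick out of that console log with a grep filter. A sketch, with QUBENAME as a placeholder; here a few sample lines stand in for the real file so the filter itself is clear, but in dom0 you would point grep at /var/log/xen/console/guest-QUBENAME.log directly:

```shell
# Keep only lines reporting failed units or failed dependencies.
# The heredoc stands in for /var/log/xen/console/guest-QUBENAME.log.
grep -E 'FAILED|DEPEND' <<'EOF'
[2026-02-15 07:27:34] [  OK  ] Finished modprobe@loop.service - Load Kernel Module loop.
[2026-02-15 07:27:34] [FAILED] Failed to start qubes-db.service - Qubes DB agent.
[2026-02-15 07:27:34] [DEPEND] Dependency failed for sysinit.target - System Initialization.
EOF
```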

I have no idea what you’re saying. I’m sure you’re trying to answer the question I asked, which was basically: what was the command you gave me supposed to do? I still don’t have an answer to that.
It’s moot for this issue, but I am still curious. Should the output have been on stdout? If not, then where?