Power suddenly cut -- now sys-whonix can't connect to anything

Hi,
I am super tired, so I may be missing something obvious or omitting debug info that might be useful; feel free to ask for whatever information would be helpful.
A couple of nights ago, I accidentally yanked the power cord out of my Qubes OS machine, so it suddenly shut down. Since then, I have not been able to connect to Tor using anon-whonix (or any other qube networked through sys-whonix), whereas before I had no issues at all. I am, however, still able to connect to the internet and do anything on the clearnet, so it seems that sys-net and sys-firewall are not affected, only sys-whonix.

anon-whonix sdwdate’s status shows:
Last message from anon-whonix sdwdate:

Preparation not done yet. More more information, see: sdwdate-gui → right click → Open sdwdate’s log

When I opened sdwdate’s log, it showed this:

2025-09-06 00:49:00 - sdwdate - INFO - PREPARATION: running onion-time-pre-script...
2025-09-06 00:49:00 - sdwdate - INFO -
__ ### START: ### /usr/libexec/helper-scripts/onion-time-pre-script
__ Status: Subsequent run after boot.
__ Static Time Sanity Check: Within minimum time 'Mon Jul 21 00:00:00 UTC 2025' and expiration timestamp 'Tue May 17 10:00:00 UTC 2033', ok.
__ Tor Bootstrap Result:
check_bootstrap_helper_script: tor-circuit-established-check
tor_bootstrap_timeout_type:
tor_circuit_established_check_exit_code: 124
__ ### END: ### Exiting with exit_code '1' indicating 'wait, show error icon and retry.'.
2025-09-06 00:49:00 - sdwdate - INFO - PREPARATION RESULT: onion-time-pre-script detected a known permanent (until the user fixes it) error status. Consider running systemcheck for more information.
2025-09-06 00:49:00 - sdwdate - INFO -

I then ran systemcheck:

[workstation user ~]% systemcheck
[INFO] [systemcheck] anon-whonix | Whonix-Workstation | whonix-workstation-17 TemplateBased AppVM | Sat Sep  6 01:58:48 AM UTC 2025
[INFO] [systemcheck] Tor Connection Result:
Tor's Control Port could not be reached. Attempt 1 of 5. Could be temporary due to a Tor restart. Trying again...
[INFO] [systemcheck] Tor Connection Result:
Tor's Control Port could not be reached. Attempt 2 of 5. Could be temporary due to a Tor restart. Trying again...
[INFO] [systemcheck] Tor Connection Result:
Tor's Control Port could not be reached. Attempt 3 of 5. Could be temporary due to a Tor restart. Trying again...
[INFO] [systemcheck] Tor Connection Result:
Tor's Control Port could not be reached. Attempt 4 of 5. Could be temporary due to a Tor restart. Trying again...
[ERROR] [systemcheck] Tor Connection Result:
Tor's Control Port could not be reached!

Troubleshooting:
- Confirm that Whonix-Gateway is running.
- Run systemcheck on Whonix-Gateway and confirm success.

- Rerun systemcheck here in this Whonix-Workstation.

(Technical information:)
(tor_circuit_established_check_exit_code: 124)
(tor_bootstrap_timeout_type: )
(tor_bootstrap_status: )
(check_socks_port_open_test: 28)
(Tor Circuit: not established)
zsh: exit 1     systemcheck

Then, I went to sys-whonix and opened the Tor control panel. The Tor status hangs on “Starting” and the progress bar displays 0%. I let this run for at least 10 minutes with no change. If I go to the logs section in the Tor control panel, I just see this:

Sep 06 02:02... [notice] New control connection opened.
Sep 06 02:02... [notice] New control connection opened.
Sep 06 02:02... [notice] New control connection opened.
Sep 06 02:03... [notice] New control connection opened.
Sep 06 02:03... [notice] New control connection opened.
(this exact line repeats dozens of times across 02:02 and 02:03; snipped for length)
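(If fuller Tor logs would help, I believe the complete log can also be pulled from the journal inside sys-whonix; as far as I know, tor@default is the unit name Whonix uses for its Tor instance:)

sudo journalctl -u tor@default -b --no-pager | tail -n 50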

If I go to sys-whonix’s sdwdate log, it shows this:

2025-09-06 01:44:41 - sdwdate - INFO - PREPARATION: running onion-time-pre-script...
2025-09-06 01:44:41 - sdwdate - INFO -
__ ### START: ### /usr/libexec/helper-scripts/onion-time-pre-script
__ Status: Subsequent run after boot.
__ Static Time Sanity Check: Within minimum time 'Mon Jul 21 00:00:00 UTC 2025' and expiration timestamp 'Tue May 17 10:00:00 UTC 2033', ok.
__ Tor reports: NOTICE BOOTSTRAP PROGRESS=0 TAG=starting SUMMARY="Starting"
__ Tor circuit: not established
__ Tor Consensus Time Sanity Check: Consensus time sanity check failed.
__ anondate_use: Running 'anondate-set' (by creating file '/run/sdwdate/request_anondate-set')...
__ ### END: ### Exiting with exit_code '2' indicating 'wait, show busy icon and retry.'.
2025-09-06 01:44:41 - sdwdate - INFO - PREPARATION RESULT: onion-time-pre-script recommended to wait. Consider running systemcheck for more information.
2025-09-06 01:44:41 - sdwdate - INFO -

Running systemcheck in sys-whonix shows the following:

[gateway user ~]% systemcheck                
[INFO] [systemcheck] sys-whonix | Whonix-Gateway | whonix-gateway-17 TemplateBased ProxyVM | Sat Sep  6 02:05:36 AM UTC 2025
[INFO] [systemcheck] Tor Connection Result:
- Connecting for 0 seconds. | 0 % done. 
- Tor Circuit: not established.
- Tor reports: NOTICE BOOTSTRAP PROGRESS=0 TAG=starting SUMMARY="Starting"
- Timesync status: not done.
- sdwdate reports: Preparation not done yet. More more information,             see: sdwdate-gui -> right click -> Open sdwdate's log
- onion-time-pre-script reports: 
__ ### START: ### /usr/libexec/helper-scripts/onion-time-pre-script
__ Status: Subsequent run after boot.
__ Static Time Sanity Check: Within minimum time 'Mon Jul 21 00:00:00 UTC 2025' and expiration timestamp 'Tue May 17 10:00:00 UTC 2033', ok.
__ Tor reports: NOTICE BOOTSTRAP PROGRESS=0 TAG=starting SUMMARY="Starting"
__ Tor circuit: not established
__ Tor Consensus Time Sanity Check: Consensus time sanity check failed.
__ anondate_use: Running 'anondate-set' (by creating file '/run/sdwdate/request_anondate-set')...
__ ### END: ### Exiting with exit_code '2' indicating 'wait, show busy icon and retry.'.
[INFO] [systemcheck] Tor Connection Result:
- Connecting for 2 seconds. | 0 % done. 
- Tor Circuit: not established.
- Tor reports: NOTICE BOOTSTRAP PROGRESS=0 TAG=starting SUMMARY="Starting"
- Timesync status: not done.
- sdwdate reports: Preparation not done yet. More more information,             see: sdwdate-gui -> right click -> Open sdwdate's log
- onion-time-pre-script reports: 
__ ### START: ### /usr/libexec/helper-scripts/onion-time-pre-script
__ Status: Subsequent run after boot.
__ Static Time Sanity Check: Within minimum time 'Mon Jul 21 00:00:00 UTC 2025' and expiration timestamp 'Tue May 17 10:00:00 UTC 2033', ok.
__ Tor reports: NOTICE BOOTSTRAP PROGRESS=0 TAG=starting SUMMARY="Starting"
__ Tor circuit: not established
__ Tor Consensus Time Sanity Check: Consensus time sanity check failed.
__ anondate_use: Running 'anondate-set' (by creating file '/run/sdwdate/request_anondate-set')...
__ ### END: ### Exiting with exit_code '2' indicating 'wait, show busy icon and retry.'.
[INFO] [systemcheck] Tor Connection Result:
- Connecting for 4 seconds. | 0 % done. 
- Tor Circuit: not established.
- Tor reports: NOTICE BOOTSTRAP PROGRESS=0 TAG=starting SUMMARY="Starting"
- Timesync status: not done.
- sdwdate reports: Preparation not done yet. More more information,             see: sdwdate-gui -> right click -> Open sdwdate's log
- onion-time-pre-script reports: 
__ ### START: ### /usr/libexec/helper-scripts/onion-time-pre-script
__ Status: Subsequent run after boot.
__ Static Time Sanity Check: Within minimum time 'Mon Jul 21 00:00:00 UTC 2025' and expiration timestamp 'Tue May 17 10:00:00 UTC 2033', ok.
__ Tor reports: NOTICE BOOTSTRAP PROGRESS=0 TAG=starting SUMMARY="Starting"
__ Tor circuit: not established
__ Tor Consensus Time Sanity Check: Consensus time sanity check failed.
__ anondate_use: Running 'anondate-set' (by creating file '/run/sdwdate/request_anondate-set')...
__ ### END: ### Exiting with exit_code '2' indicating 'wait, show busy icon and retry.'.

It continues repeating the same thing every two seconds, but I’m snipping it there for length’s sake.

I have already rebooted several times, and I have manually verified that every qube’s NetVM is set correctly, via both the Qubes Manager and dom0’s CLI. I also completely deleted my original sys-whonix and recreated it from the template, so the sys-whonix I am debugging on is practically brand new. I was also using ChatGPT last night to debug, and I had it write a report of everything that was done, which I am pasting here:

Qubes OS Networking Debug Report: sys-whonix Connectivity Failure

Summary of the Problem
After a sudden power loss, the sys-whonix qube lost network functionality. Following the incident:

  • sdwdate and onion-time-pre-script report failure to connect to Tor.
  • Manual attempts to restore networking (e.g. ip link set ... up, systemctl restart qubes-network) were ineffective.
  • Recreating sys-whonix from the whonix-gateway-17 template did not solve the issue.
  • The new sys-whonix instance also reports: "ERROR: check network interfaces Result: network interface eth0 not up!"

Meanwhile:

  • sys-net and sys-firewall have confirmed working connectivity (ping 8.8.8.8 succeeds).
  • sys-whonix has no default route, and its interfaces are often DOWN or configured only with /32 addresses.

Likely Root Cause
One or more of the following is likely:

  • Virtual NIC misconfiguration or failure to attach between sys-whonix and its NetVM.
  • The Qubes networking stack (qubes-core-agent-networking) is not initializing properly in sys-whonix.
  • dom0 metadata became inconsistent after the power failure.
  • Missing udev/network triggers during qube startup (especially after cloning or recreating the qube).

Previous Fix Attempts

  • Manually brought up interfaces using ip link set eth0 up.
  • Restarted the qubes-network systemd service.
  • Verified and reconfigured the NetVM chain (sys-whonix → sys-firewall → sys-net).
  • Recreated sys-whonix from a clean template.
  • Ran systemcheck and confirmed failures due to lack of networking.
  • Observed nslookup and ping failures in sys-whonix.

Recommended Debug Steps (Post-Reboot Checklist)
Verify NetVM Chain from dom0
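(For completeness, the dom0 checks I ran to verify the NetVM chain looked roughly like this; the qube names assume a default Qubes/Whonix install:)

# in a dom0 terminal
qvm-prefs sys-whonix netvm      # expected: sys-firewall
qvm-prefs sys-firewall netvm    # expected: sys-net
qvm-prefs sys-net netvm         # expected: empty/none (sys-net provides the uplink)
qvm-prefs anon-whonix netvm     # expected: sys-whonix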

I used this summary again tonight to try to get ChatGPT to fix my errors, but it just kept going in circles and getting nowhere. It seemed convinced I had some issue with my routing tables, so I’ll post some of that info here.

On sys-whonix:

[gateway user ~]% ip a          
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:5e:6c:00 brd ff:ff:ff:ff:ff:ff
    inet 10.137.0.9/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.138.21.13/24 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b6:15:59:46:64:0c brd ff:ff:ff:ff:ff:ff
    inet 10.137.0.9/32 brd 10.255.255.255 scope global eth1
       valid_lft forever preferred_lft forever
4: vif3.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff

[gateway user ~]% ip r
10.138.21.0/24 dev eth0 proto kernel scope link src 10.138.21.13

On sys-firewall:

[user@sys-firewall ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group 1 qlen 1000
    link/ether 00:16:3e:5e:6c:00 brd ff:ff:ff:ff:ff:ff
    inet 10.138.21.12/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd09:24ef:4179::a8a:14e2/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe5e:6c00/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: vif6.0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group 2 qlen 1000
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet 10.138.21.12/32 scope global vif6.0
       valid_lft forever preferred_lft forever
    inet6 fd09:24ef:4179::a8a:14e2/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
6: vif9.0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group 2 qlen 1000
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet 10.138.21.12/32 scope global vif9.0
       valid_lft forever preferred_lft forever
    inet6 fd09:24ef:4179::a8a:14e2/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

[user@sys-firewall ~]$ ip r
default via 10.137.0.7 dev eth0 onlink 
10.137.0.7 dev eth0 scope link 
10.137.0.9 dev vif6.0 scope link metric 32746 
10.138.4.119 dev vif9.0 scope link metric 32743

While troubleshooting with ChatGPT, I ran a command to try to set sys-whonix’s default gateway to 10.137.0.7, but it was rejected as invalid:

[gateway user ~]% sudo ip route add default via 10.137.0.7 dev eth0
Error: Nexthop has invalid gateway.
zsh: exit 2     sudo ip route add default via 10.137.0.7 dev eth0
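
(In hindsight, I think I understand this error: "Nexthop has invalid gateway" means the kernel has no on-link route covering the gateway address, and sys-firewall's ip r above shows that Qubes normally installs a scope-link host route for the gateway plus a default route with the onlink flag. Also, 10.137.0.7 appears to be sys-firewall's own upstream gateway, not sys-whonix's, so hardcoding it was probably wrong anyway. A sketch of the manual equivalent, assuming qubesdb-read reports the qube's assigned gateway the way I expect:)

# inside sys-whonix
gw=$(qubesdb-read /qubes-gateway)            # gateway address Qubes assigned to this qube
sudo ip route add "$gw" dev eth0 scope link  # host route so the gateway is on-link
sudo ip route add default via "$gw" dev eth0 onlink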

ChatGPT did have me make some changes to the routing tables before my browser suddenly closed and I lost access to the chat log, so I’m not sure exactly what all I did, but I do know it had me flush the routing table or something similar (I think; I was half asleep, so my memory isn’t great).
Any idea where to go from here?

Restart sys-whonix and post the output of ip a and ip r again.

I just started my system back up and ran ip a and ip r again; here is the output:

[gateway user ~]% ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:16:3e:5e:6c:00 brd ff:ff:ff:ff:ff:ff
    inet 10.137.0.9/32 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 9a:d2:02:c4:45:5c brd ff:ff:ff:ff:ff:ff
    inet 10.137.0.9/32 brd 10.255.255.255 scope global eth1
       valid_lft forever preferred_lft forever
[gateway user ~]% ip r
[gateway user ~]% 

ip r output nothing; I didn’t remove any lines of terminal output. I also restarted it after this to see if restarting after everything else had booted would change anything, but the output is the same as above.

I’m not sure, but maybe your computer clock is out of sync. Could you check the output of timedatectl?

[gateway user ~]% timedatectl
               Local time: Sat 2025-09-06 23:35:38 UTC
           Universal time: Sat 2025-09-06 23:35:38 UTC
                 RTC time: n/a
                Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: no
              NTP service: n/a
          RTC in local TZ: no

It says the system clock isn’t synchronized, but the time shown is correct. (From what I’ve read, that may be expected in Whonix, since it uses sdwdate rather than NTP, so timedatectl reports no NTP synchronization.)

What’s the output of this command in a sys-whonix terminal?

sudo systemctl status qubes-network-uplink qubes-network-uplink@eth0 qubes-network-uplink@eth1 | cat

I’ve run into the same issue. What has always worked for me is shutting the machine down completely and then doing a hard power cycle/power drain.

Something about squeezing all the power out of the capacitors and nuking the NVRAM makes it go back to working perfectly after a sudden power cut.

[gateway user ~]% sudo systemctl status qubes-network-uplink qubes-network-uplink@eth0 qubes-network-uplink@eth1 | cat
× qubes-network-uplink.service - Qubes network uplink wait
     Loaded: loaded (/lib/systemd/system/qubes-network-uplink.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Mon 2025-09-08 23:16:31 UTC; 6min ago
    Process: 1117 ExecStart=/usr/lib/qubes/init/network-uplink-wait.sh (code=exited, status=1/FAILURE)
   Main PID: 1117 (code=exited, status=1/FAILURE)
        CPU: 21ms

Sep 08 23:17:02 host systemd[1]: Starting qubes-network-uplink.service - Qubes network uplink wait...
Sep 08 23:16:31 host systemd[1]: qubes-network-uplink.service: Main process exited, code=exited, status=1/FAILURE
Sep 08 23:16:31 host systemd[1]: qubes-network-uplink.service: Failed with result 'exit-code'.
Sep 08 23:16:31 host systemd[1]: Failed to start qubes-network-uplink.service - Qubes network uplink wait.

× qubes-network-uplink@eth0.service - Qubes network uplink (eth0) setup
     Loaded: loaded (/lib/systemd/system/qubes-network-uplink@.service; static)
     Active: failed (Result: exit-code) since Mon 2025-09-08 23:16:31 UTC; 6min ago
    Process: 1233 ExecStart=/usr/lib/qubes/setup-ip add eth0 (code=exited, status=2)
   Main PID: 1233 (code=exited, status=2)
        CPU: 41ms

Sep 08 23:17:02 host systemd[1]: Starting qubes-network-uplink@eth0.service - Qubes network uplink (eth0) setup...
Sep 08 23:16:34 host setup-ip[1277]: Error: ipv6: IPv6 is disabled on this device.
Sep 08 23:16:31 host systemd[1]: qubes-network-uplink@eth0.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Sep 08 23:16:31 host systemd[1]: qubes-network-uplink@eth0.service: Failed with result 'exit-code'.
Sep 08 23:16:31 host systemd[1]: Failed to start qubes-network-uplink@eth0.service - Qubes network uplink (eth0) setup.

○ qubes-network-uplink@eth1.service - Qubes network uplink (eth1) setup
     Loaded: loaded (/lib/systemd/system/qubes-network-uplink@.service; static)
     Active: inactive (dead)
zsh: exit 3     sudo systemctl status qubes-network-uplink qubes-network-uplink@eth0  | 
zsh: done       cat

Sadly, this did not work. I am on a laptop (a ThinkPad) with no battery, and I did the following to do a power drain:

  • unplug everything (power, peripherals, etc)
  • hold power button for 1 minute
  • turn machine back on

After some troubleshooting, I fixed this! I’m going to post a detailed write-up here for anyone who might come across this in the future:

Issue
sys-whonix is unable to connect to the Tor network because IPv6 is disabled on its network devices, which makes the Qubes network uplink setup fail and leaves eth0 down with no routes. This does not affect the connectivity of sys-net and sys-firewall.

Diagnosis
To figure out whether this is the same issue you have, you can use the command that @MellowPoison posted above: sudo systemctl status qubes-network-uplink qubes-network-uplink@eth0 qubes-network-uplink@eth1 | cat
In the output of this command, look for errors mentioning IPv6. This is the line that gave it away for me:
Sep 08 23:16:34 host setup-ip[1277]: Error: ipv6: IPv6 is disabled on this device.
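
(If you want to double-check the IPv6 state directly inside the affected qube, I believe the kernel sysctls will show it; a value of 1 means IPv6 is disabled on that device:)

# inside sys-whonix
sysctl net.ipv6.conf.all.disable_ipv6
sysctl net.ipv6.conf.eth0.disable_ipv6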

Solution
To fix this, open a dom0 terminal and type the following commands:

qvm-features sys-net ipv6 ''
qvm-features sys-firewall ipv6 ''
qvm-features sys-whonix ipv6 ''
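
(As I understand it, the empty value tells Qubes not to configure IPv6 for that qube, which avoids the failing setup step. You can verify the feature took and then restart the qubes for it to take effect:)

# still in dom0
qvm-features sys-whonix ipv6    # prints the current value ('' after the fix)
qvm-shutdown --wait sys-whonix
qvm-start sys-whonix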

I found this solution in this forum post: Sys-whonix cannot establish any tor circuit - #17 by rrn
That post did not mention running the command for sys-whonix, but I found that doing only sys-net and sys-firewall was not enough for me.

Notes
I actually do not think this was related to the power being cut to my machine. I have a vague memory of messing around with something IPv6-related in the same session before my power cord got yanked, so that was likely the cause rather than the power loss itself.
Thanks for everyone’s help! <3
