First upgrade attempt ended in failure

Mostly because my sys-whonix connects through a custom VPN qube, and the automatic template upgrade failed. (sys-firewall ← vpn ← sys-whonix)

At the beginning, it tried to shut down all qubes, including the VPN qube, and then failed on the Whonix upgrade steps.
After I recognized that, found the way to exclude the VPN qube, and started the upgrade steps again…

(with a cry) it upgraded all templates… but it did not actually upgrade each of them automatically: at the end of the “postscript” it froze on every template near “Leaving diversion /etc/init/serial.conf…”, and it was only possible to continue with `ctrl-c`. Because of that it failed (maybe actually because some qubes or templates were still running before the next step and failed to shut down as expected, since whonix connects through the VPN qube, and maybe the VPN app was not connected to the VPN).

As I remember, the last thing I saw was the script trying to download a new screensaver, but it did not ask to apply/install the changes, and something failed.

At the end, when I tried to restart this step to download and install them (I also changed the global update qube for dom0/templates), everything was completely broken because of the script’s “stop everything” behavior at the beginning. The sys-firewall/sys-net templates had already been upgraded, and after the restart they could not “connect/talk” with the 4.2 dom0. It failed with a report that sys-firewall needs to contain “dnf”.
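In hindsight, a clean bulk shutdown before retrying would have avoided the “still running” failures. A sketch (the qube name `sys-vpn` is just an example; substitute your real VPN qube):

```shell
# Sketch for dom0: shut down every qube before retrying the upgrade, so no
# template is still running when the tool expects it to be stopped.
# The qube name sys-vpn is an example placeholder.
clean_shutdown() {
    qvm-shutdown --wait --all --exclude=sys-vpn
}
```

`--exclude` can be repeated if the VPN chain spans several qubes.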

@ben-grande Is there a way to “kickstart” the preloading of DispVMs, assuming everything else works?

Scenario:

  1. I have X dispVMs preloaded
  2. When I leave my PC unattended, I sometimes execute a qvm-shutdown --all command (with some domains excluded)
  3. This shuts down all preloaded DispVMs (which I think is what I want)

Now I’d like a way to preload the DispVMs again after they were shut down, once I come back to my PC. So far I’ve noticed that if I start another DispVM (from the preloaded DispVM template), the rest also start, but this leaves me with an extra DispVM that I did not really want (I only wanted to restart the DispVM preload process).

Is there maybe a signal that I can send to some process to make it aware that it should start preloading again, after the other DispVMs were shut down?

From dom0:

systemctl restart qubes-preload-dispvm
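For convenience, the restart plus a quick health check can be wrapped up like this (a sketch; the unit name is the one from the reply above, so verify it exists on your system first with `systemctl list-unit-files`):

```shell
# Sketch for dom0: restart DispVM preloading after a bulk shutdown and
# show the resulting unit state. Unit name taken from the reply above.
repreload() {
    systemctl restart qubes-preload-dispvm &&
        systemctl --no-pager status qubes-preload-dispvm
}
```

Then, after coming back to the PC, just run `repreload` in a dom0 terminal.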

After the second try I’m on 4.3

This time I did the update step by step, and the first minor issue is that the -l flag does not work.

Then…

  1. After rebooting with the default kernel it was not possible to boot the system, because of `systemd[1]: Unable to fix SELinux security context of ***: Permission denied` (a lot of lines) and
    [!!!] Failed to allocate manager object.
    But with some other kernel it booted.

  2. At STAGE 5 it failed with this:

--> (STAGE 5) Enabling disposable qubes preloading
/usr/lib/python3.13/site-packages/salt/grains/core.py:2953: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
  start_time = datetime.datetime.utcnow()
/usr/lib/python3.13/site-packages/salt/utils/jid.py:19: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
  return datetime.datetime.utcnow()
[ERROR   ] An un-handled exception was caught by Salt's global exception handler:
SaltRenderError: Could not find relpath for qvm.disposable-preload.top
Traceback (most recent call last):
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 319, in toppath
    saltenv = self.get(path, 'saltenv')
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 138, in get
    return matcher.getter(key, element)(element)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
AttributeError: 'str' object has no attribute 'saltenv'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/qubesctl", line 130, in <module>
    salt_call()
    ~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/salt/scripts.py", line 444, in salt_call
    client.run()
    ~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/salt/cli/call.py", line 50, in run
    caller.run()
    ~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/salt/cli/caller.py", line 95, in run
    ret = self.call()
  File "/usr/lib/python3.13/site-packages/salt/cli/caller.py", line 200, in call
    ret["return"] = self.minion.executors[fname](
                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        self.opts, data, func, args, kwargs
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 160, in __call__
    ret = self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1269, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1284, in _run_as
    ret = _func_or_method(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/executors/direct_call.py", line 10, in execute
    return func(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 160, in __call__
    ret = self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1269, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1284, in _run_as
    ret = _func_or_method(*args, **kwargs)
  File "/var/cache/salt/minion/extmods/modules/topd.py", line 73, in enable
    return TopUtils(__opts__, **kwargs).enable(paths, saltenv)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 596, in enable
    toppaths, unseen = self.prepare_paths(paths)
                       ~~~~~~~~~~~~~~~~~~^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 449, in prepare_paths
    toppath = self.toppath(path)
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 328, in toppath
    saltenv = saltenv or self.saltenv(path, saltenv)
                         ~~~~~~~~~~~~^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 127, in saltenv
    relpath = self.relpath(path)
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 485, in relpath
    return self.path(path, saltenv)
           ~~~~~~~~~^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 267, in path
    return super(TopUtils, self).path(path, saltenv, path_type=path_type)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 444, in path
    raise SaltRenderError('Could not find relpath for {0}'.format(path))
salt.exceptions.SaltRenderError: Could not find relpath for qvm.disposable-preload.top
Traceback (most recent call last):
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 319, in toppath
    saltenv = self.get(path, 'saltenv')
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 138, in get
    return matcher.getter(key, element)(element)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
AttributeError: 'str' object has no attribute 'saltenv'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/qubesctl", line 130, in <module>
    salt_call()
    ~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/salt/scripts.py", line 444, in salt_call
    client.run()
    ~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/salt/cli/call.py", line 50, in run
    caller.run()
    ~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/salt/cli/caller.py", line 95, in run
    ret = self.call()
  File "/usr/lib/python3.13/site-packages/salt/cli/caller.py", line 200, in call
    ret["return"] = self.minion.executors[fname](
                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        self.opts, data, func, args, kwargs
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 160, in __call__
    ret = self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1269, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1284, in _run_as
    ret = _func_or_method(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/executors/direct_call.py", line 10, in execute
    return func(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 160, in __call__
    ret = self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1269, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
---> (STAGE 5) Cleaning up salt
Error on ext_pillar interface qvm_prefs is expected
/usr/lib/python3.13/site-packages/salt/grains/core.py:2953: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
  start_time = datetime.datetime.utcnow()
/usr/lib/python3.13/site-packages/salt/utils/jid.py:19: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
  return datetime.datetime.utcnow()
local:
    True
/usr/lib/python3.13/site-packages/salt/grains/core.py:2953: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
  start_time = datetime.datetime.utcnow()
[CRITICAL] Specified ext_pillar interface qvm_features is unavailable
[CRITICAL] Specified ext_pillar interface qvm_prefs is unavailable
[CRITICAL] Specified ext_pillar interface qvm_tags is unavailable
/usr/lib/python3.13/site-packages/salt/utils/jid.py:19: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
  return datetime.datetime.utcnow()
local:
    ----------
    beacons:
    clouds:
    engines:
    executors:
    grains:
        - grains.boot_mode
        - grains.pci_devs
        - grains.redefined_dom0_grains
        - grains.whonix
    log_handlers:
    matchers:
    modules:
        - modules.debug
        - modules.module_utils
        - modules.qubes
        - modules.topd
    output:
    pillar:
        - pillar.qvm_features
        - pillar.qvm_prefs
        - pillar.qvm_tags
    proxymodules:
    renderers:
    returners:
    sdb:
    serializers:
    states:
        - states.debug
        - states.status
    thorium:
    tops:
    utils:
        - utils.__init__
        - utils.fileinfo
        - utils.matcher
        - utils.nulltype
        - utils.pathinfo
        - utils.pathutils
        - utils.qubes_utils
        - utils.toputils
    wrapper:
---> (STAGE 5) Adjusting default kernel
Changing default kernel from 6.12.39-1.fc37 to 6.12.39-1.fc37


  File "/usr/lib/python3.13/site-packages/salt/loader/lazy.py", line 1284, in _run_as
    ret = _func_or_method(*args, **kwargs)
  File "/var/cache/salt/minion/extmods/modules/topd.py", line 73, in enable
    return TopUtils(__opts__, **kwargs).enable(paths, saltenv)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 596, in enable
    toppaths, unseen = self.prepare_paths(paths)
                       ~~~~~~~~~~~~~~~~~~^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 449, in prepare_paths
    toppath = self.toppath(path)
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 328, in toppath
    saltenv = saltenv or self.saltenv(path, saltenv)
                         ~~~~~~~~~~~~^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 127, in saltenv
    relpath = self.relpath(path)
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 485, in relpath
    return self.path(path, saltenv)
           ~~~~~~~~~^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/toputils.py", line 267, in path
    return super(TopUtils, self).path(path, saltenv, path_type=path_type)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/cache/salt/minion/extmods/utils/pathutils.py", line 444, in path
    raise SaltRenderError('Could not find relpath for {0}'.format(path))
salt.exceptions.SaltRenderError: Could not find relpath for qvm.disposable-preload.top

How can I fix issue 1 (booting with the new kernel) and issue 2 (the STAGE 5 failure)?
Thanks


There is already a PR to fix this bug:

For the rest of the issues, thank you very much for the feedback.


OK, can somebody please help me fix the issue with the latest kernel?
I don’t want to migrate back to 4.2 :slight_smile:

That was added in version 4.3.5.

Which version do you have?

rpm -qf /srv/pillar

Edit:

Or check if the files exist directly:

sudo find /srv/{pillar,formulas} -type f -name 'disposable-preload*'

rpm -qf /srv/pillar
qubes-mgmt-salt-config-4.2.2-1.fc41.noarch

Old… why? Interesting: in the Global Config I’m on “stable”, but before the upgrade I switched to testing (though maybe that was the first time, when everything failed).

I will try to switch everything to testing, update, and re-execute this step.

Edit:

After updating from all-testing it’s still the same pillar package.
And there are no disposable-preload* files.

It is available in current-testing for fc41.

The correct package name is qubes-mgmt-salt-dom0-virtual-machines-4.3.5-1.fc41.rpm (there is also an error in the spec: it does not identify the files as being owned by this package):

dnf info qubes-mgmt-salt-dom0-virtual-machines
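If dnf confirms the 4.3.5 build is only in testing, one way to pull it in is a sketch like the following. The repo name is assumed to be the standard dom0 testing repo; double-check it with `dnf repolist` first:

```shell
# Sketch for dom0: install the salt package that carries the preload states
# from the current-testing repo. Repo name qubes-dom0-current-testing is
# the usual dom0 testing repo, but verify it on your system.
get_preload_states() {
    sudo qubes-dom0-update \
        --enablerepo=qubes-dom0-current-testing \
        qubes-mgmt-salt-dom0-virtual-machines
}
```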

As of last night, two things I was experiencing were the network not restarting after a sys-net restart, and disposable preloaded qubes (preloaded but not yet full qubes) not working properly after a suspend. The preloaded qubes were there but shut down, without any apps loading, a few minutes after starting them. New preloaded DispVMs worked properly after the pre-suspend ones failed. This was a fresh install with Fedora templates for the net, firewall, and USB qubes.

Not sure if that is a known bug. I can try and provide more info if needed.

This might be a little bit complicated. Please open an issue on GitHub for tracking. I am running some other tests now and will retry this later. About the logs to provide:

  1. Fedora 42, right?
  2. Did you install something in the template before testing preloads?
  3. How much RAM does your system have?
  4. Did you have many other qubes running as well? If yes, how many?
  5. Are you using LVM or something else?
  6. Now debugging:
    1. Set the feature to 0.
    2. Watch the qubesd journal: sudo journalctl -fu qubesd -o cat | grep -v pam_unix; pay attention to the WARNING log level.
    3. Set the feature value to 4 again and check with which memory the qube was paused.
    4. Post the qubesd logs here.
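The debugging sequence in step 6 can be sketched as commands. The feature name `preload-dispvm-max` and setting it on dom0 are assumptions on my part; adjust them to whatever your setup actually uses:

```shell
# Sketch for dom0 of the debug steps above. The feature name
# preload-dispvm-max is an assumption -- substitute your actual one.
debug_preload() {
    qvm-features dom0 preload-dispvm-max 0   # 1. disable preloading
    # 2. watch the qubesd journal in another terminal, noting WARNING lines:
    #    sudo journalctl -fu qubesd -o cat | grep -v pam_unix
    qvm-features dom0 preload-dispvm-max 4   # 3. re-enable with 4 preloads
    # 4. then post the collected qubesd logs
}
```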

Did you mean post-suspend?

Pre-suspend: the preloaded DispVMs (not yet full qubes) that were there before the suspend.
After the suspend: trying to turn them into full qubes failed; they started (but nothing happened) and then shut down. However, any new preloaded DispVMs (post-suspend) that took the place of the pre-suspend ones worked fine once started as full qubes.

Hope that makes sense.

Thank you. After installing it, the last step 5 (-x) ended as expected.

Any ideas/suggestions on how to solve the SELinux issue with the new kernel?

This might be it: Add global preload to template gathering by ben-grande · Pull Request #718 · QubesOS/qubes-core-admin · GitHub

I’m having an issue with the latest debian-13-minimal template in 4.3 RC1. It does not give me a working network connection after salting the usual packages that I’ve configured. A couple of weeks ago, with the previous version, networking was fine using the same config.

On closer inspection of my sys-net AppVM, which is based on this minimal template, I can see multiple failures to initialise the wpa_supplicant service of NetworkManager… restarting NM has no effect.

Does anyone know what differences might exist between the current and previous template versions? I know trixie has only just gone live, and maybe I’m now missing package(s) that I previously did not need?

Unrelated to those issues?

Sorry, Ben, for the late reply, but if you want me to test this, unfortunately I don’t think my technical ability is up to it. But thank you for the fix.

I’ve installed R4.3.0-rc1 on a few systems using the defaults (Fedora for sys-xyz), and a couple of items of note:

  1. When “Qubes Update” runs, it removes my AX210 card and Wi-Fi. A reboot is a simple fix.
  2. On an 8 GB i7 system it was too slow to be usable, so a minimum recommendation of 12 or 16 GB is warranted.

It definitely seems faster, though I haven’t run benchmarks, and the desktop environment is much cleaner. Quite a nice upgrade.

What did you have running?

Very little. Firefox in a Fedora qube and another Debian qube looking at files.