Upgrade 4.2 --> 4.3: Documentation of difficulties (`vm-pool` needing manual intervention) and lingering `salt` issues (`ext_pillar interfaces unavailable`)

I upgraded via the two-stage route: `qubes-dist-upgrade --releasever=4.3 --all-pre-reboot`, a manual reboot, then `qubes-dist-upgrade --releasever=4.3 --all-post-reboot`.

Issues encountered were:

  1. multiple calls to `qubes-dist-upgrade --releasever=4.3 --all-pre-reboot` were necessary before a final success was reported. In my interpretation, the reruns were needed because the network connection appeared unstable. I did not notice any error messages in the last pass. However,

  2. rebooting (which yielded a booting system with qubesd-related errors on the boot console) and calling `qubes-dist-upgrade --releasever=4.3 --all-post-reboot` made it apparent that something had gone wrong: no VM (sys-net etc.) was starting, due to LVM-related errors, and I needed to run

    lvchange -an /dev/qubes_dom0/*
    lvconvert --repair qubes_dom0/vm-pool
    lvchange -ay /dev/qubes_dom0/*
    

    after which `qubes-dist-upgrade --releasever=4.3 --all-post-reboot` ran through, with

  3. the following salt-related errors remaining:

    ...
    ---> (STAGE 5) Cleaning up salt
    Error on ext_pillar interface qvm_prefs is expected
    local:
        True
    [CRITICAL] Specified ext_pillar interface qvm_features is unavailable
    [CRITICAL] Specified ext_pillar interface qvm_prefs is unavailable
    [CRITICAL] Specified ext_pillar interface qvm_tags is unavailable
    ...
    

I am at a loss how to fix this.
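For what it's worth, the `qvm_prefs`/`qvm_features`/`qvm_tags` names in the CRITICAL lines are pillar interfaces declared under an `ext_pillar:` key in the salt configuration, so one place to look is wherever that key is set. I don't know the exact file layout in a 4.3 dom0, so the snippet below only greps a made-up sample config of the shape I would expect (the file content is an assumption, not taken from a real system); on a real dom0 something like `grep -rn ext_pillar /etc/salt/` would be the equivalent:

```shell
# Made-up sample of the ext_pillar declaration I would expect to find
# somewhere under /etc/salt/ in dom0 (exact file unknown to me):
conf=$(mktemp)
cat > "$conf" <<'EOF'
ext_pillar:
  - qvm_prefs: []
  - qvm_features: []
  - qvm_tags: []
EOF
# Count how many qvm_* interfaces the sample declares:
declared=$(grep -c 'qvm_' "$conf")
echo "declared qvm_* ext_pillar interfaces: $declared"
rm -f "$conf"
```

If the interfaces are declared in the config but salt still reports them unavailable, the pillar modules themselves are presumably missing or failing to load.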
`qubesctl sys.list_functions | grep qvm` (note this lists execution-module functions, a different salt plugin type than the pillar interfaces named in the errors) produces

    - qvm.check
    - qvm.clone
    - qvm.create
    - qvm.devices
    - qvm.features
    - qvm.firewall
    - qvm.is_halted
    - qvm.is_paused
    - qvm.is_running
    - qvm.kill
    - qvm.pause
    - qvm.prefs
    - qvm.remove
    - qvm.run
    - qvm.service
    - qvm.shutdown
    - qvm.start
    - qvm.state
    - qvm.tags
    - qvm.template_info
    - qvm.template_install
    - qvm.unpause

and `dnf list | grep salt`

qubes-mgmt-salt.noarch                         4.2.3-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-admin-tools.noarch             4.2.3-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-base.noarch                    4.3.1-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-base-config.noarch             4.1.2-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-base-topd.noarch               4.3.2-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-config.noarch                  4.2.3-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-dom0.noarch                    4.2.3-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-dom0-qvm.noarch                4.3.5-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-dom0-update.noarch             4.3.3-1.fc41                         qubes-dom0-cached
qubes-mgmt-salt-dom0-virtual-machines.noarch   4.3.13-1.fc41                        qubes-dom0-cached
salt.noarch                                    3007.8-1.fc41                        qubes-dom0-cached
salt-minion.noarch                             3007.8-1.fc41                        qubes-dom0-cached

Any hints?

`qubesctl saltutil.sync_all` produces

 local:
    ----------
    beacons:
    clouds:
    engines:
    executors:
    grains:
    log_handlers:
    matchers:
    modules:
    output:
    pillar:
    proxymodules:
    renderers:
    returners:
    sdb:
    serializers:
    states:
    thorium:
    tops:
    utils:
    wrapper:

and does not fix the errors that `qubes-dist-upgrade --releasever=4.3 --all-post-reboot` reports.
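As far as I understand `saltutil.sync_all`, it only copies custom modules out of the fileserver directories (`salt://_modules`, `salt://_pillar`, and so on) into the minion cache, so the all-empty output above just means no custom modules were found; it was never going to restore the `qvm_*` ext_pillar interfaces, which presumably ship with the qubes-mgmt-salt-* packages rather than as custom modules. A toy illustration of what the sync scan looks at (the directory layout is illustrative, not the real dom0 paths):

```shell
# Toy illustration: sync_all scans the file roots for _pillar/, _modules/,
# etc. and picks up anything it finds there; empty dirs yield empty output.
root=$(mktemp -d)
mkdir -p "$root/_pillar" "$root/_modules"
found=$(find "$root/_pillar" "$root/_modules" -name '*.py' | wc -l)
echo "custom modules found: $found"          # nothing there yet
# A custom pillar module dropped here would be picked up on the next sync:
touch "$root/_pillar/my_pillar.py"           # my_pillar.py is a made-up name
found=$(find "$root/_pillar" -name '*.py' | wc -l)
echo "custom pillar modules found: $found"
rm -rf "$root"
```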

`qubes-dom0-update --action=reinstall salt salt-minion` doesn’t help either.

I hit the LVM issue on my in-place upgrade attempt from 4.2 to 4.3 as well. I thought it was because I had recently moved to a bigger NVMe and played with the LVM sizes, but if you didn’t move to a bigger NVMe/SSD recently, there’s clearly something off with the handling of the LVM volumes during in-place upgrades.

Might be unrelated.
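If it helps to compare notes: after `lvconvert --repair`, the pool state can be read off the `lv_attr` field from `lvs` (the fifth character `a` means active). I can't paste live output in this post, so the snippet below parses a made-up sample line of the kind `sudo lvs -o lv_name,lv_attr --noheadings qubes_dom0/vm-pool` prints; the sample value is an assumption, not output from an actual system:

```shell
# Parse a made-up sample line of the kind
#   sudo lvs -o lv_name,lv_attr --noheadings qubes_dom0/vm-pool
# prints; on an actual system, pipe the real lvs output in instead.
sample='  vm-pool twi-aotz--'
attr=$(echo "$sample" | awk '{print $2}')
state=$(echo "$attr" | cut -c5)   # 5th char of lv_attr is the state field
if [ "$state" = "a" ]; then
  echo "vm-pool is active"
else
  echo "vm-pool is NOT active - try lvchange -ay again"
fi
```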

Are you using Debian 13? I have encountered several salt-related issues when using salt with Deb13.

@neoniobium: in my interpretation, the failures are in dom0.

I can add that I have now upgraded a second system (which, unlike the first one, never saw any salt customization, so is entirely vanilla) and hit exactly the same salt-related errors reported above. So this is a thing.

@balin1 Is the LVM error you saw the same as this:

@stage3help: Indeed.

  1. the following salt-related errors remaining:
    ...
    ---> (STAGE 5) Cleaning up salt
    Error on ext_pillar interface qvm_prefs is expected
    local:
        True
    [CRITICAL] Specified ext_pillar interface qvm_features is unavailable
    [CRITICAL] Specified ext_pillar interface qvm_prefs is unavailable
    [CRITICAL] Specified ext_pillar interface qvm_tags is unavailable
    ...
    

I am at a loss how to fix this.

I guess these errors are expected, as the note `Error on ext_pillar interface qvm_prefs is expected` states?