Scripting template admin commands that require no running user processes

In a template I want to change the user ID from 1000 to something else:

root@templatevm:~# usermod -u 1005 user
usermod: user user is currently used by process 691
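
A quick way to see what usermod is complaining about is to list the processes owned by the account first. A minimal sketch using plain ps (nothing Qubes-specific; run as root inside the template, with "user" being the account from the error above):

```shell
# Show PID and command name for every process owned by an account.
list_procs() { ps -o pid=,comm= -u "$1"; }

# e.g. for the account blocking usermod:
list_procs user
```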

Ok. Maybe I could step back to single-user mode instead of playing whack-a-mole with user processes that auto-restart:

[user@dom0 ~]$ qvm-run --pass-io -u root templatevm "systemctl isolate rescue.target"
[user@dom0 ~]$ qvm-run --pass-io -u root templatevm "ls"
2025-11-16 11:33:24.521 qrexec-client[408964]: qrexec-daemon-common.c:232:connect_unix_socket: connect /var/run/qubes/qrexec.templatevm: No such file or directory
2025-11-16 11:33:24.521 qrexec-client[408964]: qrexec-daemon-common.c:29:negotiate_connection_params: write daemon: Bad file descriptor

… Not too surprising this didn’t work (even with --no-gui).

What does work is to reboot, log in to a root console with qvm-console-dispvm, and type my admin commands there. But that isn't scriptable, as far as I can tell. I could also bypass usermod and/or groupmod and edit /etc/passwd and /etc/group directly:

qvm-run --pass-io -u root templatevm "sed -r -i 's/^user:x:1000:1000:(.*)/user:x:1005:1005:\1/' /etc/passwd"
qvm-run --pass-io -u root templatevm "sed -r -i 's/^user:x:1000:/user:x:1005:/' /etc/group"

But that’s hacky and gross and surely incomplete.
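
At minimum, the sed expression can be dry-run against a sample passwd line before running it with -i on the real /etc/passwd. A small sanity-check sketch (same 1000 → 1005 change as above):

```shell
# Feed the expression a representative line instead of the live file;
# the captured group (\1) should carry the GECOS/home/shell fields through.
printf 'user:x:1000:1000:user:/home/user:/bin/bash\n' \
  | sed -r 's/^user:x:1000:1000:(.*)/user:x:1005:1005:\1/'
# -> user:x:1005:1005:user:/home/user:/bin/bash
```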

So what would be the right way to approach this instead?

Using salt?


You’re surely right. I’ve been able to evade salt for template shaping and qube orchestration for years now through abuse of qvm-run. I like the non-abstraction, easy debugging, and composability of shell scripting, but there is a limit, and this at least rubs up against it.

Anyway, here’s a race-y and wrong way to do it that works:

change-user-id.bash

#!/bin/bash

set -euo pipefail
set -x

QUBE="$1"
NEW_ID="$2"

script=$(
cat <<'END'
#!/bin/bash

set -euo pipefail
set -x

NEW_ID="$1"

# Session IDs belonging to "user" (loginctl columns: SESSION UID USER ...)
sessions="$( \
  loginctl --no-legend list-sessions \
  | awk '{print $1, $3}' \
  | grep '\<user\>' \
  | awk '{print $1}'
)"
[ "$sessions" ] && loginctl terminate-session $sessions
sleep 2
# Ask nicely first, then force-kill anything still running as "user"
pkill -u user || true
sleep 2
pkill -u user --signal KILL || true
sleep 2
usermod -u "$NEW_ID" user
groupmod -g "$NEW_ID" user
# Files keep the old numeric owner, so re-own the home directory
chown -R user:user /rw/home/user

END
)
#
echo "$script" | qvm-run -p -u root "$QUBE" 'cat > /root/temp.sh'
qvm-run -p -u root "$QUBE" 'chmod u+x /root/temp.sh'

qvm-run -p -u root "$QUBE" "systemd-run /root/temp.sh $NEW_ID"
sleep 20

qvm-shutdown --wait "$QUBE"
qvm-start "$QUBE"
qvm-run -p -u root "$QUBE" "rm /root/temp.sh"

Run:

[user@dom0 ~]$ ./change-user-id.bash <template> <new user/group ID number>
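
A quick post-flight check might pull the passwd entry back out and compare the UID field. A hypothetical helper (the commented qvm-run line mirrors the pattern used above; 1005 stands in for the new ID):

```shell
# Extract the numeric UID (third colon-separated field) from a passwd entry.
uid_of() { printf '%s' "$1" | cut -d: -f3; }

# In dom0, after the script and reboot:
#   entry=$(qvm-run -p -u root "$QUBE" 'getent passwd user')
#   [ "$(uid_of "$entry")" = 1005 ] && echo OK
uid_of 'user:x:1005:1005:user:/home/user:/bin/bash'   # -> 1005
```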

Salt it like this:

/srv/salt/test/configure.sls:

logout_user:
  cmd.run:
    - name: "pkill -9 -u user || true"
 
change_useruid:
  user.present:
    - name: user
    - home: /home/user
    - allow_uid_change: True
    - uid: 1010
 
change_directory_owner:
  file.directory:
    - name: /home/user
    - user: user
    - recurse:
      - user

sudo qubesctl --skip-dom0 --show-output --targets=TARGET_LIST state.apply test.configure

I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.


That’s a lot cleaner.

I wonder what the equivalent Ansible solution would look like. Not that a third solution is needed, but it would be interesting in a Rosetta Stone sense.

Something like change_uid.yaml:

---
- hosts: testqube
  connection: local
  tasks:
      - name: Kill user session
        ansible.builtin.shell: |
          pkill -u user || true
      - name: Change user uid
        ansible.builtin.user:
          name: user
          uid: 1111
      - name: Set file ownership
        ansible.builtin.file:
          path: /home/user
          state: directory
          recurse: yes
          owner: user

ansible-playbook -i inventory change_uid.yaml --user=root
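
(The `-i inventory` above presupposes an inventory file the post doesn't show. A minimal hypothetical one would simply name the host from the play; real Qubes setups, e.g. with the third-party ansible-qubes connection plugin, would add connection variables here:)

```ini
# inventory — hypothetical minimal file naming the play's target
testqube
```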


Thanks for humoring the question @unman!

So the code/config is really quite similar, which makes sense: both are YAML-ish, and the material distinctions between the technologies are largely backend/architectural. :+1:
