Split-OpenSnitch with Per-VM Identity (qubes-opensnitch-pipes)

Hi everyone,

I wanted to share a project I have been working on to solve the “identity crisis” when running OpenSnitch in a split configuration on Qubes OS.

The Problem:
When you connect multiple AppVMs to a central OpenSnitch UI VM, all traffic usually appears as coming from localhost. This makes it impossible to distinguish which VM is triggering a rule or requesting access.

The Solution:
I have created a set of helper scripts: qubes-opensnitch-pipes.

The project uses a client/server model (“pipe” and “piped”) to tunnel traffic through unique ports and dummy network interfaces. This allows the OpenSnitch UI to see a unique IP address for every AppVM (e.g., sys-net appears as 127.0.0.3, vault as 127.0.0.2, etc.).
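Conceptually, the piped daemon on the UI side binds a separate loopback address per VM and relays the bytes to the local OpenSnitch port, so the UI sees each VM under its own source address. The sketch below is not the project's code, just an illustration of that idea with stand-in port numbers and a dummy echo server in place of the real UI:

```python
import socket
import threading

# Stand-in addresses: the real setup uses port 50051 for the UI and a
# per-VM loopback alias from config.json (e.g. 127.0.0.3 for sys-net).
UI_ADDR = ("127.0.0.1", 56051)
VM_ADDR = ("127.0.0.2", 56052)

def echo_ui(ready):
    # Dummy stand-in for opensnitch-ui: echoes one message back.
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(UI_ADDR)
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

def pipe_one(ready):
    # Accept one connection on the per-VM loopback address and relay it,
    # so the "UI" sees traffic arriving from a distinct source address.
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(VM_ADDR)  # any 127.0.0.0/8 address binds locally on Linux
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn, socket.create_connection(UI_ADDR) as upstream:
            upstream.sendall(conn.recv(1024))
            conn.sendall(upstream.recv(1024))

ui_ready, pipe_ready = threading.Event(), threading.Event()
threading.Thread(target=echo_ui, args=(ui_ready,), daemon=True).start()
ui_ready.wait()
threading.Thread(target=pipe_one, args=(pipe_ready,), daemon=True).start()
pipe_ready.wait()

# A "node" connects through the pipe and gets its message echoed back.
with socket.create_connection(VM_ADDR) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
print(reply)  # b'hello'
```

The sketch relies on the fact that the entire 127.0.0.0/8 range is locally bindable on Linux; the real project layers qrexec tunneling and dummy interfaces on top of this basic relay idea.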

Status & Compatibility

I have been using this setup on Qubes 4.2 and currently on 4.3.

  • Reliability: It has served me well for daily use, though it is still somewhat experimental.
  • Complexity: It takes a little work to set up initially.
  • Disposables: Managing persistent rules for Disposable VMs can be a bit tricky, but it is doable.

Setup Guide

Here is a summary of how to get it running. For the most up-to-date instructions and source code, please check the GitHub Repository.

Prerequisites

  • Qubes OS
  • OpenSnitch installed on the TemplateVM used by your nodes and the UI VM.

Remember to disable the opensnitch systemd service (systemctl disable opensnitch) and remove /etc/xdg/autostart/opensnitch_ui.desktop; we will handle these steps differently below.


1. Qubes Policy Configuration (dom0)

You must allow TCP connections between your nodes and the UI VM.

Edit /etc/qubes/policy.d/30-opensnitch.policy in dom0:

# OpenSnitch node connections
# 50050 is the handshake/control port
qubes.ConnectTCP +50050 @tag:snitch @default allow target=snitch-ui

# 50052+ are the data ports (one per node slot)
qubes.ConnectTCP +50052 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50053 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50054 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50055 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50056 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50057 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50058 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50059 @tag:snitch @default allow target=snitch-ui

Replace snitch-ui with the actual name of your UI AppVM.

Tagging VMs

Apply the snitch tag to any AppVM that should send data to the UI:

qvm-tags [VM_NAME] add snitch

2. Server Setup (UI VM)

Configure the AppVM where opensnitch-ui will run.

Configuration Map

Create a config file to map your VMs to specific loopback IP addresses. This ensures that sys-net always appears as 127.0.0.3 (for example).

mkdir -p ~/.config/qubes-opensnitch-piped
nano ~/.config/qubes-opensnitch-piped/config.json

Example config.json:

{
    "vault": "127.0.0.2",
    "sys-net": "127.0.0.3",
    "sys-usb": "127.0.0.4",
    "gpu-personal": "127.0.0.5",
    "sys-protonvpn": "127.0.0.6",
    "test": "127.0.0.7"
}
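Before pointing the daemon at a mapping like this, it can be worth a quick sanity check. The helper below is not part of the project; it just enforces what the example above implies: every address is a unique loopback IP, and 127.0.0.1 is left alone (assuming it is used by local services on the UI VM):

```python
import ipaddress
import json

def check_piped_config(text):
    """Sanity-check a VM-name -> loopback-IP mapping (hypothetical helper)."""
    mapping = json.loads(text)
    seen = set()
    for vm, addr in mapping.items():
        ip = ipaddress.ip_address(addr)
        if not ip.is_loopback:
            raise ValueError(f"{vm}: {addr} is not a loopback address")
        if addr == "127.0.0.1":
            raise ValueError(f"{vm}: leave 127.0.0.1 to local services")
        if addr in seen:
            raise ValueError(f"{vm}: duplicate address {addr}")
        seen.add(addr)
    return mapping

cfg = '{"vault": "127.0.0.2", "sys-net": "127.0.0.3"}'
print(sorted(check_piped_config(cfg)))  # ['sys-net', 'vault']
```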

Enable the Helper Service

Create the user-level systemd service file:
File: ~/.config/systemd/user/qubes-opensnitch-piped.service

[Unit]
Description=Qubes OpenSnitch Piped Connector
After=graphical-session.target

[Service]
ExecStart=/usr/local/bin/qubes-opensnitch-piped -ad \
    -c /home/user/.config/qubes-opensnitch-piped/config.json
Restart=on-failure

[Install]
WantedBy=default.target

Enable the user-level systemd service for the pipe daemon:

systemctl --user enable --now qubes-opensnitch-piped

Start OpenSnitch UI

Ensure the standard OpenSnitch UI is listening on all interfaces (or specifically the IPv6 wildcard) so it can accept the forwarded traffic. Add this to your /rw/config/rc.local or autostart:

# Start local daemon
systemctl enable --now opensnitch

# Start UI listening on port 50051
opensnitch-ui --socket "[::]:50051" &

3. Client Setup (Node VMs)

There are two ways to configure the nodes: via the TemplateVM (System-wide) or per AppVM (User mode).

Pre-requisite: Persistence

Since AppVMs reset /etc on reboot, you must symlink the rules directory to a persistent location. Run this on your TemplateVM:

mkdir -p /rw/config/opensnitchd/rules
rm -rf /etc/opensnitchd/rules
ln -s /rw/config/opensnitchd/rules /etc/opensnitchd/rules

Option A: System-wide Service (Recommended for TemplateVMs)

This method allows you to deploy the script in a TemplateVM but only activate it on specific hosts (e.g., sys-net).

File: /etc/systemd/system/qubes-opensnitch-pipe.service

[Unit]
Description=Qubes OpenSnitch Pipe
After=graphical-session.target
# Only start on these specific VMs:
ConditionHost=|sys-net
ConditionHost=|sys-usb

[Service]
ExecStart=/usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules
Restart=on-failure
KillMode=process

[Install]
WantedBy=default.target

Note: The -sp flag adds a prefix to the rules directory (e.g., /rw/config/opensnitchd/rules.sys-net), allowing different rule sets for different VMs.
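The prefix resolution can be sketched as follows (an illustrative reconstruction of the described behavior, not the script's actual code): the script appends ".<hostname>" to the given path and fails early if that directory does not exist.

```python
import errno
import os
import socket
import tempfile

def rules_dir_prefix(path, hostname=None):
    # Append ".<hostname>" to the prefix; the resulting directory must
    # already exist -- the script does not create it for you.
    host = hostname or socket.gethostname()
    full = f"{path}.{host}"
    if not os.path.isdir(full):
        raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), full)
    return full

# Demo in a throwaway directory standing in for /rw/config/opensnitchd:
with tempfile.TemporaryDirectory() as tmp:
    prefix = os.path.join(tmp, "rules")
    os.mkdir(prefix + ".sys-net")  # the per-host source dir must pre-exist
    print(os.path.basename(rules_dir_prefix(prefix, "sys-net")))  # rules.sys-net
```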

Option B: User Service (AppVM Specific)

If you prefer configuring per AppVM (requires sudo privileges):

File: ~/.config/systemd/user/qubes-opensnitch-pipe.service

[Unit]
Description=Qubes OpenSnitch Pipe
After=graphical-session.target

[Service]
ExecStart=/usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules
Restart=on-failure
KillMode=process

[Install]
WantedBy=default.target

Enable it: systemctl --user enable --now qubes-opensnitch-pipe


I successfully set up the server using qubes-opensnitch-piped, but I am having difficulty with the client (node) implementation.

First, I modified the following:

mkdir -p /rw/config/opensnitch/rules  #in Pre-requisite: Persistence
# changed to
mkdir -p /rw/config/opensnitchd/rules 
# to be consistent with
# File: ~/.config/systemd/user/qubes-opensnitch-pipe.service ...
ExecStart=/usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules  

But a directory created in a template (e.g., debian-13-min) is not inherited under /rw/config/ in the AppVM, so I interpreted “TemplateVM” to mean the disposable template. Creating the directory and symbolic link in a disposable template worked fine. However, enabling the service in the disposable template (Option B, AppVM) with systemctl --user enable --now qubes-opensnitch-pipe throws the following error:

~$ systemctl --user enable --now qubes-opensnitch-pipe
Failed to enable unit: Unit /home/user/.config/systemd/user/default.target.wants/qubes-opensnitch-pipe.service does not exist

Running the qubes-opensnitch-pipe script manually also throws an error (see below), notably ending with:
FileNotFoundError: [Errno 2] No such file or directory: '/rw/config/opensnitchd/rules.<appVM/disposableTemplate name>'.
I verified the existence of /rw/config/opensnitchd/rules, so I’m not sure what to make of this error.

error details (manual)
user@dvm-test-node-8:~$ /usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules
Traceback (most recent call last):
  File "/usr/local/bin/qubes-opensnitch-pipe", line 179, in <module>
	args = parser.parse_args()
  File "/usr/lib/python3.13/argparse.py", line 1912, in parse_args
	args, argv = self.parse_known_args(args, namespace)
				 ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/argparse.py", line 1922, in parse_known_args
	return self._parse_known_args2(args, namespace, intermixed=False)
		   ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/argparse.py", line 1951, in _parse_known_args2
	namespace, args = self._parse_known_args(args, namespace, intermixed)
					  ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/argparse.py", line 2202, in _parse_known_args
	start_index = consume_optional(start_index)
  File "/usr/lib/python3.13/argparse.py", line 2126, in consume_optional
	take_action(action, args, option_string)
	~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/argparse.py", line 2012, in take_action
	argument_values = self._get_values(action, argument_strings)
  File "/usr/lib/python3.13/argparse.py", line 2532, in _get_values
	value = self._get_value(action, arg_string)
  File "/usr/lib/python3.13/argparse.py", line 2565, in _get_value
	result = type_func(arg_string)
  File "/usr/local/bin/qubes-opensnitch-pipe", line 105, in rules_dir_prefix
	raise FileNotFoundError(
		errno.ENOENT, os.strerror(errno.ENOENT), path)
FileNotFoundError: [Errno 2] No such file or directory: '/rw/config/opensnitchd/rules.dvm-test-node-8'

# but... verifying existence
user@dvm-test-node-8:~$ ls -la /rw/config/opensnitchd/
total 12
drwxr-xr-x 3 root root 4096 Dec 11 09:16 .
drwxr-xr-x 3 root root 4096 Dec 11 09:16 ..
drwxr-xr-x 2 root root 4096 Dec 11 09:16 rules

Note: I also edited the OP to correct a small syntax error with qvm-tag.

When you use -sp /rw/config/opensnitchd/rules, it tries to use the /rw/config/opensnitchd/rules.[hostname] directory as the source for the rules. If the host is a disposable, the directory is /rw/config/opensnitchd/rules.disp.

  -sp, --rules-prefix RULES_PREFIX
                        rules directory prefix (".$hostname" will be added to
                        this path)

These rules get copied to the real rules directory, which by default is /etc/opensnitchd/rules and may be a symbolic link to /rw/config/opensnitchd/rules.

  -d, --rules-dest RULES_DEST
                        define OpenSnitch rules directory (default:
                        /etc/opensnitchd/rules)

So if I understand correctly, what is missing is the per-host rules source directory:

  • /rw/config/opensnitchd/rules.$(hostname) for AppVMs, or
  • /rw/config/opensnitchd/rules.disp for DisposableVMs

This directory is intended to contain all rules for that host (or all disposables), and its contents are copied to the actual OpenSnitch rules directory when the script runs.
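In other words, the node-side flow described above can be sketched like this (illustrative only, not the actual script; the .disp fallback follows the description above):

```python
import os
import shutil
import tempfile

def copy_host_rules(prefix, dest, hostname, disposable=False):
    # Pick the per-host source directory ("<prefix>.disp" for disposables,
    # "<prefix>.<hostname>" otherwise) and copy its contents into the
    # directory opensnitchd actually reads (the -d option, default
    # /etc/opensnitchd/rules).
    src = f"{prefix}.disp" if disposable else f"{prefix}.{hostname}"
    os.makedirs(dest, exist_ok=True)
    for name in os.listdir(src):
        shutil.copy2(os.path.join(src, name), os.path.join(dest, name))
    return src

# Demo with temporary stand-ins for the real paths:
with tempfile.TemporaryDirectory() as tmp:
    prefix = os.path.join(tmp, "rules")
    os.mkdir(prefix + ".sys-net")
    with open(os.path.join(prefix + ".sys-net", "allow-dns.json"), "w") as f:
        f.write("{}")
    dest = os.path.join(tmp, "live-rules")
    copy_host_rules(prefix, dest, "sys-net")
    print(os.listdir(dest))  # ['allow-dns.json']
```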

Edit: which in your case is '/rw/config/opensnitchd/rules.dvm-test-node-8', based on that error. I should probably make those errors friendlier and clearer, and the whole script more robust overall.

I did edit the original post; the original configuration directory was meant to be /rw/config/opensnitchd, but sure, it could be configured differently too.

Correct, in this case for the AppVM following Option B. I was able to make progress by running the script without the -sp flag. It seems the problem lies in how the script handles the hostname, because it does not create the new /rw/config/opensnitchd/rules.<hostname>/ path and fails there.

The script is not meant to create that directory; rather, it expects it to exist already. Using -sp is meant to help, for example, when a dvm template holds rules for multiple disposables. If you only use it for one AppVM, you don’t necessarily need it at all. The source directory can be empty, but it is expected to exist.

As I said earlier, I probably do need to improve those errors and make the script more robust, but if you just create that source directory for the rules, it should work.

For example, I have system-dvm, which is the default disposable template for my sys-* services. In the /rw/config/opensnitchd directory I have the subdirectories rules, rules.sys-net, and rules.sys-usb.

rules.sys-net holds the rules for sys-net, and rules.sys-usb holds the rules for sys-usb.

I run the script with systemd: /usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules

When started on sys-net, it will copy the rules from rules.sys-net to the rules directory. That source directory is expected to exist; it holds your rules for that host, which are then copied into use.

Good to know. Since the instructions for Option B (AppVM) read

#Prerequisite
mkdir -p /rw/config/opensnitchd/rules
...
#Option B
ExecStart=/usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules

I assumed the script would handle creating the directory with .<hostname> appended.

Lots to take in, but I can already appreciate how valuable this service will be in making OpenSnitch nodes viable within Qubes. Really great work on your part!

Thank you. I’m just happy if it’s useful.

Option A had this note, which is not mentioned in Option B; it could have helped, but it should probably be expressed more clearly.

I just tried to fit in different kinds of examples to give some idea of how it could be used, which ended up making the instructions less clear and harder to follow.

Explaining your setup was very helpful. I’d definitely appreciate concrete examples of how your system works; referencing them while working through my own setup would be invaluable.
