Qubes OS 4.2 nftables / nft firewall guide

Hello, I’m desperately trying to configure nftables and I would really appreciate your help.

I’m trying to set up a VNC connection over LAN with another physical Debian device.

It was pretty simple with iptables:

VNC server on physical device → AppVM with VNC client → netVM which sets up the LAN connection → sys-firewall

  1. connect both devices with LAN cable
  2. connect ethernet controller to netVM
  3. enable options “provide network” & “network-manager”
  4. create ethernet connection in network-manager, IPv4 Settings - Share to other computers
  5. check if it works on the physical device with the command “ip a” and try to use a browser → if everything is ok, you now have an internal IP address and internet works in the browser
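Step 5 could be sketched like this (a rough check; the interface name enp3s0 and the 10.42.0.0/24 range are assumptions, the latter being NetworkManager’s usual default for shared mode):

```
# On the physical Debian device (interface name is an example):
ip a show enp3s0        # expect an address in 10.42.0.0/24,
                        # NetworkManager's default shared-mode range
ping -c 3 10.42.0.1     # the sharing side normally takes the .1 address
```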

The first problem appeared at this stage. It worked flawlessly with iptables (you can see I don’t even interact with it now), but it simply doesn’t work with nftables.

However, I was able to solve it by doing this:

nft flush ruleset

table ip filter {
	chain output {
		type filter hook output priority 100; policy accept;
	}

	chain input {
		type filter hook input priority 100; policy accept;
	}

	chain forward {
		type filter hook forward priority 100; policy accept;
	}
}

That is absolutely not an elegant solution, but it works. I would like to hear what the right approach to it is.

Next problem:

Now in netVM I would use a single iptables rule:

iptables -I FORWARD -i vif+ -o *ethernet controller name* -j ACCEPT

Then I would simply go to AppVM and ssh to physical device with command:

ssh *internal ip of physical device* -L 9901:localhost:5901

And that’s it. Then I could use VNC client and finish my setup.

So, now I’ve tried to use iptables-translate and got this:

nft insert rule ip filter FORWARD iifname "vif*" oifname "ethernet controller name" counter accept

Unfortunately, it doesn’t work, and ssh simply gives up on the connection with “connection timed out”.

I’m stuck at this stage because nftables is too hard for me (I was barely able to use iptables), I have zero idea how the Qubes firewall works now, and I’m honestly very disappointed with this decision.

I don’t understand where the VNC server is running: in a qube, or on another computer not running Qubes OS?

Why don’t you use the default sys-net qube for networking?

If I understand correctly, you use ssh from an AppVM to connect to the remote VNC server (not running Qubes OS) to make a tunnel to connect to VNC in the AppVM? If so, you should not have to do anything with nftables, because you have no inbound connection within Qubes OS.


That is absolutely not an elegant solution, but it works. I would like to hear what the right approach to it is.

The right approach is to clarify the actual goal first, then proceed from there.

What you show allows all connections on the input, forward and output hooks for IPv4. Try setting the policies to drop and adding log prefix "Dropped in <chainname>: " in each chain. Run journalctl -kf in a console in your firewall qube and watch what happens when you try to connect. That will give you an idea of what you need to allow selectively.
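Applied to the chains from the earlier post, that suggestion could look roughly like this (a debugging sketch, not a permanent ruleset; with policy drop and no accept rules, traffic through these hooks will stop until you add allows):

```
table ip filter {
	chain input {
		type filter hook input priority 100; policy drop;
		# log is non-terminating, so every packet that will
		# hit the drop policy shows up in the kernel log first
		log prefix "Dropped in input: "
	}
	chain forward {
		type filter hook forward priority 100; policy drop;
		log prefix "Dropped in forward: "
	}
	chain output {
		type filter hook output priority 100; policy drop;
		log prefix "Dropped in output: "
	}
}
```

Then watch the log with journalctl -kf while reproducing the connection attempt.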

Check also:

https://wiki.nftables.org/wiki-nftables/index.php/Ruleset_debug/tracing

I’m stuck at this stage because nftables is too hard for me (I was barely able to use iptables), I have zero idea how the Qubes firewall works now, and I’m honestly very disappointed with this decision.

Networking is a huge subject. Nobody can grasp it in a few minutes. Learning step by step is the only way.

Qubes firewall is a set of tables and chains. You should not modify anything you don’t understand. Chains custom-input and custom-forward are the ones where you generally put your own rules.
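For instance, a rule in custom-input might look like this (a sketch; the port number 5901 is only an illustration for the VNC case):

```
# allow an inbound connection on TCP port 5901 via the qubes table's
# custom-input chain (Qubes OS 4.2)
nft add rule ip qubes custom-input tcp dport 5901 ct state new counter accept
```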


Thank you for your answer!

I don’t understand where the VNC server is running: in a qube, or on another computer not running Qubes OS?

The VNC server is running on another, non-Qubes computer.

why don’t you use the default sys-net qube for networking?

I guess it might increase security somehow and it really shouldn’t be an issue

if I understand, you use ssh from an AppVM to connect to the remote VNC server, not running Qubes OS, to make a tunnel to connect to VNC in the AppVM?

Right; however, that remote VNC server is in the same network as the AppVM (via LAN).

If so, you should not have to do anything with nftables because you have no inbound connection within Qubes OS.

Well, I’ve found a solution directly with nftables.

All I had to do was:

nft add rule ip qubes input iifname "ethernet controller name" accept

and delete one line from the table created by NetworkManager:

table ip nm-shared-*ethernet controller name* {

	chain filter_forward {
		oifname "ethernet controller name" reject  ← **delete this line**

That is all; now my setup is working as usual. This might be useful for anyone who wants to set up a LAN connection in Qubes OS.

Anyways, now I’m stuck at another problem and would appreciate any help:

I’m using redsocks and have no problem making it work inside the netVM. However, I also want to route all traffic of an AppVM connected to that netVM via redsocks.

Previously, I would use this:

# Any incoming tcp connection on vif+ should be redirected to the REDSOCKS chain
sudo iptables -t nat -A PREROUTING --in-interface vif+ -p tcp -j REDSOCKS

# Any incoming tcp connection to port 12345 on vif+ should be accepted
sudo iptables -I INPUT -i vif+ -p tcp --dport 12345 -j ACCEPT

Now I’ve tried to create a prerouting chain in my custom table:

table ip nat {
	chain REDSOCKS {
		# hook to the output
		type nat hook output priority 0; policy accept;
		# skip if the user is not uid 1000
		ip protocol tcp skuid != 1000 return
		# skip for local ip ranges
		ip daddr 0.0.0.0/8      return
		ip daddr 10.0.0.0/8     return
		ip daddr 100.64.0.0/10  return
		ip daddr 127.0.0.0/8    return
		ip daddr 169.254.0.0/16 return
		ip daddr 172.16.0.0/12  return
		ip daddr 192.168.0.0/16 return
		ip daddr 198.18.0.0/15  return
		ip daddr 224.0.0.0/4    return
		ip daddr 240.0.0.0/4    return
		# everything else tcp = redirect to redsocks
		ip protocol tcp redirect to 12345
	}
	chain prerouting {
		# hook to prerouting
		type filter hook prerouting priority raw; policy accept;
		iifname "vif+" ip protocol tcp counter jump REDSOCKS
	}
}

However, I get this error: “Could not process rule: Operation not supported”, with REDSOCKS marked as the incorrect part of the string.

As per nftables wiki,

only jump and goto actions to regular chains are allowed.

Regular chains = chains without a hook. REDSOCKS is not a regular chain.

The problem is that I can’t delete this hook in REDSOCKS (type nat hook output priority 0; policy accept;) because that breaks redsocks, and even then I’m still getting the same error. I have no idea how to do this without jump, but I can’t use it.

What is the right way to adapt this to nftables?

I don’t understand why you had any problem with outgoing connections from your AppVM to the VNC server in the LAN, or why it was necessary to allow incoming connections from your LAN.

Regarding your problem with redsocks, I can suggest you take a look at sing-box instead of redsocks.

As per nftables wiki,

only jump and goto actions to regular chains are allowed.

Regular chains = chains without a hook. REDSOCKS is not a regular chain.

The problem is that I can’t delete this hook in REDSOCKS (type nat hook output priority 0; policy accept;) because that breaks redsocks, and even then I’m still getting the same error. I have no idea how to do this without jump, but I can’t use it.

What is the right way to adapt this to nftables?

You can add another chain and have both prerouting and REDSOCKS jump to it, e.g.

table ip nat {
	chain REDSOCKS {
		# hook to the output
		type nat hook output priority 0; policy accept;
		jump @another
	}
	chain prerouting {
		# hook to prerouting
		type filter hook prerouting priority raw; policy accept;
		iifname "vif+" ip protocol tcp counter jump @another
	}
	chain another {
		# skip if the user is not uid 1000
		ip protocol tcp skuid != 1000 return
		# skip for local ip ranges
		ip daddr {
			0.0.0.0/8
			10.0.0.0/8
			100.64.0.0/10
			127.0.0.0/8
			169.254.0.0/16
			172.16.0.0/12
			192.168.0.0/16
			198.18.0.0/15
			224.0.0.0/4
			240.0.0.0/4
		} return
		# everything else tcp = redirect to redsocks
		ip protocol tcp redirect to 12345
	}
}

I have also optimized it a little. You can improve it further with a named set (see the wiki). Instead of iifname "vif+" you can use iifgroup 2 (check the output of ip a).

Thanks @qubist

I had to revert the local IP ranges to their previous state, because they gave me an error (Error: syntax error, unexpected /) for all these addresses. I also had to remove the @ in jump @another for the same reason.

However, this setup doesn’t work: Error: Could not process rule: Operation not supported on ip protocol tcp redirect to 12345.

As far as I understand, a chain can’t use this rule without having an appropriate hook. I’ve tried moving this rule to the REDSOCKS chain, but it doesn’t work. When I set a hook on chain another, I get the error we started from: you can’t jump to a chain with a hook.

My bad. I forgot the commas in the set.
Try this:

#...
		ip daddr {
			0.0.0.0/8,
			10.0.0.0/8,
			100.64.0.0/10,
			127.0.0.0/8,
			169.254.0.0/16,
			172.16.0.0/12,
			192.168.0.0/16,
			198.18.0.0/15,
			224.0.0.0/4,
			240.0.0.0/4
		} return
#...

Of course, the @ is bad syntax too. I guess my workday has been too long and my attention is slipping away. Sorry about that.

Try moving the redirect rule to the prerouting chain (after the jump). If it still doesn’t work for some reason (which I am unable to test right now), check the wiki:

https://wiki.nftables.org/wiki-nftables/index.php/Performing_Network_Address_Translation_(NAT)#Redirect
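For reference, that wiki page shows redirect only inside chains of type nat, which would explain the “Operation not supported” on a filter-type prerouting chain. A sketch adapting it to this thread’s setup (untested here; the vif interface match and port 12345 are carried over from the earlier rules):

```
table ip nat {
	chain prerouting {
		# redirect is only valid in a nat-type chain,
		# so hook prerouting with type nat, not type filter
		type nat hook prerouting priority dstnat; policy accept;
		iifname "vif*" ip protocol tcp redirect to :12345
	}
}
```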

Unfortunately, I get the same error when I move it to the prerouting chain. Checking the wiki.

This gives no error for me:

#!/usr/sbin/nft -f

table ip nat {
        set local_ip_ranges {
                typeof ip daddr
                flags interval
                auto-merge
                elements = {
                        0.0.0.0/8,
                        10.0.0.0/8,
                        100.64.0.0/10,
                        127.0.0.0/8,
                        169.254.0.0/16,
                        172.16.0.0/12,
                        192.168.0.0/16,
                        198.18.0.0/15,
                        224.0.0.0/4,
                        240.0.0.0/4
                }
        }

        chain another {
                # skip if the user is not uid 1000
                ip protocol tcp skuid != 1000 return
                # skip for local ip ranges
                ip daddr @local_ip_ranges return
        }

        chain REDSOCKS {
                # hook to the output
                type nat hook output priority 0; policy accept;
                jump another
                # everything else tcp = redirect to redsocks
                ip protocol tcp redirect to 12345
        }

        chain prerouting {
                # hook to prerouting
                type filter hook prerouting priority raw; policy accept;
                iifgroup 2 ip protocol tcp counter jump another
        }
}

I don’t know what you are actually trying to do with this, but it doesn’t do much per se. The ‘another’ chain has no effect whatsoever - it will return with or without those rules. The prerouting chain does not restrict traffic either. The only rule that does something meaningful is the redirect. IOW, you can reduce the whole thing to:

table ip nat {
        chain REDSOCKS {
                type nat hook output priority 0; policy accept;
                ip protocol tcp redirect to 12345
        }
}

Thank you for your help; unfortunately, this setup doesn’t work either.

I don’t know what you are actually trying to do with this

It’s quite easy to reproduce my problem. Basically, install redsocks in a netVM and then you need to somehow make it work in an AppVM connected to that netVM.

Without the custom rules that I’m trying to set in nftables, the AppVM will get a “non-redsocks IP address” even with redsocks working in the netVM. It was easy to do with iptables, but not now.

Thank you for your help; unfortunately, this setup doesn’t work either.

Doesn’t work != doesn’t work as expected.

I merely optimized your ruleset. I am not claiming it is logically correct (although syntactically it now is).

You need to debug the problem. Try using a trace and add logging. Did you try what I suggested earlier - dropping everything and logging packets while trying to use your software of choice? Then allow only what is necessary.

Without seeing and understanding what is actually happening it is all just a raffle.

It’s quite easy to reproduce my problem. Basically, install redsocks in a netVM and then you need to somehow make it work in an AppVM connected to that netVM.

I don’t have the time for this, sorry. I believe it is off-topic too. Perhaps try to open a thread dedicated to your actual problem. Or ask the redsocks community.

Without the custom rules that I’m trying to set in nftables, the AppVM will get a “non-redsocks IP address” even with redsocks working in the netVM. It was easy to do with iptables, but not now.

You can try in a test VM:

  1. iptables-translate your old rules one by one
  2. Run the commands it outputs
  3. nft list ruleset to see the nftables rules you need
  4. Use the nftables rules in your production VM (optionally, optimize them)
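A sketch of steps 1–2, using one of the rules from earlier in the thread (the exact translated output can vary by iptables-translate version):

```
# translate an old rule; the tool prints the nft equivalent
iptables-translate -I INPUT -i vif+ -p tcp --dport 12345 -j ACCEPT
# prints something like:
#   nft insert rule ip filter INPUT iifname "vif*" tcp dport 12345 counter accept
```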

Last Friday I finally got around to doing an in-place upgrade from Qubes OS 4.1 to 4.2.
On my laptop I have a service qube named fileserver. This qube provides file services to some other App qubes and is currently still based on an old, outdated but heavily customized template, which I need to replace with something newer in the near future.
After reading your very helpful post here, I modified my old, no-longer-working iptables commands and replaced them with nft commands as described. This worked fine in the forwarding case (usually the qube sys-firewall, which I renamed to sys-router); in my case with:

for CLIENT_IP in .... ; do
        nft add rule qubes custom-forward ip saddr $CLIENT_IP \
                                           ip daddr $FILESERVER accept
        nft add rule qubes custom-forward ip saddr $FILESERVER \
                                           ip daddr $CLIENT_IP accept
done

However for the qube fileserver mentioned above I figured out by inspecting the firewall rules using the command

sudo nft list ruleset

that there is no qubes table and no chain named custom-input. So I decided, for the time being, to use this command in my /rw/config/qubes-firewall-user-script instead:

nft insert rule ip filter INPUT index 0 ip saddr 10.137.0.255/24 accept

Afterwards, my Qubes OS internal networking finally worked again as it used to in Qubes OS 4.1.
I would like to thank you and everybody involved. Best regards.


That’s strange.
Maybe your templates didn’t upgrade correctly?
Check what repositories are used in your template and make sure that they use Qubes OS 4.2 repositories.
For Debian it should be something like this:

$ cat /etc/apt/sources.list.d/qubes-r4.list
# Main qubes updates repository
deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] https://deb.qubes-os.org/r4.2/vm bookworm main
#deb-src [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] https://deb.qubes-os.org/r4.2/vm bookworm main

# Qubes updates candidates repository
#deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg] https://deb.qubes-os.org/r4.2/vm bookworm-testing main
#deb-src  [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ]  https://deb.qubes-os.org/r4.2/vm bookworm-testing main

# Qubes security updates testing repository
#deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg] https://deb.qubes-os.org/r4.2/vm bookworm-securitytesting main
#deb-src  [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] https://deb.qubes-os.org/r4.2/vm bookworm-securitytesting main

# Qubes experimental/unstable repository
#deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg] https://deb.qubes-os.org/r4.2/vm bookworm-unstable main
#deb-src  [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] https://deb.qubes-os.org/r4.2/vm bookworm-unstable main


# Qubes Tor updates repositories
# Main qubes updates repository
#deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm main
#deb-src  [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm main

# Qubes updates candidates repository
#deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm-testing main
#deb-src  [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm-testing main

# Qubes security updates testing repository
#deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm-securitytesting main
#deb-src  [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm-securitytesting main

# Qubes experimental/unstable repository
#deb [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm-unstable main
#deb-src  [arch=amd64 signed-by=/usr/share/keyrings/qubes-archive-keyring-4.2.gpg ] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.2/vm bookworm-unstable main

If the destination is a standalone qube created from an ISO (based on Debian bookworm), how can the nft rule be applied, or doesn’t that matter in such a case? I can make connections to the outside from that qube, but it doesn’t have Qubes internals such as an /rw folder. I added the script to the sys-firewall qube with the appropriate IP & port info.

I’m trying to set up an ssh connection between two qubes, with that being one of them. That qube is totally isolated from other qubes, but I at least need to access files in it. I thought a limited, pubkey-only ssh connection would satisfy my security needs.

I’m using qubes v4.2

I’m not clear on what you are asking, if you are asking anything.
If you want to set up an ssh connection between two qubes, it doesn’t matter if one is a standalone or HVM. The inter-qube firewall rules are set on the upstream shared netvm, as in the docs.
You have to set firewalling on the standalone to allow inbound SSH traffic.

I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.

My reply was to Selene’s follow-up, when she said:

In the destination qube, don’t forget to allow the port using nft add rule qubes custom-input tcp dport 8000 accept

I got an error when I used that nft command inside my destination qube (on my 5-digit port, not 8000). I presume that’s because there is no qubes rule to change. I’m not familiar with these low-level nft / iptables network commands. Should that “nft add” rule work on a standalone qube created from an ISO that knows nothing about Qubes, or is the “qubes” in that command an arbitrary name?

There must be a rule in the Qubes sys-firewall that needs to be changed, but she said “in the destination qube”. There are no incoming restrictions when that ISO is installed on bare metal, so it seems to me it’s Qubes rules / policy / sys-firewall doing the restricting. Selene’s instruction makes sense to me if the destination qube is based on a Qubes template.

You need to add this rule in the destination qube if it’s based on a template, because by default the firewall in these qubes blocks incoming connections.
Your standalone is not based on a template, and I think Debian 12 installed with the default configuration has its firewall set to allow all connections, so you don’t need to add any firewall rule in it.
What you need to do is add the firewall rule in the net qube to which your source and destination qubes are connected, according to the guide linked by @unman.

The rule in sys-firewall is the one set out in the docs.
You are right - if the HVM is installed from an ISO, then there will be no qubes table, so that nft command is a no-go.
You will need to identify whether there are any nft tables (nft list tables) and, if there are any restrictions on inbound traffic, insert a rule which allows inbound SSH. If you need help with this, give some details of the nft tables.
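That inspection might look like this (a sketch; the inet filter table and input chain names are common Debian defaults and may not exist on your install):

```
nft list tables                    # see which tables exist at all
nft list table inet filter         # inspect one (name is an example)
# if an input chain there restricts traffic, allow inbound SSH:
nft insert rule inet filter input tcp dport 22 accept
```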


Basically, the firewall rules configured through the qube settings manager’s firewall tab are applied on the netvm of that qube, so as long as this netvm is using an official template the rules will be applied.

As said by others, you will have to figure out whether your custom qube has a firewall enabled by default (so far, I think only Fedora blocks all incoming connections by default); if there is one, you will need to open the ports you need from inside the qube.