Does my nftables firewall drop packets that I want forwarded?

I’m having difficulty getting AppVMs X and Y to network with each other via firewall F, using the nftables instructions from here. I can ping both ways between F and either X or Y, but can’t ping or netcat from X to Y. Running tcpdump on F while pinging from X to Y shows incoming echo requests, like so:

21:59:18.097468 vif60.0 In  IP 10.137.0.47 > 10.137.0.56: ICMP echo request, id 5, seq 1, length 64

But there is no outgoing echo request, unlike when pinging from F to Y, which shows:

22:02:03.613687 vif59.0 Out IP sys-firewall-persist > 10.137.0.56: ICMP echo request, id 2, seq 2, length 64

22:02:03.614384 vif59.0 In  IP 10.137.0.56 > sys-firewall-persist: ICMP echo reply, id 2, seq 2, length 64

I have rules in chain custom-forward that seem to allow forwarding from X to Y:

nft list chain qubes custom-forward | grep 47
		ip saddr 10.137.0.56 ip daddr 10.137.0.47 ct state established,related,new counter packets 0 bytes 0 accept
		ip saddr 10.137.0.47 ip daddr 10.137.0.56 ct state established,related,new counter packets 0 bytes 0 accept

I’m wondering if my packets get dropped before they even reach custom-forward. Looking at nft list ruleset, I see:

[root@sys-firewall-persist config]# nft list ruleset | grep -B1 -A4 'chain forward'

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
--

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
--
table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}
--
table ip6 qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

The jump to custom-forward is only present in the qubes table, not in the qubes-firewall table. The qubes-firewall forward chain has the same priority as the one in the qubes table, and a ‘drop’ policy. Could it be that a non-established/related packet simply gets dropped by the qubes-firewall forward policy before the qubes forward chain gets to jump to custom-forward? If so, is that a bug or a feature, and if it’s a feature, how do I network between X and Y?
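One way to answer this empirically (a sketch; run as root in F, using X’s address 10.137.0.47 from the tcpdump output above) is nftables packet tracing, which prints the table, chain, rule, and verdict each marked packet hits:

```shell
# Insert a trace-marking rule at the top of the qubes-firewall forward
# chain, so the packet is tagged before the chain's policy can apply.
nft insert rule ip qubes-firewall forward ip saddr 10.137.0.47 meta nftrace set 1
# Then watch trace events while pinging from X to Y; the last event
# shows which chain issues the final drop or accept verdict.
nft monitor trace
```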

The documentation is here: Firewall | Qubes OS

You allowed packets to pass between X and Y through F, but did you open the corresponding ports on X and Y?

By default, all incoming ports are blocked, so, for instance, if you run a web server on port 80 on X and you want Y to reach it, you have to open the firewall on X.
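For that web-server example, the rule in X would look something like this (a sketch; 10.137.0.56 is Y’s address as used elsewhere in this thread):

```shell
# In X: allow inbound TCP to port 80 from Y.
nft add rule ip qubes custom-input ip saddr 10.137.0.56 tcp dport 80 ct state new,established,related counter accept
```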

Thanks for answering!
tcpdump on the target AppVM shows nothing at all, and tcpdump on F only shows an incoming echo request, not an outgoing one. I’m not sure whether that’s expected for forwarding; if it is, it would seem the ICMP packets don’t get forwarded.

To answer your question, running on Y:

# ip addr | grep 10.137.0 ; nft list chain qubes custom-input
    inet 10.137.0.56/32 scope global eth0
table ip qubes {
	chain custom-input {
		ip saddr 10.137.0.47 ct state established,related,new counter packets 0 bytes 0 accept
		ip saddr 10.137.0.57 ct state established,related,new counter packets 2 bytes 168 accept
	}
}

Those two rules are meant to allow requests from X and F.
So X = .47, F = .57, and Y = .56

And on X:

# ip addr | grep 10.137.0 ; nft list chain qubes custom-input
    inet 10.137.0.47/32 scope global eth0
table ip qubes {
	chain custom-input {
		ip saddr 10.137.0.56 ct state established,related,new counter packets 0 bytes 0 accept
		ip saddr 10.137.0.57 ct state established,related,new counter packets 3 bytes 252 accept
	}
}

If I understand correctly, when two chains from different tables register on the same hook, their execution order is determined by their priority, but if the priorities are equal (“filter” in this case) the order is undefined. So qubes-firewall may get executed before qubes, and new packets arriving on interface vif60.0 would be dropped there before ever getting a chance to jump to custom-forward.
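As a minimal illustration of the mechanism (hypothetical table names, not the Qubes ones): two base chains registered on the same hook with equal priority both see every packet, the kernel makes no ordering promise between them, and a drop verdict in either one is final for the packet:

```
table ip demo_a {
	chain forward {
		type filter hook forward priority filter; policy accept;
	}
}
table ip demo_b {
	chain forward {
		# Same hook, same priority: order relative to demo_a is undefined,
		# and this drop policy discards the packet regardless of demo_a's
		# accept verdict.
		type filter hook forward priority filter; policy drop;
	}
}
```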

You don’t need this in Y; qubes already accept packets from their netvm, otherwise networking would not work at all. That does not mean TCP/UDP/whatever ports are open, though.

Could you describe exactly how you check that traffic is correctly passing through from X to Y?

I added the extra incoming accept rules in order to enable pinging from F to X and Y, which didn’t work without them.

I traced packets in F with the command tcpdump -i any, which, when pinging from X to Y, shows the echo request arriving at F but never leaving it; presumably it therefore never reaches Y. The output of tcpdump -i any on F when pinging from X to Y is the line I quoted above; here’s another one:

23:07:26.611909 vif60.0 In  IP 10.137.0.47 > 10.137.0.56: ICMP echo request, id 6, seq 3, length 64
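To make the missing leg easier to spot, tcpdump can be narrowed to ICMP only (a sketch; run as root in F, and vif interface names will differ per system):

```shell
# Show only ICMP on all interfaces, without name resolution; a forwarded
# ping should appear twice: once 'In' on X's vif, once 'Out' on Y's vif.
tcpdump -ni any icmp
```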

I think I figured out part of the puzzle.

On the destination qube you are trying to ping, you need to allow icmp:

nft add rule qubes custom-input ip protocol icmp counter accept

I made a little setup on my system:

  • sys-firewall has the rule: sudo nft add rule ip qubes custom-forward ip saddr 10.137.0.22 ip daddr 10.137.0.56 ct state new,established,related counter accept
  • the qube 10.137.0.22 has the rule nft add rule qubes custom-input ip protocol icmp counter accept
  • from 10.137.0.56, I can run ping 10.137.0.22 and replies work

It would seem that X and Y already accept ICMP packets, since I can ping from F to both X and Y?

If what I suspect is true, the behaviour would be non-deterministic, since the relative order of the qubes and qubes-firewall forward chains (which have equal priority) is undefined. The chain in one table would drop my packet (it being new and from interface vif60.0), whereas the chain in the other table would jump to custom-forward and accept it. If you run

nft list ruleset | grep -B1 -A4 'chain forward'

, do you also have a forward chain in both the qubes and the qubes-firewall tables? And if you apply the qubes-firewall forward chain rules, would you also end up dropping the packet to be forwarded (i.e. the packet being new and coming from the wrong interface)?

→ I need to get some sleep; I’ll get back to this tomorrow.

On my system, with default firewall rules, I can’t get a ping to work between a qube and its netvm.

Yes, the defaults don’t allow that on my side either; see also the earlier remark about the extra rule in the AppVM’s custom-input chain that enabled pinging F<->X and F<->Y for me.

Question: when you successfully ping X<->Y and run tcpdump on the firewall, do you see two lines for the echo request, i.e. one coming in from X to F and one going out from F to Y? I’m asking because, if so, that would be evidence that my F doesn’t forward, since it only shows the incoming request, not the outgoing one. That would localize the problem in F rather than X or Y, in which case the following is suspicious on my side:

# nft list chain qubes forward; nft list chain qubes-firewall forward
table ip qubes {
	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}
}
table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}
}

The forward chain in the qubes-firewall table looks like it would drop a new packet if it runs before the forward chain in the qubes table. I’ll just change that drop policy to accept and see whether that changes anything; this is a test setup anyway.
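Flipping that policy in place can be done with nft (a sketch; run as root in F, and note this loosens the Qubes firewall, so it is only reasonable on a test setup):

```shell
# Switch the qubes-firewall forward chain's policy from drop to accept.
nft add chain ip qubes-firewall forward '{ policy accept; }'
```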

Confirmed: if I replace the drop policy with accept, my ping X->Y works. Filed an issue about those chain priorities: Suspicious nftables chain priority seems to cause undesired forward packet dropping · Issue #9711 · QubesOS/qubes-issues · GitHub

In your qubes-firewall table, I don’t see the X and Y qubes listed in the forward chain, which is odd, unless you removed them before posting?

When a qube is attached to a net qube, a new chain is created with its firewall rules, and a jump rule is added to the forward chain. Neither appears in your output.
For example, I have this in a similar setup:

table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
		ip saddr 10.137.0.46 jump qbs-10-137-0-46
		ip saddr 10.137.0.43 jump qbs-10-137-0-43
	}
}

So did you remove those rules from the output or is it really like that? If you didn’t remove anything, can you detail your setup? Are X and Y using F as their net qube?

Hi, no, I didn’t remove them. They were never created when I connected the AppVMs, because my qubes-firewall-user-script didn’t start with #!/bin/sh; see the elaboration on the GitHub issue above.
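For reference, a minimal working /rw/config/qubes-firewall-user-script along those lines might look like this (a sketch using the custom-forward rules and addresses from earlier in this thread; the shebang on the first line is what was missing):

```shell
#!/bin/sh
# /rw/config/qubes-firewall-user-script in F.
# Without the #!/bin/sh shebang on the first line the script is not
# executed, so the custom-forward rules below never get installed.
nft add rule ip qubes custom-forward ip saddr 10.137.0.47 ip daddr 10.137.0.56 ct state new,established,related counter accept
nft add rule ip qubes custom-forward ip saddr 10.137.0.56 ip daddr 10.137.0.47 ct state new,established,related counter accept
```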