DNS is not blocked by qvm-firewall. Why?

Hi, I have a simple qube constellation to enforce LAN-only traffic:

myappvm --(netvm)--> sys-firewall-lan-only --(netvm)--> sys-net

Specifically I’d like to block DNS completely and instead handle domain names in /etc/hosts of myappvm manually. To do that, sys-firewall-lan-only has these qvm-firewall rules:

$ qvm-firewall sys-firewall-lan-only list
0   drop    -   -   -   dns   -   -   -
1   accept  -   -   -   -     -   -   -
2   drop    -   -   -   -     -   -   -

(the last rule is probably not needed, but better safe than sorry)

From what I’ve learned, myappvm uses its netvm as a virtual gateway for DNS requests, and the last netvm in the chain forwards them to the “real” DNS resolver.


  • If myappvm visits qubes-os.org, the DNS request should get blocked.
  • If myappvm visits server.lan.domain and /etc/hosts maps that name to an IP, I am able to visit the site.

But the first assumption does not hold!

As some of you know, vanilla Firefox is quite chatty in its default configuration. If I start Firefox in myappvm, it successfully makes DNS requests for facebook.com, reddit.com, a.k.a. the whole tracking shitshow (verified by a LAN DNS monitoring tool).

More specifically, I cannot actually visit those sites, as the IP range is blocked. But the DNS requests go through, despite qvm-firewall rule 0 (drop with special target dns).

Why is that? From a user’s perspective, this looks like a bug in the Qubes firewall.

Thanks for any help

Update: If I directly set a blocking rule in myappvm:

qvm-firewall myappvm add --before=0 action=drop specialtarget=dns

then DNS gets blocked successfully. But I had hoped to create one single netvm that automatically enforces this rule.

So why does DNS not get blocked when the rule is created on the netvm?
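For reference, a quick way to double-check in dom0 that the new rule actually landed in front of the default accept rule:

```shell
# dom0: list myappvm's rules; the rule added with --before=0 should
# now sit at position 0, before the default accept rule.
qvm-firewall myappvm list
```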


In what qube did you check it?

I reproduced this, cloning sys-firewall and setting it as the netvm for a disposable.

The problem seems to be that specialtarget creates rules blocking packets to the real DNS IPs, but I guess the chain runs before those IPs have been set by DNAT, so it fails to match the actual packets.
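One way to test that hypothesis would be to compare the generated rules with the virtual DNS address inside the netvm. A rough sketch; the QubesDB key name is from memory, so verify it first:

```shell
# In sys-firewall-lan-only: dump the generated ruleset and look for
# rules matching DNS traffic.
sudo nft list ruleset | grep -E 'dport (53|domain)'

# The virtual DNS address that downstream qubes send queries to,
# before DNAT rewrites it to the real resolver:
qubesdb-read /qubes-netvm-primary-dns
```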

If I run instead:

qvm-firewall sys-firewall-lan-only add action=drop proto=udp dstports=53 --before=0
qvm-firewall sys-firewall-lan-only add action=drop proto=tcp dstports=53 --before=0

then DNS packets from myappvm do get blocked.

You may want to edit nftables directly on sys-firewall-lan-only instead. The Qubes firewall documentation gives some detail on how to get started.
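For example, a minimal sketch of such a manual rule, assuming the custom-forward chain in the ip qubes table that recent Qubes releases provide for user rules (run nft list table ip qubes first in case the names differ on your release):

```shell
# In sys-firewall-lan-only: drop forwarded DNS regardless of destination IP.
sudo nft insert rule ip qubes custom-forward udp dport 53 drop
sudo nft insert rule ip qubes custom-forward tcp dport 53 drop

# Rules added this way don't survive a qube restart; to persist them,
# put the commands into /rw/config/qubes-firewall-user-script.
```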


I set up a Pi-hole instance (a network-wide ad blocker) in my home network, i.e. outside Qubes. Hence I am sure that the DNS requests really passed through.

Maybe you’ve configured Firefox in myappvm to use your Pi-hole using DoH/DoT/DNSCrypt/etc?
Can you try to resolve the hostname using e.g. ping/curl/dig in myappvm?
I’ve just tried it myself, and DNS was blocked when I used dig in myappvm. I verified it using Wireshark in sys-net (the netvm of sys-firewall-lan-only).
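If you want to reproduce the test, a sketch of what I ran; +tries and +time just keep dig from retrying for long when the request is dropped:

```shell
# In myappvm: force a fresh lookup with a short timeout.
dig +tries=1 +time=2 qubes-os.org

# With the drop rule active, dig fails with a communications error
# instead of printing an ANSWER SECTION.
```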

Thanks @nokke for your findings, appreciate it! Will try your workaround.

Strange, since I even used a wildcard block in the statement above, without naming any IPs.

That brings up some questions:

  1. Is this an intended limitation or a bug? I tend to the latter, so I’d raise an issue on GitHub.
  2. If dstports=53 works, why use specialtarget at all? I would add port 853 for DNS over TLS (DoT) as well.

Even though my use case is centered around blocking tracking, I think qvm-firewall would overall be the cleaner solution, as it is independent of the configuration inside the qubes (which also helps in case of security issues).

I couldn’t reproduce this again reliably. I now suspect I’m seeing a timing or caching issue. Sometimes the specialtarget rule is enough to block the traffic.

I’m having a hard time with tcpdump (the kernel drops packets, and there’s a delay in printing to the screen), so I couldn’t confirm what’s happening at the network level.
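What I’ll try next, for reference (a sketch; the interface name depends on the qube):

```shell
# -l line-buffers stdout so packets show up immediately; -n skips
# reverse lookups, which would otherwise generate DNS traffic themselves.
sudo tcpdump -l -n -i eth0 port 53

# Or capture to a file and inspect it afterwards, e.g. in Wireshark:
sudo tcpdump -n -i eth0 -w /tmp/dns.pcap port 53
```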

I checked the nftables documentation, and this definitely shouldn’t be the case.


I was using ping to test, and didn’t realize that it uses cached DNS results if available. Testing with dig shows the rule working as expected for me, without timing issues.

Are you seeing DNS packets at the Pi-hole or just unwanted traffic?

I used all default settings for Firefox, assuming vanilla Firefox wouldn’t be able to phone home. As a side note, I just noticed that Firefox now defaults to “Enable secure DNS using:” “Default Protection”, described as “Firefox decides when to use secure DNS to protect your privacy.” I turned that “Off” to use my own resolver; unfortunately, same result.

Hm, a bit baffled that we see different behavior. There shouldn’t be much variation in the configuration: take any vanilla appvm and let it use the LAN-only netvm, if I am not mistaken. The LAN vm should block DNS via

qvm-firewall sys-firewall-lan-only add --before=0 action=drop specialtarget=dns

(neglecting the IP blocks). I am not experienced with tcpdump and Wireshark, but maybe I should try them as well.

Good idea. I made a fresh Debian disposable (let’s name it debian-disp) with the default netvm sys-firewall. debian-disp has the default qvm-firewall rules. I installed dig via apt install dnsutils, then switched the netvm to sys-firewall-lan-only and invoked dig qubes-os.org or dig <some online website that should not be reachable>. Again, I get fresh DNS requests on my Pi-hole :thinking:.

Yes, I see all manually and automatically triggered DNS requests from myappvm in my Pi-hole interface. The thing is, the Pi-hole shouldn’t even see any request, as it should already have been blocked inside Qubes.


What do you see in the dig output?
If you dig somerandomstringskhdus.com, do you then see a DNS request for this specific domain in your Pi-hole log?

I saw the resolved IP address for said domain, just like for every normally processed request. And yes, it created a new entry corresponding to the requested domain.

But wait a sec: I gave the netvm sys-firewall-lan-only a fresh restart - now it strangely seems to work as expected, with dig resulting in:

;; communications error to host unreachable

Are there known consistency issues with qvm-firewall, perhaps under specific conditions? This would be in line with @nokke’s current experiences.

I know that at some point I assigned sys-firewall-lan-only to myappvm, which gave me a prompt that it was not started yet, so it was started ad hoc from the GUI dialog.
At least I never needed to restart a netvm after applying qvm-firewall rules - they were always active immediately.

Regardless of the conditions, firewall rules should always remain consistent and otherwise block everything, like a kill switch. I definitely need to observe this behavior more.


I don’t know what could have caused this issue for you. I didn’t encounter such issues with qvm-firewall.
You could try to find a way to reproduce the issue so that it becomes possible to fix it.


My experiences are explained by DNS caching.

If something has killed qubes-firewall on the netvm, that would cause this behavior. Restarting the qube will restart the script.

The flow is:

  • qvm-firewall on dom0 writes QDB entries
  • qubes-firewall on netvm watches QDB for changes
  • qubes-firewall runs nft when it sees QDB changes

So qvm-firewall doesn’t block on nft actually running, and there’s room for some edge cases.
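Based on that flow, each stage can be checked by hand inside the netvm; a sketch (the service name and QubesDB path are from memory, so verify them):

```shell
# Is the qubes-firewall watcher actually running?
systemctl status qubes-firewall

# Which rules did dom0 publish via QubesDB?
qubesdb-multiread /qubes-firewall/

# And what did qubes-firewall translate them into?
sudo nft list ruleset
```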


Thanks again for explaining this flow @nokke !

This sounds like a major design flaw. I would much prefer a firewall that blocks everything and forces me to restart the netvm in case of failure over potentially inconsistent behavior that provokes leaks. At the very least, give us a choice of a “strict mode” or something similar to kill switches (a name made popular by enforced VPN modes).

You’re not the first to consider such things a design flaw, but the team is clear that the main design goal of the firewall is protection against user error rather than leak prevention.

Wow, that is surprising and very debatable, especially for a security-oriented operating system. I don’t want to repeat that whole discussion, but I am convinced that user experience would not need to suffer if proper notifications were provided. Also, given that these edge cases seem to be rare, I’d say the risk of a worst-case leak weighs much higher than a notification about a needed restart (depending, of course, on the usage scenario and threat model).

The least I would expect is a (graphical) user notification in case of firewall errors. I’d consider that essential error reporting, not a trade-off between user experience and a leak-proof system. We should strive to make both possible in a reasonable manner.

I really love the overall qvm-firewall design. All the more it saddens me to read that the implementation doesn’t seem to be fully trustworthy.

Here’s the explanation


Another consideration - you could add sys-firewall into the path:

myappvm --(netvm)--> sys-firewall-lan-only --(netvm)--> sys-firewall --(netvm)--> sys-net

So that you have the firewall separated from other sys-net functions. See this for comparison.

Interesting, the text directly states that the firewall is not leak-proof. But the subsequent paragraphs only name covert channel attacks as the reason. My case is about timing/consistency issues of the firewall rules, which by chance won’t be applied if errors occur as a side effect, right?

Not sure how this would be beneficial for consistency. In my case, the firewall rules of sys-firewall-lan-only are already enforced by the upstream netvm sys-net (the one with the LAN controller).
