[guide] how to set up a sys-dns qube

@zerosums issue sounds suspiciously similar to what I was dealing with. @qubist if it stopped working for you too, as it did me, my first guess is that blocking dnscrypt from making external DNS requests before the service is running is what’s breaking it. The solution would be to manually download the resolver lists for the initial start-up and move them around with /rw so the service can run; after that, long-running qubes/services should be able to update themselves.

@qubist if it stopped working for you too, as it did me, my first guess is that blocking dnscrypt from making external DNS requests before the service is running is what’s breaking it.

For me, it doesn’t work because of the switch from iptables to nftables. I just haven’t had the time to fix it, so I decided to leave it for later, as I was planning to recreate the whole thing from scratch.
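For reference, the breakage is mostly a matter of where the DNS DNAT rules live now. A rough sketch of the translation (chain names are per my understanding of the default Qubes tables; verify against `nft list ruleset` on your system):

```shell
# Old (iptables, R4.1-era): DNS DNAT lived in the PR-QBS chain of the nat table:
#   iptables -t nat -A PR-QBS -d 10.139.1.1 -p udp --dport 53 \
#       -j DNAT --to-destination 127.0.0.1:53

# New (nftables, R4.2): the "qubes" table ships a dnat-dns chain instead.
# 10.139.1.1 is the default Qubes virtual DNS address.
nft add rule ip qubes dnat-dns ip daddr 10.139.1.1 udp dport 53 dnat to 127.0.0.1
```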

So, that “later” came and I am working on a new guide. As soon as it is ready, I will share it (perhaps in a new thread).

BTW, I am reconsidering a change in the network infrastructure.

Instead of:

[network uplink]
 └── sys-net
     └── sys-firewall
         ├── sys-dns
         │   └── sys-wall
         │       └── [clearnet qubes]
         └── sys-whonix
             └── [torified qubes]

perhaps it should be possible to have:

[network uplink]
 └── sys-net
     └── sys-firewall
         ├── sys-dns
         ├── [clearnet qubes]
         └── sys-whonix
             └── [torified qubes]

i.e. have only one firewall, which redirects DNS requests to sys-dns, which does its thing and responds accordingly. That won’t affect sys-whonix, as it has its own resolving anyway.
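A minimal sketch of that single-firewall variant, placed in sys-firewall’s /rw/config/rc.local. The sys-dns IP here is a placeholder assumption; look up the real one with `qvm-ls -n`:

```shell
# Hypothetical sys-dns IP -- replace with the real one from `qvm-ls -n`:
SYS_DNS=10.137.0.10

# Rewrite DNS queries arriving from downstream clearnet qubes so they are
# handed to sys-dns instead of being passed upstream. 10.139.1.1/2 are the
# Qubes virtual DNS addresses; sys-whonix never queries them, so it is
# untouched by this.
nft flush chain ip qubes dnat-dns
nft add rule ip qubes dnat-dns iifname "vif*" \
    ip daddr { 10.139.1.1, 10.139.1.2 } udp dport 53 counter dnat to $SYS_DNS
nft add rule ip qubes dnat-dns iifname "vif*" \
    ip daddr { 10.139.1.1, 10.139.1.2 } tcp dport 53 counter dnat to $SYS_DNS
```

Note this is only the DNAT half: depending on the default forward policy, you may also need an accept rule for the vif-to-vif traffic toward sys-dns.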

Pros:

  • one less qube (sys-wall)
  • no need to change the netvm of all clearnet qubes

Cons:

  • no easy way to switch to traditional DNS for some qubes (which is currently easy to do by switching netvm from sys-wall to sys-firewall)
  • no per-VM sys-dns + sys-wall, which may be useful for granular DNS-level filtering

So, I am not sure which setup is better.

I ended up removing sys-wall too and setting the default netvm to sys-dns. This doesn’t affect sys-whonix since it’s manually set to sys-firewall.

I’d be interested in seeing your nft rules as I’ve seen much more sophistication than I can muster.

FWIW, I’m rolling some salt + qubesctl to automate this, if anyone is interested.

By compartmentalization, it should be the first option.
Besides, as you mentioned, the first option is more flexible.

The structure should be:

[network uplink]
 └── sys-net
     ├── sys-firewall
     │   └── sys-dns
     │       └── sys-wall
     │           └── [clearnet qubes]
     └── sys-whonix
         └── [torified qubes]

As sys-whonix isn’t using sys-firewall.

By compartmentalization, it should be the first option.

What do you mean? The compartmentalization is there in both cases.

Besides, as you mentioned, the first option is more flexible.

I think it is possible to have the second option with the same flexibility. One can still have multiple sys-wall and sys-dns qubes without those being chained behind each other. It seems to me inefficient to pass all traffic through the sys-dns qube. This means double processing of the same traffic. sys-dns should do only one thing - DNS resolving, so only DNS traffic should go through it. No?

As sys-whonix isn’t using sys-firewall

Why should sys-whonix be exposed to sys-net without a firewall?

Seconded. I’d like to hear the answer to this question.

sys-firewall does nothing more, the Whonix gateway already has its own firewall. Everything coming from sys-whonix is automatically accepted by sys-firewall, and you can’t restrict traffic only to your guard nodes unless you want to change them when they decide to rotate (between 6 and 12 weeks).

A proper way to “limit” sys-whonix is to use corridor, which is a whitelisting gateway, not a firewall.

sys-firewall does nothing more, the Whonix gateway already has its own firewall.

Which is not really “the qubes way”, is it? Perhaps it should rather be sys-whonix-gateway and sys-whonix-firewall.

Everything coming from sys-whonix is automatically accepted by sys-firewall, and you can’t restrict traffic only to your guard nodes unless you want to change them when they decide to rotate (between 6 and 12 weeks).

But the role of the firewall is not only to restrict outgoing traffic. It usually restricts incoming traffic too (and can have additional functions, e.g. rate limiting). In that sense, an attack can be stopped at sys-firewall without reaching sys-whonix. Without sys-firewall, that won’t happen.

A proper way to “limit” sys-whonix is to use corridor, which is a whitelisting gateway, not a firewall.

That seems more focused on protecting anonymity, rather than general network attacks.

sys-whonix is a gateway and is there to receive packets from the workstation it’s bundled with (that’s how Whonix is made), that’s why the firewall is on the gateway. So it’s kind of “the Qubes way” if you look at it.

To be honest, if sys-firewall can be broken by an attacker even with the current input drop rule, I don’t see how sys-whonix can do any better after that. By default, all qubes deny incoming traffic unless it’s coming from downstream. sys-firewall’s first role is to limit outgoing traffic, since that’s where all the nftables rules are created on a base system.

It’s a whitelisting gateway for outgoing traffic, but the default qubes firewall still blocks incoming requests.

sys-whonix is a gateway and is there to receive packets from the workstation it’s bundled with (that’s how Whonix is made), that’s why the firewall is on the gateway. So it’s kind of “the Qubes way” if you look at it.

I may be overthinking it, but, to my mind, the qubes way would be:

  • 1 qube: tor sock proxy
  • 1 qube: gateway
  • 1 qube: firewall
  • 1 qube: dns (for clearnet)
  • 1 qube: tor client (browser, curl, etc)

First 4 qubes - minimal headless.

To be honest, if sys-firewall can be broken by an attacker even with the current input drop rule, I don’t see how sys-whonix can do any better after that.

I am not saying it can/can’t or which may do better.

My point is: In case of an attack, the general principle is to stop it as far/early as possible, not to let it penetrate further inside LAN.

To my mind, sys-whonix’s main function is to prevent clearnet traffic from/to torified qubes. The fact that it drops traffic as default policy does not conflict that. I rather see it as an extra protection through a safe default in case the user messes up something.

By default, all qubes deny incoming traffic unless it’s coming from downstream.

So does sys-net.

sys-firewall’s first role is to limit outgoing traffic, since that’s where all the nftables rules are created on a base system.

Considering sys-net runs nftables too, are you suggesting that we can drop sys-firewall completely and let sys-net handle both routing and firewalling?

I am just trying to extrapolate the logic of what you explain and to match it to the actuality that out of the box we have:

sys-net ↔ sys-firewall ↔ sys-whonix

and not

sys-net ↔ sys-whonix

It’s a whitelisting gateway for outgoing traffic

I am not sure I understand what you mean by outgoing traffic.

If you mean the output hook, i.e. packets sent by local processes (by sys-whonix itself), sys-whonix drops those as its default policy.

In case by “outgoing” you mean packets coming from a whonix-workstation-based AppVM and destined for an external host, that is the forward hook, and sys-whonix drops that by default too, enforcing everything through the Tor proxy.

So, in both cases, it is not whitelisting.
Do you perhaps mean something else?

If it benefits you and your threat model, then it’s a possibility. The way it’s done now is not insecure. The gateway that handles the traffic doesn’t run in the same qube as the originating application, so it’s completely isolated and can’t be manipulated from the workstation. Whonix comes with 2 separate VMs by default, so if it was not secure this way, it would come with more VMs with more process isolation.

I understood your point. I was saying that if sys-firewall had to be compromised while running nothing but the in-vm Qubes stack (meaning sys-net was compromised first in the chain), that would mean sys-whonix wouldn’t stand a chance against the same kind of attack. You imply that putting another qube between sys-net and sys-whonix makes it more secure, but while it looks like it does, that’s not really true in all cases. If the attacker can bypass the drop rules in every single qube, there’s nothing stopping him from compromising sys-whonix next, whether it’s behind sys-firewall or not.

I never implied that. sys-firewall is important because it hosts the nftables rules coming from the qubes connected to it (in case of custom rules, that’s even more important). Without sys-firewall, rules could be deleted in case of a sys-net compromise. sys-whonix here has its own firewall and doesn’t depend on sys-firewall at all, meaning that if it were attached to sys-net, the only rule for it would be “accept”, since it’s already doing all the work (filtering, etc…) on its own.

sys-whonix is attached to sys-firewall by default, as is everything else the installer creates, because the default net qube is sys-firewall. It’s just running salt, which automatically sets the defaults.

sys-whonix sends its packets to corridor (it has its own qube and is sys-whonix’s net qube), so it gets everything going out of the gateway (outgoing Tor traffic). corridor compares each packet’s destination against a list of relays and, if the destination is not on the list, drops the request. That’s why it’s a whitelisting gateway: it drops based on a dynamic list of IPs.

Anyway, I think this discussion is going off topic. It could be split if you want.

Instead of splitting and floating off into something else, let’s rather get back to the reason we are discussing this, i.e. the question

Why should sys-whonix be exposed to sys-net without a firewall?

and its relation to the proposed new structure.

My “overthinking list” is not about a specific threat model but about principles (separate functions in separate qubes).

Re. the example attacks you mention:

Case 1:

An attacker can bypass input hook rules in all networked qubes, e.g. by exploiting a bug in nftables.

It wouldn’t really matter if one has 1, 2 or 10 chained sys-* qubes. The outcry of such a bug would be humongous, far beyond Qubes OS or personal computing.

Case 2:

An attacker can modify filter rules only in sys-net, and sys-firewall rules are safe.

Then deliberately connecting to a compromised qube makes even less sense. AFAIK, sys-net is generally distrusted.

sys-whonix is attached to sys-firewall by default, as is everything else the installer creates, because the default net qube is sys-firewall. It’s just running salt, which automatically sets the defaults.

This suggests an oversight, rather than intentional design. I don’t think it is an oversight though, considering the explicit suggestion in the doc:

sys-net <--> sys-firewall-1 <--> network service qube <--> sys-firewall-2 <--> [client qubes]

Consider also the case when one uses several sys-net qubes. Having sys-whonix connected to only one of them would be a problem.


I agree that sys-net is not to be trusted at all, since it runs with the network controller attached and uses “insecure” drivers. I was just saying that even if sys-net had to be compromised, the attacker could only remove an “accept” rule. sys-whonix has its own firewall and will drop incoming requests unless the connection is explicitly established first (pinging sys-whonix from sys-net, for example, returns nothing).

That’s just how the installer does things. It’s up to the user to create such a setup after the fact if they want to. As I said, the default qubes, including the whonix gateway, are created with salt formulas and default to the default net qube, which is sys-firewall at this point. It’s easier to do this than to create a very complicated setup ootb that might be hard to maintain on the salt side and hard for the user to understand.

If a user has multiple sys-nets (custom use case), he will select 1 of them for tor activity, or even create multiple gateways if needed.

Also, don’t get me wrong. I’m not saying it’s a waste of time to have something between sys-net and sys-whonix in all cases, it might prevent an attacker currently in sys-net from continuing his exploitation, but obviously it needs to be a system that is not based on what is already compromised, like a BSD-based qube for example.

I was just saying that even if sys-net had to be compromised, the attacker could only remove an “accept” rule.

If an attacker has gained access to sys-net and can modify firewall rules, he can do much more, e.g. divert selected/all traffic to a particular attacker-owned host, perhaps gain access to other LAN hosts etc.

sys-whonix has its own firewall and will drop incoming requests unless the connection is explicitly established first (pinging sys-whonix from sys-net, for example, returns nothing).

Suppose sys-net is owned. It can be used for a DoS attack against sys-whonix. Yes, sys-whonix will drop packets but that is still a CPU load on it which can break things. If the attacker is trying to exploit a vulnerability, that may result in deanonymizing traffic, in crashing nftables running in sys-whonix itself, etc. Using a firewall between the two would be yet another obstacle.

but obviously it needs to be a system that is not based on what is already compromised, like a BSD-based qube for example.

I don’t see why that is obvious.
Anyway, we are way off-topic already. :slight_smile:

sys-whonix traffic is encrypted multiple times. If the traffic is rerouted, that’s not a problem (normal qubes also have encrypted traffic, depending on what protocol they use). Even if the traffic is rerouted to some server somewhere, it’s unreadable and the qube won’t get any response, and if it does, the forged request won’t be processed.

In the case of LAN hosts, that’s out of scope. Qubes protects what’s running on its own system, not what’s accessible from sys-net on the same subnet.

For DoS attacks, you don’t need to infect sys-net at all. It could come from another infected qube directly connected to sys-whonix. In your earlier post, you show a part of the doc with a firewall between the client qube and the service qube, which would mean that anon-whonix would have to go through a firewall qube to reach sys-whonix. This firewall is used to filter who can reach sys-whonix, so if anon-whonix is infected, which is likely since the applications are running there, the firewall won’t do anything to protect it because all packets will be accepted and sent to the gateway.

For example, if it’s possible to compromise a qube based on Fedora, a qube based on the same template could be compromised in the same way because they run the exact same programs and services. If you use a different system, the attacker would have to find another way to proceed because the vulnerability might not work on it. This effectively slows down or even stops the attack.

Discard my previous thoughts on this.
I got it wrong.

So I’m on 4.2, is there a consensus on how to do this, or should I just install a fedora-37 template and follow qubist’s guide?

Edit: For the average joe who just wants encrypted dns, take a look at this: Set DNS server per AppVM (DNS over TLS) - #3 by Singer in the meantime. Unfortunately you can only get dns-over-tls working with this, but it is what it is.

Edit 2: OMG why am I so dumb, I never considered you could just run dnscrypt-proxy in an appvm (requires installing it in the template and messing with /rw to gain persistence).
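That AppVM approach can be sketched roughly like this (paths are the Fedora/Debian package defaults as far as I know; treat them as assumptions). With dnscrypt-proxy installed in the template, a /rw/config/rc.local along these lines re-wires things at each boot:

```shell
# Keep the dnscrypt-proxy config under /rw so edits persist across reboots;
# `cp -n` copies it only on the first boot, when it is not there yet:
mkdir -p /rw/config/dnscrypt-proxy
cp -n /etc/dnscrypt-proxy/dnscrypt-proxy.toml /rw/config/dnscrypt-proxy/
ln -sf /rw/config/dnscrypt-proxy/dnscrypt-proxy.toml \
    /etc/dnscrypt-proxy/dnscrypt-proxy.toml

# Point the qube's own resolution at the local proxy and start it:
echo "nameserver 127.0.0.1" > /etc/resolv.conf
systemctl start dnscrypt-proxy.service
```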

I know this isn’t exactly what the original post hopes for but if you search for dns over https on google, this thread comes up.

So I’m on 4.2, is there a consensus on how to do this, or should I just install a fedora-37 template and follow qubist’s guide?

I am working on an updated guide but it is not ready yet. If you have the time, you can figure the principle from the existing guide and adapt it for your needs.


Sharing back some modifications I did some time ago:
Switched from iptables to nftables and disabled systemd-resolved.
My /rw/config/rc.local:

mkdir -p /var/cache/dnscrypt-proxy/
/usr/bin/systemctl disable systemd-resolved
/usr/bin/systemctl stop systemd-resolved

# allow redirects to localhost
/usr/sbin/sysctl -w net.ipv4.conf.all.route_localnet=1
/usr/sbin/nft add rule ip qubes custom-input iifname "vif*" ip daddr 127.0.0.1 udp dport 53 counter accept
/usr/sbin/nft add rule ip qubes custom-input iifname "vif*" ip daddr 127.0.0.1 tcp dport 53 counter accept

# redirect dns-requests to localhost
/usr/sbin/nft flush chain ip qubes dnat-dns
/usr/sbin/nft add rule ip qubes dnat-dns ip daddr 10.139.1.1 udp dport 53 counter dnat to 127.0.0.1
/usr/sbin/nft add rule ip qubes dnat-dns ip daddr 10.139.1.1 tcp dport 53 counter dnat to 127.0.0.1
/usr/sbin/nft add rule ip qubes dnat-dns ip daddr 10.139.1.2 udp dport 53 counter dnat to 127.0.0.1
/usr/sbin/nft add rule ip qubes dnat-dns ip daddr 10.139.1.2 tcp dport 53 counter dnat to 127.0.0.1

# set /etc/resolv.conf and start dnscrypt-proxy
echo "nameserver 127.0.0.1" > /etc/resolv.conf
/usr/bin/systemctl start dnscrypt-proxy.service

If other people confirm that it works for them and see no bugs in it, it could replace the original config in this guide.
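For anyone testing the script above, a quick smoke test (sketch; `dig` comes from bind-utils on Fedora, dnsutils on Debian):

```shell
# From a qube attached to this DNS qube: queries to the virtual DNS
# address should now be answered by the local dnscrypt-proxy.
dig +short example.com @10.139.1.1

# Inside the DNS qube itself: the counters on the dnat rules should
# increase with each downstream query.
nft list chain ip qubes dnat-dns
```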

This is in fedora-xx-minimal-dns right?