I’m new to Qubes OS and followed @qubist’s guide to set up a sys-dns qube with dnscrypt-proxy. I used the nftables rules from @DVM’s answer and disabled systemd-resolved as suggested by @buhduh. However, DNS queries from sys-wall are timing out.
Symptoms:
DNS queries reach sys-dns (confirmed via tcpdump).
dnscrypt-proxy is running but not responding to queries.
nslookup gnu.org $(qubesdb-read /qubes-gateway) in sys-wall times out.
What I’ve Tried:
Verified nftables rules in both sys-wall and sys-dns.
Confirmed dnscrypt-proxy is running and listening on 127.0.0.1:53.
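Roughly the commands I used for these checks (a sketch; run in sys-dns, output will vary with your setup):

```shell
# In sys-dns: watch for DNS queries arriving from downstream qubes
sudo tcpdump -ni any udp port 53

# In sys-dns: confirm something (dnscrypt-proxy) is listening on 127.0.0.1:53
sudo ss -lntup | grep ':53'
```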
My first thought is that by using the gateway IP you are not hitting the nftables nat rule. It only forwards 10.139.1.1 and 10.139.1.2, the default internal Qubes DNS addresses, so every other destination address falls through to the final drop rule. If you replace $(qubesdb-read /qubes-gateway) with 10.139.1.1, does it work?
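If you also want the gateway IP itself to be accepted as a DNS target, one option (a sketch, untested; adapt it to your own rc.local) is to include it in the daddr set when the rule is created:

```shell
# sys-wall /rw/config/rc.local sketch: also redirect queries sent to the gateway IP
nft='/usr/sbin/nft'
gw="$(/usr/bin/qubesdb-read /qubes-gateway)"
"${nft}" flush chain ip qubes dnat-dns
"${nft}" add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2, "${gw}" } th dport 53 dnat to "${gw}"
# everything else on port 53 still gets dropped
"${nft}" add rule ip qubes dnat-dns meta l4proto { tcp, udp } th dport 53 drop
```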
@DVM After posting, I realized there were some typos and omissions in the initial response. For example, I missed adding elements = { 10.138.10.105 } and elements = { "vif45.0" . 10.138.10.105 } in the nftables rules, and there were typos like using ifgroup instead of iifgroup. I’ve corrected these and updated the response accordingly. Let me know if you need further clarification, Thank you!
This should contain a dnat-dns chain with 2 rules, but the chain doesn’t seem to be there. Check the firewall rules in /rw/config/rc.local and see if there’s a problem there.
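To check quickly whether the chain was actually created, something like this in sys-wall should list it and its rules (sketch):

```shell
# Lists only the dnat-dns chain of the qubes table; errors out if it is absent
sudo nft list chain ip qubes dnat-dns
```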
Are you able to resolve any domain from sys-dns itself using 127.0.0.1?
bash-5.2# host gnu.org 127.0.0.1
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:
gnu.org has address 209.51.188.116
gnu.org has IPv6 address 2001:470:142:5::116
gnu.org mail is handled by 10 eggs.gnu.org.
Content of /rw/config/rc.local in sys-wall:
#!/bin/sh
# This script will be executed at every VM startup, you can place your own custom commands here.
# This includes overriding some configuration in /etc, starting services, etc.
nft='/usr/sbin/nft'
# redirect all dns-requests to sys-dns
"${nft}" flush chain ip qubes dnat-dns
"${nft}" add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to "$(/usr/bin/qubesdb-read /qubes-gateway)"
# Block connections to other DNS servers
"${nft}" add rule ip qubes dnat-dns meta l4proto { tcp, udp } th dport 53 drop
I copied this content from a suggestion you gave a long time ago.
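One thing worth double-checking when copying scripts like this: the dnat target must use command substitution, `$(...)`, not `${...}` variable expansion. A tiny self-contained illustration (fake_qubesdb_read is a hypothetical stand-in, not a real Qubes command):

```shell
# ${...} expands a shell variable; $(...) substitutes a command's output.
# fake_qubesdb_read is a hypothetical stand-in for /usr/bin/qubesdb-read.
fake_qubesdb_read() { echo 10.138.31.106; }

gw="$(fake_qubesdb_read)"   # command substitution: captures the printed IP
echo "dnat to ${gw}"        # variable expansion: prints "dnat to 10.138.31.106"
```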
@DVM, I apologize for reporting incorrect information earlier and for taking up your time. Thank you for your patience. Upon reviewing the screenshots, I can confirm that the dnat-dns chain is indeed present in the nft list table ip qubes output for sys-wall. Here’s the corrected information:
sudo nft list table ip qubes Output for sys-wall:
table ip qubes {
set downstream {
type ipv4_addr
}
set allowed {
type ifname . ipv4_addr
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifgroup 2 goto antispoof
ip saddr @downstream counter packets 0 bytes 0 drop
}
chain antispoof {
iifname . ip saddr @allowed accept
counter packets 0 bytes 0 drop
}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifgroup 2 accept
oif "lo" accept
masquerade
}
chain input {
type filter hook input priority filter; policy drop;
jump custom-input
ct state invalid counter packets 0 bytes 0 drop
iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
ct state established,related accept
iifgroup 2 meta l4proto icmp accept
iif "lo" accept
iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
counter packets 0 bytes 0
}
chain forward {
type filter hook forward priority filter; policy accept;
jump custom-forward
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
oifgroup 2 counter packets 0 bytes 0 drop
}
chain custom-input {
}
chain custom-forward {
}
chain chat-dns {
type nat hook prerouting priority dstnat; policy accept;
meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to 10.138.31.106
meta l4proto { tcp, udp } th dport 53 drop
}
}
I verified again today: host gnu.org in sys-wall still times out, whereas in sys-dns it works fine.
Just to make sure the problem is coming from sys-wall, are you able to resolve domains with nslookup gnu.org $(qubesdb-read /qubes-gateway) from an unmodified app qube connected directly to sys-dns?
I tested nslookup gnu.org $(qubesdb-read /qubes-gateway) in an unmodified AppVM connected directly to sys-dns, and it timed out:
[user@u1 ~]$ nslookup gnu.org $(qubesdb-read /qubes-gateway)
;; communications error to 10.138.31.106#53: timed out
;; communications error to 10.138.31.106#53: timed out
;; communications error to 10.138.31.106#53: timed out
;; no servers could be reached
Does this indicate that the issue is not specific to sys-wall but affects any VM connected to sys-dns? Let me know if you need further details! Thanks
[user@u1 ~]$ ping -c 4 $(qubesdb-read /qubes-gateway)
PING 10.138.31.106 (10.138.31.106) 56(84) bytes of data.
64 bytes from 10.138.31.106: icmp_seq=1 ttl=64 time=9.37 ms
64 bytes from 10.138.31.106: icmp_seq=2 ttl=64 time=0.849 ms
64 bytes from 10.138.31.106: icmp_seq=3 ttl=64 time=0.800 ms
64 bytes from 10.138.31.106: icmp_seq=4 ttl=64 time=0.616 ms
--- 10.138.31.106 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3052ms
rtt min/avg/max/mdev = 0.616/2.909/9.371/3.731 ms
DNS resolution attempt:
[user@u1 ~]$ host gnu.org
;; communications error to 10.139.1.1#53: timed out
;; communications error to 10.139.1.1#53: timed out
;; communications error to 10.139.1.2#53: timed out
;; no servers could be reached
DNS requests should be redirected by the nftables rule once they reach sys-dns, so there’s probably a problem there.
Going back to your previous sys-dns nftables rules, I see that for some reason the chain is called chat-dns instead of dnat-dns and the rule contains chat instead of dnat, so I’m not sure about its integrity. Is it possible for you to redo this?
@DVM, earlier errors were due to an LLM parsing screenshots for command outputs. For clarity, here’s a screenshot of nft list ruleset from sys-dns. Let me know if you need further details.
Thanks
There’s a restriction here that prevents me from posting for two hours (weird). On top of that, the “only one msg” restriction kept me from replying to DVM directly.
I’ll share updates on my DNS resolution issue in sys-dns for visibility.
Steps Taken:
Edited /rw/config/rc.local in f41-m-dns-dvm using nano and added the following command at the end of the existing nft lines (as suggested by DVM):
"${nft}" add rule ip qubes dnat-dns meta l4proto { tcp, udp } th dport 53 counter dnat to 127.0.0.1
Now my rc.local looks like this:
/rw/config/rc.local in sys-dns
#!/bin/sh
# This script will be executed at every VM startup, you can place your own
# custom commands here. This includes overriding some configuration in /etc,
# starting services etc.
nft='/usr/sbin/nft'
# allow redirects to localhost
/usr/sbin/sysctl -w net.ipv4.conf.all.route_localnet=1
"${nft}" add rule ip qubes custom-input meta l4proto { tcp, udp } iifgroup 2 ip daddr 127.0.0.1 th dport 53 accept
# block connections to other DNS servers
"${nft}" add rule ip qubes custom-forward meta l4proto { tcp, udp } iifgroup 2 ip daddr != 127.0.0.1 th dport 53 drop
"${nft}" flush chain ip qubes dnat-dns
"${nft}" add rule ip qubes dnat-dns meta l4proto { tcp, udp } th dport 53 dnat to 127.0.0.1
"${nft}" add rule ip qubes dnat-dns meta l4proto { tcp, udp } th dport 53 counter dnat to 127.0.0.1
echo 'nameserver 127.0.0.1' > /etc/resolv.conf
# https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Installation-linux
# https://wiki.archlinux.org/title/Dnscrypt-proxy#Enable_EDNS0
echo 'options edns0' >> /etc/resolv.conf
ln -s /rw/dnscrypt-proxy /etc/dnscrypt-proxy
/usr/bin/systemctl start dnscrypt-proxy.service
Shut down f41-m-dns-dvm.
Restarted sys-dns.
Ran tcpdump in sys-dns while testing with host gnu.org and ping in an AppVM.
Ran nft list table ip qubes and nft list ruleset in sys-dns.
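Roughly the diagnostic commands behind those steps (a sketch; run in sys-dns):

```shell
# The counters on the dnat rules show whether queries are being redirected at all
sudo nft list chain ip qubes dnat-dns

# Watch loopback to see whether redirected queries actually reach 127.0.0.1:53
sudo tcpdump -ni lo udp port 53
```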
`nft list table ip qubes` output
table ip qubes {
set downstream {
type ipv4_addr
elements = { 10.137.0.18, 10.138.10.105 }
}
set allowed {
type ifname . ipv4_addr
elements = { "vif19.0" . 10.137.0.18,
"vif26.0" . 10.138.10.105 }
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifgroup 2 goto antispoof
ip saddr @downstream counter packets 0 bytes 0 drop
}
chain antispoof {
iifname . ip saddr @allowed accept
counter packets 0 bytes 0 drop
}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifgroup 2 accept
oif "lo" accept
masquerade
}
chain input {
type filter hook input priority filter; policy drop;
jump custom-input
ct state invalid counter packets 0 bytes 0 drop
iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
ct state established,related accept
iifgroup 2 meta l4proto icmp accept
iif "lo" accept
iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
counter packets 0 bytes 0
}
chain forward {
type filter hook forward priority filter; policy accept;
jump custom-forward
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
oifgroup 2 counter packets 0 bytes 0 drop
}
chain custom-input {
meta l4proto { tcp, udp } iifgroup 2 ip daddr 127.0.0.1 th dport 53 accept
}
chain custom-forward {
meta l4proto { tcp, udp } iifgroup 2 ip daddr != 127.0.0.1 th dport 53 drop
}
chain dnat-dns {
type nat hook prerouting priority dstnat; policy accept;
meta l4proto { tcp, udp } th dport 53 dnat to 127.0.0.1
meta l4proto { tcp, udp } th dport 53 counter packets 0 bytes 0 dnat to 127.0.0.1
}
}
`nft list ruleset` output
table ip qubes {
set downstream {
type ipv4_addr
elements = { 10.137.0.18, 10.138.10.105 }
}
set allowed {
type ifname . ipv4_addr
elements = { "vif19.0" . 10.137.0.18,
"vif26.0" . 10.138.10.105 }
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifgroup 2 goto antispoof
ip saddr @downstream counter packets 0 bytes 0 drop
}
chain antispoof {
iifname . ip saddr @allowed accept
counter packets 0 bytes 0 drop
}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifgroup 2 accept
oif "lo" accept
masquerade
}
chain input {
type filter hook input priority filter; policy drop;
jump custom-input
ct state invalid counter packets 0 bytes 0 drop
iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
ct state established,related accept
iifgroup 2 meta l4proto icmp accept
iif "lo" accept
iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
counter packets 0 bytes 0
}
chain forward {
type filter hook forward priority filter; policy accept;
jump custom-forward
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
oifgroup 2 counter packets 0 bytes 0 drop
}
chain custom-input {
meta l4proto { tcp, udp } iifgroup 2 ip daddr 127.0.0.1 th dport 53 accept
}
chain custom-forward {
meta l4proto { tcp, udp } iifgroup 2 ip daddr != 127.0.0.1 th dport 53 drop
}
chain dnat-dns {
type nat hook prerouting priority dstnat; policy accept;
meta l4proto { tcp, udp } th dport 53 dnat to 127.0.0.1
meta l4proto { tcp, udp } th dport 53 counter packets 0 bytes 0 dnat to 127.0.0.1
}
}
table ip6 qubes {
set downstream {
type ipv6_addr
}
set allowed {
type ifname . ipv6_addr
}
chain antispoof {
iifname . ip6 saddr @allowed accept
counter packets 26 bytes 1668 drop
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifgroup 2 goto antispoof
ip6 saddr @downstream counter packets 0 bytes 0 drop
}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifgroup 2 accept
oif "lo" accept
masquerade
}
chain _icmpv6 {
meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
accept
}
chain input {
type filter hook input priority filter; policy drop;
jump custom-input
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
iifgroup 2 goto _icmpv6
iif "lo" accept
ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
meta l4proto ipv6-icmp accept
counter packets 0 bytes 0
}
chain forward {
type filter hook forward priority filter; policy accept;
jump custom-forward
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
oifgroup 2 counter packets 0 bytes 0 drop
}
chain custom-input {
}
chain custom-forward {
}
}
table ip qubes-firewall {
chain forward {
type filter hook forward priority filter; policy drop;
ct state established,related accept
iifname != "vif*" accept
ip saddr 10.137.0.18 jump qbs-10-137-0-18
ip saddr 10.138.10.105 jump qbs-10-138-10-105
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifname != "vif*" ip saddr { 10.137.0.18, 10.138.10.105 } drop
}
chain postrouting {
type filter hook postrouting priority raw; policy accept;
oifname != "vif*" ip daddr { 10.137.0.18, 10.138.10.105 } drop
}
chain qbs-10-137-0-18 {
accept
reject with icmp admin-prohibited
}
chain qbs-10-138-10-105 {
accept
reject with icmp admin-prohibited
}
}
table ip6 qubes-firewall {
chain forward {
type filter hook forward priority filter; policy drop;
ct state established,related accept
iifname != "vif*" accept
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
}
chain postrouting {
type filter hook postrouting priority raw; policy accept;
}
}
table inet qubes-nat-accel {
flowtable qubes-accel {
hook ingress priority filter
devices = { eth0, lo, vif19.0, vif26.0 }
}
chain qubes-accel {
type filter hook forward priority filter + 5; policy accept;
meta l4proto { tcp, udp } iifgroup 2 oifgroup 1 flow add @qubes-accel
counter packets 0 bytes 0
}
}