What’s in your dnat-dns chain inside your VPN qube when DNS is not working?
sudo nft list chain ip qubes dnat-dns
[user@mullvad-vpn ~]$ sudo nft list chain ip qubes dnat-dns
table ip qubes {
    chain dnat-dns {
        type nat hook prerouting priority dstnat; policy accept;
        ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
        ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
        ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
        ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
    }
}
Same result with the VPN connected or disconnected, as well as with sys-firewall or sys-whonix as the net qube.
What happens if you run both of these commands while connected to the VPN and test again?
sudo nft flush chain ip qubes dnat-dns
sudo nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to $(head -1 /etc/resolv.conf | awk '{print $2}')
I’d start with:
cat /etc/resolv.conf
just in case those entries were generated by NetworkManager; if they were, you’d be nat’ing to “Generated”.
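To spell out why that matters (a hypothetical illustration, not output from this thread): a NetworkManager-managed resolv.conf starts with a comment line, so taking the second field of line 1 would yield the word “Generated” instead of an IP address.

# Hypothetical resolv.conf written by NetworkManager:
#   # Generated by NetworkManager
#   nameserver 10.64.0.1
head -1 /etc/resolv.conf | awk '{print $2}'                  # would print "Generated"
grep -m1 '^nameserver' /etc/resolv.conf | awk '{print $2}'   # prints the actual resolver, e.g. 10.64.0.1

The grep variant is just a defensive alternative; if resolv.conf starts with a nameserver line, the original one-liner works as intended.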
Seems like that resolved the issue for good, thanks for the quick help.
[user@mullvad-vpn ~]$ sudo nft flush chain ip qubes dnat-dns
[user@mullvad-vpn ~]$ sudo nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to $(head -1 /etc/resolv.conf | awk '{print $2}')
[user@mullvad-vpn ~]$ sudo nft list chain ip qubes dnat-dns
table ip qubes {
    chain dnat-dns {
        type nat hook prerouting priority dstnat; policy accept;
        meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to 10.64.0.1
    }
}
[user@disp5174 ~]$ ping -c 2 qubes-os.org
PING qubes-os.org (104.21.64.1) 56(84) bytes of data.
[...]
--- qubes-os.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
Now how would I make this persistent? I tried editing /etc/systemd/system/dnat-to-ns.service so that, instead of running /usr/lib/qubes/qubes-setup-dnat-to-ns every time /etc/resolv.conf changes, it runs my own script with the commands you provided, but the dnat-dns chain either doesn’t get updated or gets overridden.
Separately, what’s the actual fix here? Redirecting DNS packets from the Qubes DNS addresses to Mullvad’s DNS? If so, why is the default Qubes DNS not viable when using sys-whonix?
One option would be to disable the systemd service you created from the previously linked guide and use the script (from the guide we are currently posting in) from the “Fix DNS” section.
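For orientation only, here is a minimal sketch of the kind of watcher script that section describes, assuming it relies on inotifywait (from inotify-tools, which is required as noted further down) and simply reapplies the dnat rule whenever /etc/resolv.conf changes; the path and the exact contents of the real script are in the guide, not here.

#!/bin/bash
# Sketch only -- runs as root inside the VPN qube; /rw/config/dns-fix.sh is a hypothetical path.

apply_dnat() {
    dns="$(grep -m1 '^nameserver' /etc/resolv.conf | awk '{print $2}')"
    [ -n "$dns" ] || return
    nft flush chain ip qubes dnat-dns
    nft add rule ip qubes dnat-dns meta l4proto '{ tcp, udp }' \
        ip daddr '{ 10.139.1.1, 10.139.1.2 }' th dport 53 dnat to "$dns"
}

apply_dnat
# Watch /etc because resolv.conf may be replaced atomically rather than edited in place.
while inotifywait -qq -e close_write,moved_to,create /etc; do
    apply_dnat
done

Something still needs to start it at boot (for example from /rw/config/rc.local or a small systemd unit), and the script must be executable.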
The Mullvad app only allows one specific DNS address to be used; in this case it’s the one specified in /etc/resolv.conf while the VPN is connected. This means that both 10.139.1.1 and 10.139.1.2 are blocked by the firewall rules, which also prevents leaks. The fix here is to redirect all incoming DNS requests to the Mullvad internal DNS address instead of using the Qubes ones, which are only used to go down the chain until they hit a real DNS server that can resolve domains (most of the time it’s sys-net).
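If you want to confirm the redirect from a downstream qube (an optional check, not part of the guide), you can query the virtual Qubes DNS address directly; the rule shown above rewrites the destination to the Mullvad resolver, so getting an answer means the DNAT is working.

# In an AppVM attached to the VPN qube; dig comes from bind-utils (Fedora) or dnsutils (Debian).
dig +short @10.139.1.1 qubes-os.org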
Hi, thanks for the great guide, but I have a question. I use sys-whonix before the connection, then I use sys-vpn set up according to your guide step by step. Everything was fine for 5 months, but lately Mullvad has been reconnecting frequently (every 2 minutes), which makes programs crash so I can’t work. Please advise what I should change to make it work on my side.
I assume you use OpenVPN in TCP mode, right?
Do you have any connection logs from when it disconnects and reconnects?
Your account hasn’t been breached by any chance?
/rw/config/rc.local doesn’t work. Is it because I’m using Debian?
I still can’t resolve DNS queries, despite using the fix. I can ping IPs and I am able to connect to the VPN without a problem, but I am unable to reach any website.
Is there anything else I have to consider?
Edit:
Pinging 9.9.9.9 and quad9.net works perfectly fine within the mullvad-vpn qube, but not in the working qube.
Can you share the output of the following command in the VPN qube?
sudo nft list chain ip qubes dnat-dns
If you made any changes to the app, could you please list them?
table ip qubes {
    chain dnat-dns {
        type nat hook prerouting priority dstnat; policy accept;
        ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
        ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
        ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
        ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
    }
}
I enabled multihop, autolaunch, autoconnect, and lockdown mode.
Once the VPN is connected, the chain’s content should change to the Mullvad DNS IP address. That isn’t happening here. You are either not connected, or the script is not running to update the rules. Make sure you have created the script and made it executable and, most importantly, that you have inotify-tools installed. Otherwise, the script won’t work.
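For reference, the missing pieces usually look something like this (the package name is the same on both families; the script path is whatever you chose when following the guide):

# Fedora-based template or standalone
sudo dnf install inotify-tools
# Debian-based template or standalone
sudo apt install inotify-tools
# Make the DNS-fix script executable (example path)
sudo chmod +x /rw/config/dns-fix.sh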
@DVM I did in fact not install the inotify packages. It works now.
Thank you for your answers.
@solene Thank you also again for your help, not only with your guide, but you also did reply to me in another thread a few days ago.
You know, you said to give 800 MB of minimum memory, and I guess my brain thought that meant max memory should be 8000 MB, but that doesn’t quite make sense. I’m running only a VPN in this standalone.
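In case it helps, initial and maximum memory are separate qube properties set from dom0; the qube name and the 1200 MB ceiling below are just illustrative values, not recommendations from the guide.

# In dom0: 800 MB initial memory, with a modest ceiling for a VPN-only qube
qvm-prefs mullvad-vpn memory 800
qvm-prefs mullvad-vpn maxmem 1200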
Two questions:
qubes-core-agent-networking, OpenVPN, and all the requisite NetVM packages listed in Qubes’ official documentation (.onion) (clearnet)? Thanks for your help!
This is basically what I did for my fedora-41-minimal; essentially the same packages. So far, no issues.
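For anyone following along, the install step in a minimal template presumably looks roughly like the lines below; only the first two packages are explicitly named in this thread, and the NetworkManager agent is an optional extra from the Qubes minimal-template docs.

# In the fedora-41-minimal template
sudo dnf install qubes-core-agent-networking openvpn
# Optional, if you manage the connection with NetworkManager:
# sudo dnf install qubes-core-agent-network-manager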
Although, I did have to briefly install Mousepad to make copying and pasting the script easier (as pasting in xterm isn’t possible). Once I finished, I deleted Mousepad, as I’m going for as-minimal-as-possible.
Also, I’m running this on 1 vCPU, so we’ll see how that goes going forward.
I’ll report on the state of things soon
I had trouble doing this again on a fresh Qubes install. It seemed to work perfectly with fedora-39 (it’s fedora-41 now).
The connection was working fine in the service qube itself when I used Firefox, but in a new qube routed through the Mullvad service, pages just wouldn’t load, even though I followed the instructions to a T.
Instead, I figured I’d see if it works from the template, so I just installed it in the fedora-41 template, which works great.
Are there any downsides to this? As far as I can see, it’s quite nice, as I can separate Mullvad accounts if I want to, and obviously choose whether or not to use Mullvad in any qube I create from the Fedora template.
Thanks
Is there a way to permanently set DNS to Mullvad’s servers? I assume most people never plan on disconnecting.
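One way to sketch it (not from the guide, and assuming the in-tunnel resolver stays at 10.64.0.1 as in the output earlier in the thread) is to hardcode that address in the dnat rule at qube startup, for example from /rw/config/rc.local. Note that Qubes’ own dnat-to-ns service can still rewrite the chain whenever /etc/resolv.conf changes, which is exactly why the guide prefers the inotify-based watcher script.

# /rw/config/rc.local (runs as root at qube startup) -- sketch only
nft flush chain ip qubes dnat-dns
nft add rule ip qubes dnat-dns meta l4proto '{ tcp, udp }' \
    ip daddr '{ 10.139.1.1, 10.139.1.2 }' th dport 53 dnat to 10.64.0.1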