Mullvad VPN App 4.2 setup guide

OK, the last line in this log appeared right after the ping from the work qube:

[user@sys-vpn ~]$ sudo journalctl -f
Apr 08 17:05:26 sys-vpn sudo[1564]: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/usr/bin/journalctl -f
Apr 08 17:05:26 sys-vpn audit[1564]: CRED_REFR pid=1564 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Apr 08 17:05:26 sys-vpn kernel: audit: type=1101 audit(1712559926.528:163): pid=1564 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Apr 08 17:05:26 sys-vpn kernel: audit: type=1123 audit(1712559926.528:164): pid=1564 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/user" cmd=6A6F75726E616C63746C202D66 exe="/usr/bin/sudo" terminal=pts/0 res=success'
Apr 08 17:05:26 sys-vpn kernel: audit: type=1110 audit(1712559926.529:165): pid=1564 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Apr 08 17:05:26 sys-vpn sudo[1564]: pam_unix(sudo:session): session opened for user root(uid=0) by user(uid=1000)
Apr 08 17:05:26 sys-vpn audit[1564]: USER_START pid=1564 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Apr 08 17:05:26 sys-vpn kernel: audit: type=1105 audit(1712559926.532:166): pid=1564 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Apr 08 17:05:26 sys-vpn audit[1565]: USER_ROLE_CHANGE pid=1565 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='newrole: old-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 new-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
Apr 08 17:05:26 sys-vpn kernel: audit: type=2300 audit(1712559926.533:167): pid=1565 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='newrole: old-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 new-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
Apr 08 17:09:06 sys-vpn systemd[763]: Started dbus-:1.3-org.xfce.Xfconf@1.service.

Seems like nothing is coming from your work qube to the sys-vpn qube.
Just to make sure: does your work qube have sys-vpn set as its Net qube in Qube Settings?
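You can also check this from dom0 with qvm-prefs (assuming your qube is actually named work):

qvm-prefs work netvm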

Try to ping 9.9.9.9 from your work qube while looking at the firewall logs with journalctl.
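For example:

# in sys-vpn:
sudo journalctl -f
# in the work qube:
ping -c 1 9.9.9.9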

Yup, it's definitely connected to sys-vpn. Pinging 9.9.9.9 works fine, but ping quad9.net does not. Here are the log results from the ping 9.9.9.9:

Apr 08 18:19:40 sys-vpn kernel: IN=vif12.0 OUT=wg0-mullvad MAC=fe:ff:ff:ff:ff:ff:00:16:3e:5e:6c:00:08:00 SRC=10.137.0.10 DST=9.9.9.9 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=12598 DF PROTO=ICMP TYPE=8 CODE=0 ID=1 SEQ=33
Apr 08 18:19:40 sys-vpn kernel: IN=wg0-mullvad OUT=vif12.0 MAC= SRC=9.9.9.9 DST=10.137.0.10 LEN=84 TOS=0x00 PREC=0x00 TTL=58 ID=10296 PROTO=ICMP TYPE=0 CODE=0 ID=1 SEQ=33

Run these commands in sys-vpn:

sudo nft add chain ip qubes log-chain '{ type filter hook prerouting priority -450; }'
sudo nft insert rule ip qubes log-chain log
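When you're done debugging, the temporary chain can be removed again (it has to be flushed before nft will delete it):

sudo nft flush chain ip qubes log-chain
sudo nft delete chain ip qubes log-chain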

Then start viewing the log in sys-vpn:

sudo journalctl -f -n0

And try to ping from your work qube:

ping -c 1 9.9.9.9
ping -c 1 quad9.net

Apr 08 18:47:07 sys-vpn kernel: IN=vif12.0 OUT= MAC=fe:ff:ff:ff:ff:ff:00:16:3e:5e:6c:00:08:00 SRC=10.137.0.10 DST=10.139.1.2 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=59884 DF PROTO=TCP SPT=57488 DPT=53 WINDOW=32120 RES=0x00 SYN URGP=0
Apr 08 18:47:08 sys-vpn kernel: IN=vif12.0 OUT= MAC=fe:ff:ff:ff:ff:ff:00:16:3e:5e:6c:00:08:00 SRC=10.137.0.10 DST=10.139.1.1 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=48024 DF PROTO=TCP SPT=60364 DPT=53 WINDOW=32120 RES=0x00 SYN URGP=0
Apr 08 18:47:08 sys-vpn kernel: IN=vif12.0 OUT= MAC=fe:ff:ff:ff:ff:ff:00:16:3e:5e:6c:00:08:00 SRC=10.137.0.10 DST=10.139.1.2 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=59885 DF PROTO=TCP SPT=57488 DPT=53 WINDOW=32120 RES=0x00 SYN URGP=0
Apr 08 18:47:09 sys-vpn kernel: IN=vif12.0 OUT= MAC=fe:ff:ff:ff:ff:ff:00:16:3e:5e:6c:00:08:00 SRC=10.137.0.10 DST=10.139.1.1 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=48025 DF PROTO=TCP SPT=60364 DPT=53 WINDOW=32120 RES=0x00 SYN URGP=0

What’s the output of this command in sys-vpn?

sudo nft list ruleset

[user@sys-vpn ~]$ sudo nft list ruleset
table ip qubes {
set downstream {
	type ipv4_addr
	elements = { 10.137.0.10 }
}

set allowed {
	type ifname . ipv4_addr
	elements = { "vif12.0" . 10.137.0.10 }
}

chain prerouting {
	type filter hook prerouting priority raw; policy accept;
	iifgroup 2 goto antispoof
	ip saddr @downstream counter packets 0 bytes 0 drop
}

chain antispoof {
	iifname . ip saddr @allowed accept
	counter packets 0 bytes 0 drop
}

chain postrouting {
	type nat hook postrouting priority srcnat; policy accept;
	oifgroup 2 accept
	oif "lo" accept
	masquerade
}

chain input {
	type filter hook input priority filter; policy drop;
	jump custom-input
	ct state invalid counter packets 2 bytes 80 drop
	iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
	ct state established,related accept
	iifgroup 2 meta l4proto icmp accept
	iif "lo" accept
	iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
	counter packets 0 bytes 0
}

chain forward {
	type filter hook forward priority filter; policy accept;
	jump custom-forward
	ct state invalid counter packets 0 bytes 0 drop
	ct state established,related accept
	oifgroup 2 counter packets 0 bytes 0 drop
}

chain custom-input {
}

chain custom-forward {
	log
	tcp flags syn / syn,rst tcp option maxseg size set rt mtu
	oifname "eth0" counter packets 0 bytes 0 drop
}

chain nat {
	type nat hook prerouting priority dstnat - 1; policy accept;
	iifname "vif*" tcp dport 53 counter packets 1079 bytes 60364 dnat to 10.64.0.1
	iifname "vif*" udp dport 53 counter packets 5116 bytes 363908 dnat to 10.64.0.1
}

chain dnat-dns {
	type nat hook prerouting priority dstnat; policy accept;
	ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
	ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
	ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
	ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
}

chain log-chain {
	type filter hook prerouting priority -450; policy accept;
	log
}

}
table ip6 qubes {
set downstream {
	type ipv6_addr
}

set allowed {
	type ifname . ipv6_addr
}

chain antispoof {
	iifname . ip6 saddr @allowed accept
	counter packets 25 bytes 1612 drop
}

chain prerouting {
	type filter hook prerouting priority raw; policy accept;
	iifgroup 2 goto antispoof
	ip6 saddr @downstream counter packets 0 bytes 0 drop
}

chain postrouting {
	type nat hook postrouting priority srcnat; policy accept;
	oifgroup 2 accept
	oif "lo" accept
	masquerade
}

chain _icmpv6 {
	meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
	icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
	accept
}

chain input {
	type filter hook input priority filter; policy drop;
	jump custom-input
	ct state invalid counter packets 0 bytes 0 drop
	ct state established,related accept
	iifgroup 2 goto _icmpv6
	iif "lo" accept
	ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
	meta l4proto ipv6-icmp accept
	counter packets 0 bytes 0
}

chain forward {
	type filter hook forward priority filter; policy accept;
	jump custom-forward
	ct state invalid counter packets 0 bytes 0 drop
	ct state established,related accept
	oifgroup 2 counter packets 0 bytes 0 drop
}

chain custom-input {
}

chain custom-forward {
	oifname "eth0" counter packets 0 bytes 0 drop
}

}
table ip qubes-firewall {
chain forward {
	type filter hook forward priority filter; policy drop;
	ct state established,related accept
	iifname != "vif*" accept
	ip saddr 10.137.0.10 jump qbs-10-137-0-10
}

chain prerouting {
	type filter hook prerouting priority raw; policy accept;
	iifname != "vif*" ip saddr 10.137.0.10 drop
}

chain postrouting {
	type filter hook postrouting priority raw; policy accept;
	oifname != "vif*" ip daddr 10.137.0.10 drop
}

chain qbs-10-137-0-10 {
	accept
	reject with icmp admin-prohibited
}

}
table ip6 qubes-firewall {
chain forward {
	type filter hook forward priority filter; policy drop;
	ct state established,related accept
	iifname != "vif*" accept
}

chain prerouting {
	type filter hook prerouting priority raw; policy accept;
}

chain postrouting {
	type filter hook postrouting priority raw; policy accept;
}

}
table inet mullvad {
chain prerouting {
	type filter hook prerouting priority -199; policy accept;
	iif != "wg0-mullvad" ct mark 0x00000f41 meta mark set 0x6d6f6c65
	ip saddr 103.216.220.18 udp sport 21341 meta mark set 0x6d6f6c65
}

chain output {
	type filter hook output priority filter; policy drop;
	oif "lo" accept
	ct mark 0x00000f41 accept
	udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
	ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
	ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
	ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
	ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
	ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
	ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
	ip daddr 103.216.220.18 udp dport 21341 meta mark 0x6d6f6c65 accept
	oif "wg0-mullvad" udp dport 53 ip daddr 100.64.0.23 accept
	oif "wg0-mullvad" tcp dport 53 ip daddr 100.64.0.23 accept
	udp dport 53 reject
	tcp dport 53 reject with tcp reset
	oif "wg0-mullvad" accept
	reject
}

chain input {
	type filter hook input priority filter; policy drop;
	iif "lo" accept
	ct mark 0x00000f41 accept
	udp sport 67 udp dport 68 accept
	ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
	ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
	ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
	ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
	icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
	ip saddr 103.216.220.18 udp sport 21341 ct state established accept
	iif "wg0-mullvad" accept
}

chain forward {
	type filter hook forward priority filter; policy drop;
	ct mark 0x00000f41 accept
	udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
	udp sport 67 udp dport 68 accept
	ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
	ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
	ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
	ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
	ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
	ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
	ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
	ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
	ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
	ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
	icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
	oif "wg0-mullvad" udp dport 53 ip daddr 100.64.0.23 accept
	oif "wg0-mullvad" tcp dport 53 ip daddr 100.64.0.23 accept
	udp dport 53 reject
	tcp dport 53 reject with tcp reset
	oif "wg0-mullvad" accept
	iif "wg0-mullvad" ct state established accept
	reject
}

chain mangle {
	type route hook output priority mangle; policy accept;
	oif "wg0-mullvad" udp dport 53 ip daddr 100.64.0.23 accept
	oif "wg0-mullvad" tcp dport 53 ip daddr 100.64.0.23 accept
	meta cgroup 5087041 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}

chain nat {
	type nat hook postrouting priority srcnat; policy accept;
	oif "wg0-mullvad" ct mark 0x00000f41 drop
	oif != "lo" ct mark 0x00000f41 masquerade
}

}
table inet qubes-nat-accel {
flowtable qubes-accel {
	hook ingress priority filter
	devices = { eth0, lo, vif12.0 }
}

chain qubes-accel {
	type filter hook forward priority filter + 5; policy accept;
	meta l4proto { tcp, udp } iifgroup 2 oifgroup 1 flow add @qubes-accel
	counter packets 68 bytes 5712
}

}
[user@sys-vpn ~]$

Just to confirm, since I don’t think you specified it: are you using WireGuard or OpenVPN in the app?

Using WireGuard.

Run these commands in sys-vpn and check ping -c 1 quad9.net in your work qube:

sudo nft flush chain ip qubes nat
sudo nft add rule ip qubes nat iifname "vif*" tcp dport 53 dnat to 100.64.0.23
sudo nft add rule ip qubes nat iifname "vif*" udp dport 53 dnat to 100.64.0.23

The Mullvad app creates its own rules alongside the Qubes OS ones, so they conflict if not set up properly.
The Mullvad app blocks all DNS queries except the ones going to 100.64.0.23.
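You can confirm the new rules are in place with:

sudo nft list chain ip qubes nat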

[user@work ~]$ ping -c 1 9.9.9.9
ping -c 1 quad9.net
PING 9.9.9.9 (9.9.9.9) 56(84) bytes of data.
64 bytes from 9.9.9.9: icmp_seq=1 ttl=58 time=28.6 ms

--- 9.9.9.9 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 28.641/28.641/28.641/0.000 ms
ping: quad9.net: Temporary failure in name resolution
[user@work ~]$ ping -c 1 quad9.net
PING quad9.net (216.21.3.77) 56(84) bytes of data.
64 bytes from web1.sjc.rrdns.pch.net (216.21.3.77): icmp_seq=1 ttl=52 time=178 ms

--- quad9.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 178.350/178.350/178.350/0.000 ms
[user@work ~]$

Seems like it works, so to make it persistent, just change the DNS to 100.64.0.23 in your /rw/config/qubes-firewall-user-script in sys-vpn.
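A minimal sketch of what that script could look like (untested; assumes the app keeps using 100.64.0.23 as its in-tunnel DNS):

#!/bin/sh
# /rw/config/qubes-firewall-user-script in sys-vpn
# Redirect DNS from downstream qubes to the Mullvad in-tunnel
# resolver instead of the default 10.64.0.1.
VPN_DNS=100.64.0.23
nft flush chain ip qubes nat
nft add rule ip qubes nat iifname "vif*" tcp dport 53 dnat to $VPN_DNS
nft add rule ip qubes nat iifname "vif*" udp dport 53 dnat to $VPN_DNS

Don’t forget to make it executable with chmod +x.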

This changed something, as things are now starting to work. A bit laggy, but the browser is opening now.

@solene The following script should be added to the main post. The DNS address will not always be 10.64.0.1, especially with OpenVPN and DNS filtering rules.

This did the job, it works now. Thank you for all the help, fantastic!

There is a much more elegant method for the IVPN app, but I couldn’t get it to work with Mullvad, which is weird. Could you give it an eyeball?

systemctl restart systemd-resolved
/usr/lib/qubes/qubes-setup-dnat-to-ns

When resolv.conf is overwritten, restarting systemd-resolved puts it in “foreign mode” (where resolv.conf is managed by something else), and the Qubes helper script should then propagate the new DNS to the downstream qubes properly.
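Wrapped into a script, that would be (assuming a template where systemd-resolved is actually running; see the caveat below):

#!/bin/sh
# IVPN-style variant: make systemd-resolved re-read the
# resolv.conf the VPN app wrote, then propagate the new DNS
# to the downstream qubes.
systemctl restart systemd-resolved
/usr/lib/qubes/qubes-setup-dnat-to-ns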

Congrats @apparatus and good job @darkgh05t :+1:

OK, I just noticed that when I change the DNS filtering options in the app, we have connection issues again. So adding this script should keep the DNS updated, correct?

The Mullvad app uses multiple mechanisms for its DNS customization (see TALPID_DNS_MODULE):

It always uses “static-file” from what I can see.
Your method might work on Fedora, but systemd-resolved is not used in the Debian template, so you would have to install it manually in that case.
Since it’s easier to just follow what the application does by default, the script should work out of the box. If you want something more compact, you could keep just the inotifywait part and run /usr/lib/qubes/qubes-setup-dnat-to-ns every time the application edits the file, as sketched below.
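A minimal sketch of that compact variant (assumes inotify-tools is installed in the template and that the app modifies /etc/resolv.conf in place, per its “static-file” mode):

#!/bin/sh
# Re-run the Qubes DNS DNAT helper every time the Mullvad app
# rewrites /etc/resolv.conf. inotifywait exits after one event,
# so the loop re-arms the watch on each iteration.
while inotifywait -e close_write /etc/resolv.conf; do
    /usr/lib/qubes/qubes-setup-dnat-to-ns
done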