Wireguard VPN setup

Just curious, in Qubes case, what are the advantages of using Wireguard instead of OpenVPN?

You can automate disabling autoconnect for every WireGuard connection with this one-liner:

for con in $(nmcli -g name,type con | grep :wireguard$ | cut -d: -f1); do nmcli con modify "$con" connection.autoconnect no; done

WireGuard is faster than OpenVPN (more efficient, and it can sustain higher speeds), and it is stateless.

The stateless property matters to some users: there is no “connected” tunnel state. Data passes through the VPN, but if you switch from Wi-Fi to 4G, for instance, there is no need to stop / restart the service, because the remote endpoint does not keep track of the previous session with the previous IP. Likewise, there is no reconnection step after a suspend / resume.


Cherry on top would be a watcher script that disallows more than one WireGuard connection from being active at a time - if you click a WG connection in the applet, it would disconnect all older ones. This can probably be achieved by parsing the TIMESTAMP field of nmcli -f all con show. I’ll try to do this later

Done!

#!/usr/bin/bash

set -euo pipefail

# Watch for multiple active WireGuard connections and keep only the newest one.
while :; do
  # List active WireGuard connections as timestamp:name:type:active.
  active=$(nmcli -g timestamp,name,type,active connection | grep ':wireguard:yes$' || :)
  if [[ $active ]]; then
    active_count=$(wc -l <<< "$active")
    if [[ $active_count -gt 1 ]]; then
      # Sort by timestamp, drop the newest (last line), keep the names to disconnect.
      active_tokill=$(sort -fV <<< "$active" | head -n -1 | cut -d : -f 2)
      while IFS= read -r con; do
        nmcli connection down "$con" || :
      done <<< "$active_tokill"
    fi
  fi
  sleep 1
done
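To illustrate the selection step on sample data (the timestamps and connection names below are made up):

```shell
# Hypothetical nmcli output: timestamp:name:type:active, three active WG connections.
active='1700000100:wg-home:wireguard:yes
1700000300:wg-work:wireguard:yes
1700000200:wg-travel:wireguard:yes'

# Sort by timestamp, drop the newest (last line), keep only the names.
tokill=$(sort -fV <<< "$active" | head -n -1 | cut -d : -f 2)
echo "$tokill"
```

Here wg-home and wg-travel would be disconnected, while wg-work (the most recent timestamp) stays up.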

Thank you for this awesome guide @solene . I used it successfully on Qubes 4.2 with Fedora 39. I have some suggestions based on my experience with Qubes 4.2:

Regarding Prevent DNS leak:

As mentioned in post Wireguard VPN setup - #63 by apparatus, the ip qubes table already has a chain called dnat-dns.

As you mentioned in post Wireguard VPN setup - #89 by solene, there is a Qubes OS helper script. It is located at /usr/lib/qubes/qubes-setup-dnat-to-ns and is called by /etc/NetworkManager/dispatcher.d/qubes-nmhook. It looks for DNS configurations and updates the dnat-dns chain automatically.

I found that by adding the primary and secondary Quad9 DNS servers to the DNS line of the WireGuard config file (comma-separated), they were added to the dnat-dns chain correctly after restarting the connection. They can also be added manually in the GUI after the config file has been imported.
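For reference, such a DNS line in the imported WireGuard config would look like this (keys, address, and endpoint below are placeholders, not real values):

```ini
[Interface]
PrivateKey = <redacted>
Address = 10.2.0.2/32
# Quad9 primary and secondary, comma-separated; picked up on import
DNS = 9.9.9.9, 149.112.112.112

[Peer]
PublicKey = <redacted>
AllowedIPs = 0.0.0.0/0
Endpoint = 1.2.3.4:51820
```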

The resulting chain looked like this:

$ sudo nft list chain ip qubes dnat-dns
table ip qubes {
	chain dnat-dns {
		type nat hook prerouting priority dstnat; policy accept;
		ip daddr 10.139.1.1 udp dport 53 dnat to 9.9.9.9
		ip daddr 10.139.1.1 tcp dport 53 dnat to 9.9.9.9
		ip daddr 10.139.1.2 udp dport 53 dnat to 149.112.112.112
		ip daddr 10.139.1.2 tcp dport 53 dnat to 149.112.112.112
	}
}

I recommend configuring both the primary and secondary DNS servers so that both the 10.139.1.1 and 10.139.1.2 IPs are NATed.

I am concerned that creating a new nat chain of type “nat hook prerouting priority dstnat” might conflict with the existing dnat-dns chain, in the sense that the ordering between the two chains would be undefined, as explained in this topic: https://unix.stackexchange.com/questions/607358/packet-processing-order-in-nftables .

Therefore my suggestion is to specify the DNS servers in the WireGuard configuration file or the GUI instead of creating a new chain. Alternatively, the existing dnat-dns chain could probably be updated manually, but the Qubes OS helper script might clobber manual changes if DNS servers are configured in the GUI later.
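A manual update could be expressed as an nft script (a sketch only, with Quad9 as the example resolvers and a hypothetical file path; loaded with nft -f, and subject to being clobbered by the helper script as noted above):

```
# /rw/config/dnat-dns-update.nft (hypothetical path) -- load as root with: nft -f /rw/config/dnat-dns-update.nft
flush chain ip qubes dnat-dns
add rule ip qubes dnat-dns ip daddr 10.139.1.1 udp dport 53 dnat to 9.9.9.9
add rule ip qubes dnat-dns ip daddr 10.139.1.1 tcp dport 53 dnat to 9.9.9.9
add rule ip qubes dnat-dns ip daddr 10.139.1.2 udp dport 53 dnat to 149.112.112.112
add rule ip qubes dnat-dns ip daddr 10.139.1.2 tcp dport 53 dnat to 149.112.112.112
```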

I also noticed that there is no dnat-dns chain for IPv6, which is explained here: https://www.qubes-os.org/doc/networking/#limitations

Currently only IPv4 DNS servers are configured, regardless of ipv6 feature state. It is done this way to avoid reconfiguring all connected qubes whenever IPv6 DNS becomes available or not. Configuring qubes to always use IPv6 DNS and only fallback to IPv4 may result in relatively long timeouts and poor usability. But note that DNS using IPv4 does not prevent to return IPv6 addresses. In practice this is only a problem for IPv6-only networks.


Regarding the qvm-firewall commands, line (2) could be changed as follows to restrict the rule further:

qvm-firewall sys-vpn add proto=udp dsthost=1.2.3.4 dstports=51820 accept

The port might be different depending on the VPN provider.

Regarding disposable sys-vpn:

If you want sys-vpn to be a disposable VM, you can go through the configuration steps once, then copy the resulting file from /etc/NetworkManager/system-connections/ to another VM.

Then create a disposable template (you can call it sys-vpn-dvm) as explained here: https://www.qubes-os.org/doc/disposable-customization/ .

Then copy the *.nmconnection file you saved earlier to /rw/config/NM-system-connections/. You can also configure /rw/config/qubes-firewall-user-script in this disposable template, but the qvm-firewall commands should target the disposable VM itself (the disposable template may have no networking configured at all).

Then create a new disposable VM called sys-vpn based on sys-vpn-dvm. The result is that you get a clean slate every time you restart sys-vpn, just like with disposable sys-usb, sys-firewall etc. You could apply this method to a disposable sys-net too BTW, saving you the hassle of configuring a connection each time sys-net is restarted.
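The steps above might be scripted in dom0 roughly like this (a sketch only; the template name and label are assumptions, adjust to your setup):

```shell
# In dom0: create the disposable template based on an existing template (name assumed).
qvm-create --class AppVM --template fedora-39 --label red sys-vpn-dvm
qvm-prefs sys-vpn-dvm template_for_dispvms True

# Create the named disposable sys-vpn based on it and let it provide networking.
qvm-create --class DispVM --template sys-vpn-dvm --label red sys-vpn
qvm-prefs sys-vpn provides_network True
qvm-prefs sys-vpn netvm sys-net
```

These commands only run in dom0, so they are shown for reference rather than as something to paste blindly.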

The MTU rule could be added to the ip6 table as well, since that is already done for the killswitch configuration.

First, solene, thank you for posting this setup. It works equally well on Debian and Fedora on R4.2.1.

Second, FYI, ProtonVPN recommends using their own DNS servers instead of another provider’s.


This is an interesting hint, thanks for mentioning it. At least when I configure the VPN via NetworkManager, NAT rules are automatically prepended to the dnat-dns chain:

table ip qubes {
        chain dnat-dns {
                type nat hook prerouting priority dstnat; policy accept;
                ip daddr 10.139.1.1 udp dport 53 dnat to <my-vpn-dns-gateway>
                ip daddr 10.139.1.1 tcp dport 53 dnat to <my-vpn-dns-gateway>
                ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.1
                ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.1
        }
}

The above config should be fine, as rules earlier in the chain take precedence (nft evaluates rules within a chain top-down, and the first matching dnat wins).

How I understand it: if you block DNS with Qubes firewall rules, 10.139.1.1 and 10.139.1.2 can no longer be used to forward DNS traffic within the Qubes virtual network, hence no DNS leaks are possible. In theory (and if configured differently than shown in this guide), there might be an attempt to contact 10.139.1.[1,2] through the VPN network, which obviously fails.

Before modifying VPN config files, I would probably just flush the chain in the qubes-firewall script beforehand: nft flush chain ip qubes dnat-dns. But this shouldn’t be needed with Qubes firewall + NetworkManager, see also the hint above.

Does this guide also work with openvpn under 4.2?

No, this guide is meant for WireGuard, not OpenVPN.

Although, most steps should be pretty similar except the import step. On Fedora, you may have issues with SELinux when adding a certificate though; I did not have time to look for a proper fix, especially since I’m not using OpenVPN.

Nope. From my experience, OpenVPN does not autoconnect by default, and even if you enable autoconnect, Qubes interferes with that setting and resets it on reboot. So you have to script the autoconnect, which makes it much more complex to actually get working…

weird, it works fine with wireguard :thinking:

it may be that NetworkManager tries to connect to OpenVPN before it has network access, and it fails. This is not an issue with WireGuard as it’s stateless, so this may explain why it’s specific to OpenVPN :woman_shrugging: just a wild guess

Nope. At least last time I checked, Qubes’ networking setup overrides NetworkManager’s config directory, where the autoconnect setting was stored. Somehow WireGuard does not use that, which makes this setup just magic :magic_wand:

I tried testing the killswitch: I stopped NetworkManager inside sys-vpn, but I still somehow had a connection through sys-vpn and my IP didn’t change. How does that even work? And how can I test the killswitch? Could you also provide an nft rule to block ICMP/ping, to add to the qubes-firewall-user-script? Also, what is the difference between this guide and this one: [Tutorial 4.2&4.1] Mullvad Wireguard with Qubes - #53 by qubes-neurotic ?

Great guide. A few questions:

Why wouldn’t one just use the ProtonVPN GUI in each individual AppQube? This is what I’m currently doing and it works fine. It allows me to set up a separate VPN connection for each qube pretty simply. That way they don’t all use the same IP, which seems better for privacy purposes.

Am I correct in assuming that if one were to use a sys-vpn, every Qube connected to it would be passing traffic through the same VPN connection and thus have the same IP if they’re running at the same time? And the only way to counteract this would be to set up a separate sys-vpn for each Qube?

What are the drawbacks of my current set up? I don’t see the benefit to setting up a separate sys-vpn Qube.

yes

security, and the ability to reuse one VPN connection for multiple qubes; otherwise one may quickly reach the provider’s device limit

yes

could you explain what you did exactly to try it?

I ran sudo systemctl stop NetworkManager.service inside sys-vpn and then pinged a website in another AppVM that had sys-vpn as its NetVM, and it still had network access with the same IP as sys-vpn provided.