VPN NetVM: No Internet in Downstream Qubes (Amnezia VPN, WireGuard-based)

Context:
I’m trying to use a dedicated NetVM (sys-amnezia-standalone) in Qubes OS 4.2 to route traffic from AppVMs and disposables through Amnezia VPN (which is WireGuard-based). Amnezia is installed in the standalone qube and works correctly: it connects and masks the VPN protocol as expected.

Setup:

  • sys-amnezia-standalone (StandaloneVM) running Amnezia VPN client GUI.
  • VPN uses WireGuard (with masking).
  • Disposable AppVMs set sys-amnezia-standalone as their NetVM (assigned from dom0; see the command after this list).
  • Firewall in sys-amnezia-standalone initially allowed only the VPN server IP (now relaxed to allow all outgoing).
  • VPN interface name: amn0.
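
For reference, the NetVM assignment is done from dom0 with qvm-prefs; the qube name below is just a placeholder for my disposable template:

qvm-prefs my-dvm-template netvm sys-amnezia-standalone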

Problem:

  • VPN connects successfully in sys-amnezia-standalone, and it has full internet access.
  • Any disposable or AppVM using it as its NetVM has no internet access at all; even DNS doesn’t resolve.

What I tried:

  • I applied the recommended firewall restrictions (based on the amn0 interface and the VPN server IP) and also tested with the firewall fully open (all outgoing allowed); same result either way.
  • DNS to 10.139.1.1 is allowed in iptables (see the example rules after this list).
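
For completeness, the kind of firewall rules I mean, as a rough sketch rather than my exact ruleset (vif+ matches the downstream qubes’ interfaces):

# force traffic forwarded from downstream qubes out through the VPN interface only
iptables -A FORWARD -i vif+ -o amn0 -j ACCEPT
iptables -A FORWARD -i vif+ -j DROP
# explicitly allow DNS to the Qubes virtual DNS address
iptables -I FORWARD -d 10.139.1.1 -p udp --dport 53 -j ACCEPT
iptables -I FORWARD -d 10.139.1.1 -p tcp --dport 53 -j ACCEPT
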
What I suspect:

  • Either sys-amnezia-standalone is not properly forwarding traffic from client VMs over the VPN tunnel, or something about Qubes networking isolation is interfering.
  • Since the VPN client is GUI-based and auto-manages the tunnel, it’s unclear whether Qubes’ routing tables or NAT rules end up mismatched after the VPN connects.

Questions:

  1. What’s the correct way to set up a GUI VPN client (like Amnezia) in a NetVM and have other qubes use it for networking?
  2. Is additional NAT or routing setup required manually?
  3. Could Qubes firewall isolation interfere with the masked WireGuard tunnel?

Would appreciate any insights or suggestions on how to debug or fix this.

Thanks!

Hello!

How did you manage to run Amnezia VPN?
When I try to bring up the interface with the awg-quick CLI, I get an error: Unknown device type, protocol not supported.
The amneziawg-go utility starts the interface, but the configuration files are ignored…

Thanks!

It’s a standalone, but did you use a template as a base?

If not, it’s almost certainly missing the Qubes tools that handle the network. In that case you would have to turn sys-amnezia-standalone into a router yourself, with a firewall rule for NAT; this is set up automatically when a Qubes-OS-compatible OS runs in a qube.
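
A minimal sketch of what that would mean inside the standalone, assuming the VPN interface is the amn0 mentioned above and the qube has no Qubes networking scripts at all:

# enable IPv4 forwarding
sysctl -w net.ipv4.ip_forward=1
# NAT everything leaving through the VPN tunnel
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat; }'
nft add rule ip nat postrouting oifname amn0 masquerade

On a qube built from a Qubes template, the qubes nftables table already contains an equivalent masquerade rule, so none of this is needed.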

@red_alert Hi, thanks for the question!

I used the Amnezia VPN desktop client (the graphical GUI version) in a Fedora-based standalone VM. I installed it from their official binary installer.

After installing the required dependencies (e.g. libxcb-cursor), the GUI launched fine. It successfully imported a WireGuard config with obfuscation, and the VPN connected. The virtual interface it created was named amn0, and routing/DNS appeared to work correctly inside the Amnezia qube itself.

However, I didn’t use awg-quick or amneziawg-go from the CLI directly. It seems the GUI manages the tunnel differently, possibly with custom masking logic, so your issues may be related to using the CLI without that layer.

Unfortunately, I no longer have access to that machine at the moment, and I’ve already deleted the test qube, so I won’t be able to investigate further until I can reproduce the setup again. I’ll update when I get a chance to retry.

Let me know what base system you’re using, and I’d be happy to compare setups when I can revisit it.

Yes. That’s a great point, and thank you for bringing it up.

I created the standalone qube (sys-amnezia-standalone) from a Fedora-based template, so Qubes tools should have been present. The Amnezia GUI client was installed via its official binary installer, and it connected successfully, creating a VPN interface named amn0.

The issue I ran into was that although the VPN was working inside sys-amnezia-standalone, AppVMs that used it as a NetVM had no internet access, not even DNS. I suspected the problem was with NAT or forwarding not being correctly restored after the VPN tunnel came up.

At the moment, I’ve deleted that qube and can’t test further until I have access to the machine again. Once I can reproduce the setup, I’ll try rerunning /usr/lib/qubes/qubes-setup-dnat-to-ns or applying explicit NAT rules to handle forwarding from AppVMs through the amn0 interface.
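
What I have in mind for the explicit rules is roughly this, run in sys-amnezia-standalone after the VPN is up (an untested sketch that assumes the Amnezia GUI leaves the standard qubes nftables table in place and only adds its own tunnel):

# regenerate the Qubes DNS DNAT rules from whatever resolver the VPN wrote to /etc/resolv.conf
sudo /usr/lib/qubes/qubes-setup-dnat-to-ns
# anti-leak: forwarded traffic from downstream qubes must not leave via eth0, only via the tunnel
sudo nft add rule ip qubes custom-forward oifname eth0 drop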

Any tips or known-good iptables/NAT setups for custom VPNs in NetVMs would be very helpful when I’m able to try again.

Thanks again!

please share the output of

nft list ruleset

in your vpn qube

@cubicpyramid hello!

I managed to set up Amnezia VPN using the console utilities, but just like you, I only have internet access from inside the VPN qube itself. We’re doing something wrong.

Hello! I am not OP, but here is my output:

table ip qubes {
	set downstream {
		type ipv4_addr
		elements = { 10.137.0.27 }
	}

	set allowed {
		type ifname . ipv4_addr
		elements = { "vif12.0" . 10.137.0.27 }
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip saddr @downstream counter packets 0 bytes 0 drop
	}

	chain antispoof {
		iifname . ip saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 28 bytes 1540 drop
		iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 meta l4proto icmp accept
		iif "lo" accept
		iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
		counter packets 60 bytes 32597
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 18 bytes 720 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}

	chain dnat-dns {
		type nat hook prerouting priority dstnat; policy accept;
		ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
		ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
	}
}
table ip6 qubes {
	set downstream {
		type ipv6_addr
	}

	set allowed {
		type ifname . ipv6_addr
	}

	chain antispoof {
		iifname . ip6 saddr @allowed accept
		counter packets 11 bytes 712 drop
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip6 saddr @downstream counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain _icmpv6 {
		meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
		icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
		accept
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 goto _icmpv6
		iif "lo" accept
		ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
		meta l4proto ipv6-icmp accept
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}
}
table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
		ip saddr 10.137.0.27 jump qbs-10-137-0-27
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifname != "vif*" ip saddr 10.137.0.27 drop
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
		oifname != "vif*" ip daddr 10.137.0.27 drop
	}

	chain qbs-10-137-0-27 {
		accept
		reject with icmp admin-prohibited
	}
}
table ip6 qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
	}
}
table inet qubes-nat-accel {
	flowtable qubes-accel {
		hook ingress priority filter
		devices = { eth0, lo, vif12.0 }
	}

	chain qubes-accel {
		type filter hook forward priority filter + 5; policy accept;
		meta l4proto { tcp, udp } iifgroup 2 oifgroup 1 flow add @qubes-accel
		counter packets 1936 bytes 259137
	}
}
table ip6 wg-quick-wg0 {
	chain preraw {
		type filter hook prerouting priority raw; policy accept;
		iifname != "wg0" ip6 daddr <priv ipv6*.*.*.*> fib saddr type != local drop
	}

	chain premangle {
		type filter hook prerouting priority mangle; policy accept;
		meta l4proto udp meta mark set ct mark
	}

	chain postmangle {
		type filter hook postrouting priority mangle; policy accept;
		meta l4proto udp meta mark 0x0000ca6c ct mark set meta mark
	}
}
table ip wg-quick-wg0 {
	chain preraw {
		type filter hook prerouting priority raw; policy accept;
		iifname != "wg0" ip daddr <priv ipv4*.*.*.*> fib saddr type != local drop
	}

	chain premangle {
		type filter hook prerouting priority mangle; policy accept;
		meta l4proto udp meta mark set ct mark
	}

	chain postmangle {
		type filter hook postrouting priority mangle; policy accept;
		meta l4proto udp meta mark 0x0000ca6c ct mark set meta mark
	}
}

I have good news: I found a way to make amnezia-sys-vpn work from other qubes. In each qube that uses amnezia-sys-vpn as its network provider, you need to write nameserver 1.1.1.1 to /etc/resolv.conf. This is most likely not an ideal solution, but we need to figure out how to do it right.
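
Concretely, after the downstream qube starts I run this (1.1.1.1 is just the public resolver I happened to pick):

echo 'nameserver 1.1.1.1' | sudo tee /etc/resolv.conf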

Does ping 1.1.1.1 work in that setup, but not ping qubes-os.org?

In that case, it’s DNS that is not working. It’s possible to configure the VPN qube to redirect all DNS queries to an arbitrary DNS server.

Does ping 1.1.1.1 work in that setup, but not ping qubes-os.org?

Yes, that’s exactly it.

Only one thing helps: writing nameserver 1.1.1.1 to resolv.conf. The problem is that you need to do it every time you start the qube.

Do you know how to make this setting permanent? Or maybe there is a better solution?

I think these issues are related:

Last reply:

Instead of manually configuring the DNS in the qube, you should alter the nftables rules in the netvm.
By default the rules are in the nat table, in the PR-QBS chain:

	chain PR-QBS {
		meta l4proto udp ip daddr 10.139.1.1 udp dport 53 counter packets 0 bytes 0 dnat to 10.139.1.1
		meta l4proto tcp ip daddr 10.139.1.1 tcp dport 53 counter packets 0 bytes 0 dnat to 10.139.1.1
		meta l4proto udp ip daddr 10.139.1.2 udp dport 53 counter packets 0 bytes 0 dnat to 10.139.1.2
		meta l4proto tcp ip daddr 10.139.1.2 tcp dport 53 counter packets 0 bytes 0 dnat to 10.139.1.2
	}

But I don’t see a PR-QBS chain in the output of the nft list ruleset command, only a similar chain, dnat-dns.

I will be glad of any help. In my country, amnezia vpn is the only way to bypass blocking.

The new chain name is dnat-dns. It changed when Qubes switched from iptables to nftables.

Try running these commands in your VPN qube to see if DNS works in the client qubes.

sudo nft flush chain qubes dnat-dns
sudo nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr {10.139.1.1, 10.139.1.2} th dport 53 dnat to 1.1.1.1

If it does, put both commands in /rw/config/rc.local to make them persistent.

Also, since this uses WireGuard, you should add this rule to address MTU issues:

sudo nft add rule ip qubes custom-forward tcp flags syn / syn,rst tcp option maxseg size set rt mtu
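
For reference, the whole thing in /rw/config/rc.local of the VPN qube could then look roughly like this (1.1.1.1 as the upstream resolver is just an example; rc.local runs as root at qube start, so sudo is not needed there):

#!/bin/sh
# redirect DNS queries from downstream qubes to a resolver reachable over the VPN
nft flush chain ip qubes dnat-dns
nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to 1.1.1.1
# clamp TCP MSS to the path MTU to avoid WireGuard MTU problems for forwarded traffic
nft add rule ip qubes custom-forward tcp flags syn / syn,rst tcp option maxseg size set rt mtu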

@DVM
It works! Thank you very much
