Configuring a ProxyVM VPN Gateway

You can enable logging in the sys-vpn firewall and check the logs:
https://wiki.nftables.org/wiki-nftables/index.php/Logging_traffic
Or enable rule counters to see how packets flow through sys-vpn.
Or use tcpdump to inspect the traffic.
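For example, a minimal sketch (the qubes table and custom-forward chain are the Qubes 4.2 defaults shown later in this thread; interface names may differ in your setup):

# log forwarded packets leaving via the uplink; view with "sudo dmesg" or "journalctl -f"
sudo nft insert rule ip qubes custom-forward oifname "eth0" log prefix "sys-vpn-fwd: " counter

# or watch the tunnel and uplink interfaces directly
sudo tcpdump -ni tun0
sudo tcpdump -ni eth0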

Does it work with proto tcp?
Did you check the IP to make sure that the VPN is working?

Yes, when I use sys-firewall as the NetVM for sys-vpn, I can open a website that checks my IP and it shows the VPN IP. That works. But as soon as I set sys-whonix as the NetVM for sys-vpn, I can't open a website anymore; it just loads and loads.
And I have changed the line to "proto tcp", but it's still not working.
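(Context: Tor carries no UDP, so when sys-whonix is upstream the VPN must use a TCP transport. A minimal sketch of the relevant .ovpn lines, with a placeholder server address:)

proto tcp
remote vpn.example.com 443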

Did you try to restart sys-vpn after changing its net qube to sys-whonix?

Ok, after a restart I can open a website, and the IP check tells me it is the VPN IP. Thanks, I thought I could change NetVMs on the fly via Qubes Manager.
But I am not sure if sys-vpn gets its network from sys-whonix: if I type "curl https://ip.me" in the sys-vpn terminal, I get "Could not resolve host: ip.me".

Run this command (the firewall rules only allow the qvpn group out to eth0, hence the sg qvpn wrapper):

sudo sg qvpn -c "curl https://ip.me"

Inside the sys-vpn terminal, right? Then I get the same answer.

Yes.

Try this command:

sudo sg qvpn -c "curl https://1.1.1.1"

or

sudo sg qvpn -c "curl https://9.9.9.9"

With the first command, after one or two seconds I just get an empty input line with no further information; with the second command, it says "not found" before the empty input line.

Then the network is working; it's just an issue with DNS resolution in sys-vpn.
What's in your /etc/resolv.conf in sys-vpn?

In resolv.conf there is:

nameserver 10.139.1.1
nameserver 10.139.1.2

I don't know why DNS resolution is not working for you.
You can use firewall counters/logs or tcpdump in sys-net and sys-whonix to check how the DNS packets are handled.
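For example (a sketch; port 53 matches both UDP and TCP DNS):

# in sys-vpn: do the queries leave toward the upstream qube?
sudo tcpdump -ni eth0 port 53

# in sys-whonix or sys-net: do they arrive, and does anything answer?
sudo tcpdump -ni any port 53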

Hm, ok, thanks for your help, but this is getting too hard for me, and I don't want to spend more and more time on it. I am happy that it works with a normal VPN, and if I need Tor, I'll just use Tor on its own.
Thanks again.

Hey guys, I have a similar issue.
My setup looks like this:
Tor → VPN1 → VPN2 → Clearnet

I am well aware of the dangers of using a VPN this way; I've been reading the documentation for two weeks non-stop. This is a specific need: VPN2 is disposable, and VPN1 is also disposable, it just lives a little bit longer. They serve just a few VMs, while the majority of my work happens using only sys-whonix.

Now my issue.
I have Qubes 4.2 and I have set it up using the iptables and CLI scripts with the adjustments etaz provided, including the later ones in the comments.
In my sys-whonix I have disabled the transparent gateway and I am strictly using stream isolation.

For VPN1, in the .ovpn file I have set socks-proxy to my sys-whonix and it's working fine, while VPN2 does not have it.
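(For reference, a hypothetical sketch of the relevant lines in VPN1's .ovpn; the gateway address and SOCKS port are placeholders for whatever your sys-whonix stream-isolation setup exposes:)

proto tcp
socks-proxy 10.137.0.x 9050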

For both of them, resolv.conf has the virtual DNS (10.139.x.x).
When I echo the DNS servers that qubes-vpn-handler sets vpn_dns to, it shows something like 192.168.x.x. My understanding is that it is pulling the VPN's nameservers.

The connectivity is fine; just the DNS is broken, similar to the previous comments. I have to manually change resolv.conf in VPN2 to some public DNS server (let's say 8.8.8.8), and do the same on the VM that connects to it to get things working (the machine is a Windows 10 standalone VM).

I understand that thanks to sys-whonix I am still protected, but every time I do that I have a very uncomfortable feeling.

I need help to properly set this up so I can be sure it works as expected.
My understanding was that I could chain these qubes without issues, but that does not seem to be the case.
Here are my files. Can anyone help me get this right and understand the solution? I suspect something needs to be adjusted in the firewall rules, but I am not well versed enough to make those changes myself.

qubes-firewall-user-script:

#!/bin/bash
#    Block forwarding of connections through the upstream network device
#    (in case the VPN tunnel breaks):
#    Prevent the qube from forwarding traffic outside of the VPN
nft insert rule qubes custom-forward oifname eth0 counter drop
nft insert rule ip6 qubes custom-forward oifname eth0 counter drop
nft insert rule qubes custom-forward iifname eth0 counter drop
nft insert rule ip6 qubes custom-forward iifname eth0 counter drop


#    Alternative: create the output chain with a drop policy instead:
#nft 'add chain qubes output { type filter hook output priority 0; policy drop; }'
#    Create the output chain with an accept policy; non-VPN clearnet traffic is dropped below
nft 'add chain qubes output { type filter hook output priority 0; policy accept; }'
#iptables -P OUTPUT ACCEPT
#iptables -F OUTPUT

#    Add the `qvpn` group to system, if it doesn't already exist
if ! grep -q "^qvpn:" /etc/group ; then
     groupadd -rf qvpn
     sync
fi
sleep 2s

#    Block non-VPN traffic to clearnet
nft insert rule ip qubes output oifname eth0 counter drop
#iptables -I OUTPUT -o eth0 -j DROP


#    Allow traffic from the `qvpn` group to the uplink interface (eth0);
#    Our VPN client will run with group `qvpn`.
nft insert rule ip qubes output oifname eth0 skgid qvpn accept
#iptables -I OUTPUT -p all -o eth0 -m owner --gid-owner qvpn -j ACCEPT
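(A quick sanity check after the qube starts, using standard nft list commands against the chains this script populates:)

sudo nft list chain ip qubes custom-forward
sudo nft list chain ip qubes output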

qubes-vpn-handler.sh:

#!/bin/bash
set -e
export PATH="$PATH:/usr/sbin:/sbin"

case "$1" in

up)
# To override DHCP DNS, assign DNS addresses to 'vpn_dns' env variable before calling this script;
# Format is 'X.X.X.X  Y.Y.Y.Y [...]'
if [[ -z "$vpn_dns" ]] ; then
    # Parses DHCP foreign_option_* vars to automatically set DNS address translation:
    for optionname in ${!foreign_option_*} ; do
        option="${!optionname}"
        unset fops; fops=($option)
        if [ "${fops[1]}" == "DNS" ] ; then vpn_dns="$vpn_dns ${fops[2]}" ; fi
    done
fi


nft flush chain ip qubes dnat-dns
#nft add chain qubes nat { type nat hook prerouting priority dstnat\; }
#iptables -t nat -F PR-QBS
if [[ -n "$vpn_dns" ]] ; then
    # Set DNS address translation in firewall:
    for addr in $vpn_dns; do
        nft add rule qubes dnat-dns iifname == "vif*" tcp dport 53 dnat to "$addr"
        nft add rule qubes dnat-dns iifname == "vif*" udp dport 53 dnat to "$addr"
        #iptables -t nat -A PR-QBS -i vif+ -p udp --dport 53 -j DNAT --to $addr
        #iptables -t nat -A PR-QBS -i vif+ -p tcp --dport 53 -j DNAT --to $addr
    done
#    su - -c 'notify-send "$(hostname): LINK IS UP." --icon=network-idle' user
fi

;;
down)
#su - -c 'notify-send "$(hostname): LINK IS DOWN !" --icon=dialog-error' user

# Restart the VPN automatically
#sleep 5s
#sudo /rw/config/rc.local
;;
esac
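(For context, this handler is typically wired into the .ovpn file roughly like this; the path is an assumption, adjust it to wherever the script actually lives. OpenVPN only runs one up and one down script, which is why the DNS handling sits inside this single handler:)

script-security 2
up '/rw/config/vpn/qubes-vpn-handler.sh up'
down '/rw/config/vpn/qubes-vpn-handler.sh down'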

Is the 192.168.x.x DNS server accessible?
Maybe it's just not working; in that case you can override it in qubes-vpn-handler.sh:
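For example, with OpenVPN's setenv directive in the .ovpn file, which exports the variable to the up/down script environment (the addresses here are just examples):

setenv vpn_dns "1.1.1.1 9.9.9.9"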

Yes, 192.168.x.x is accessible, and if I set the nameserver manually in resolv.conf, it works.
The same goes for a public DNS.

I have modified the script by adding echo "$vpn_dns", echo "$addr", and echo "$option" in the respective places to see what values they get.
When I pass dhcp-option lines in the ovpn file, qubes-vpn-handler correctly recognizes both the nameservers I set and the ones the VPN server sets.
Let's say I have this in my ovpn file:

dhcp-option DNS 8.8.8.8
dhcp-option DNS 8.8.4.4

The output looks like this (these lines come from the custom echo statements I added to qubes-vpn-handler):

Link is UP;
Foreign option: dhcp-option DNS 8.8.8.8
Setting Nameserver:  8.8.8.8
Foreign option: dhcp-option DNS 8.8.4.4
Setting Nameserver:  8.8.8.8 8.8.4.4
Foreign option: dhcp-option DNS 192.168.x.x
Setting Nameserver:  8.8.8.8 8.8.4.4 192.168.x.x
Address: 8.8.8.8
Address: 8.8.4.4
Address: 192.168.x.x
2024-08-30 08:04:43 Initialization Sequence Complete

/etc/resolv.conf still has the virtual nameservers that don't work:

nameserver 10.139.1.1
nameserver 10.139.1.2

Here's the nft ruleset after the initialization is complete:

$ sudo nft list ruleset
table ip qubes {
	set downstream {
		type ipv4_addr
	}

	set allowed {
		type ifname . ipv4_addr
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip saddr @downstream counter packets 0 bytes 0 drop
	}

	chain antispoof {
		iifname . ip saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 meta l4proto icmp accept
		iif "lo" accept
		iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
		iifname "eth0" counter packets 0 bytes 0 drop
		oifname "eth0" counter packets 0 bytes 0 drop
	}

	chain output {
		type filter hook output priority filter; policy accept;
		oifname "eth0" meta skgid 993 accept
		oifname "eth0" counter packets 145 bytes 9046 drop
	}

	chain dnat-dns {
		type nat hook prerouting priority dstnat; policy accept;
		iifname "vif*" tcp dport 53 dnat to 8.8.8.8
		iifname "vif*" udp dport 53 dnat to 8.8.8.8
		iifname "vif*" tcp dport 53 dnat to 8.8.4.4
		iifname "vif*" udp dport 53 dnat to 8.8.4.4
		iifname "vif*" tcp dport 53 dnat to 192.168.x.x
		iifname "vif*" udp dport 53 dnat to 192.168.x.x
	}
}
table ip6 qubes {
	set downstream {
		type ipv6_addr
	}

	set allowed {
		type ifname . ipv6_addr
	}

	chain antispoof {
		iifname . ip6 saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip6 saddr @downstream counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain _icmpv6 {
		meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
		icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
		accept
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 goto _icmpv6
		iif "lo" accept
		ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
		meta l4proto ipv6-icmp accept
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
		iifname "eth0" counter packets 0 bytes 0 drop
		oifname "eth0" counter packets 0 bytes 0 drop
	}
}
table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifname != "vif*" ip saddr 10.137.0.xx drop
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
		oifname != "vif*" ip daddr 10.137.0.xx drop
	}
}
table ip6 qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
	}
}

Overriding vpn_dns does not work either; I have tried passing values and modifying the script directly. It is possible that I messed up and set wrong firewall rules when compiling the info from this topic, but I cannot spot the problem.

Are you checking if DNS works in the VPN qube itself?
Check it in the qubes connected to the VPN qube.
If you don't plan to run anything in the VPN qube itself that requires DNS resolution, then you don't need to update /etc/resolv.conf. The DNS requests coming from the qubes connected to the VPN qube will be redirected to the correct DNS servers by the firewall rules in the dnat-dns chain.
If you want OpenVPN to update /etc/resolv.conf, then you need to install resolvconf and use the update-resolv-conf up/down script:
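(A typical .ovpn snippet, assuming the script is installed at /etc/openvpn/update-resolv-conf as in common distro packaging. Note that OpenVPN accepts only one up and one down script, so you would have to chain it with qubes-vpn-handler.sh if you use both:)

script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf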

Thank you very much!
Yes, I was checking DNS in the VPN qube itself, and indeed it works in the connected AppQubes without manually changing the nameservers!

Thanks for clarifying!