Understanding VPN ProxyVM

Hello,

I need to use proprietary VPN software (GlobalProtect) to connect to my work network. To connect, this VPN requires client software which is not compatible with fedora-40 (the base template for my work qube), but is compatible with debian-12.

So my plan was to create a ProxyVM qube based on debian-12, with all the proprietary software installed there, and then just set my work qube NetVM to that ProxyVM.

I’ve managed to install all the software in my ProxyVM and connect to my company’s VPN. I can now browse VPN-protected URLs from inside the ProxyVM.

Unfortunately, when I point my work qube’s NetVM to that ProxyVM, the work-qube loses all internet connectivity. It can’t access VPN-protected resources or open-internet resources.

Note: I do have “provides network” selected for ProxyVM.

Note: the internet works fine in work-qube (with NetVM = ProxyVM) before I connect to the GlobalProtect VPN in the ProxyVM qube.

So it seems that when I connect, GlobalProtect changes something in the network configuration which blocks traffic from downstream qubes.

This is what I see after connecting to GP VPN:

user@proxy-vm:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group 1 qlen 1000
    link/ether 00:16:3e:5e:6c:00 brd ff:ff:ff:ff:ff:ff
3: gpd0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
    link/none

Any ideas what might be blocking the traffic and how to debug / fix that?

Kind regards,

Qubes 4.2
work-vm based on fedora-40
proxy-vm based on debian-12

PS. I have tried looking for an answer in other VPN-related topics, but most of them assume you are using the openvpn client and contain iptables instructions. In my case there is no openvpn configuration, and debian-12 does not include iptables by default. I’m not sure if it was replaced by something else. If you could help me understand the relevant part of the networking stack, that would be great.

Hi, this sounds like a DNS issue :thinking: Can you try to ping an IP (not a hostname) from the qube behind the proxyvm?

First, I would check if you really need that proprietary client…
NetworkManager’s openconnect VPN module should handle GlobalProtect VPNs.
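If you want to rule out the GUI plumbing too, openconnect can be tested straight from the command line; a rough sketch, with vpn.example.com standing in for your company’s GlobalProtect portal:

# --protocol=gp selects openconnect's GlobalProtect mode.
# vpn.example.com is a placeholder; use your company's portal address.
sudo openconnect --protocol=gp vpn.example.com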

@solene

[user@avm-test ~]$ ping 4.4.4.4
PING 4.4.4.4 (4.4.4.4) 56(84) bytes of data.
^C
--- 4.4.4.4 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7173ms

@Zrubi
So, there was an alternative setup with openconnect. I’ve created another ProxyVM (let’s call it proxy-openconnect) and followed the setup instructions – I managed to successfully log in to my company’s VPN. I can now browse VPN-protected and open-internet resources from the proxy-openconnect VM.
However, when I tried to use it as the NetVM for my work-qube, the same issue happens - the work-qube has no access to any resources.

Is there some additional configuration required to make VPN proxyVm based on openconnect work?

Nope, this should just work.
Also, I suggested the openconnect version to eliminate the unknown proprietary client and its possible issues.

You might have some more general problem which might not be relevant to the VPN… Can you try to use your proxyvm without the VPN connected?
Just to see if the proxy-vm (and the relevant firewall rules) alone are working.

Then check if the AppVM you connect has no firewall restrictions applied.

EDIT:
I just read your initial post again and see you have already tried this…

I do not have any openconnect-based VPN to try, only OpenVPN.
But here is what you would need to check: the firewall settings/rules while connected and when it’s not.

(Keep in mind that any recent distribution is already using nftables instead of iptables.)

you would need to check: the firewall settings/rules while connected and when it’s not.

How can I check those? From your post I assume I need to check something with nftables, but I’m not really familiar with that tool - I don’t know what I’m looking for.

Based on a quick Google search on how to use nftables, I am comparing the VPN-disconnected proxy (work qube has connectivity to the open internet) with the VPN-connected proxy (work qube has no connectivity).
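For reference, this is roughly how I’m comparing them (the file names are arbitrary):

sudo nft list ruleset > /tmp/nft-vpn-disconnected.txt
# ... connect the VPN in the proxy qube ...
sudo nft list ruleset > /tmp/nft-vpn-connected.txt
diff /tmp/nft-vpn-disconnected.txt /tmp/nft-vpn-connected.txt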

sudo nft list tables returns the same output for both the VPN-connected and VPN-disconnected scenarios:

table ip qubes
table ip6 qubes
table ip qubes-firewall
table ip6 qubes-firewall
table inet qubes-nat-accel

sudo nft list table ip qubes-firewall shows a slight difference:

table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
		ip saddr 10.137.0.36 jump qbs-10-137-0-36
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		# the VPN-connected qube has the following extra line:
		iifname != "vif*" ip saddr 10.137.0.36 drop
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
		# the VPN-connected qube has the following extra line:
		oifname != "vif*" ip daddr 10.137.0.36 drop
	}

	chain qbs-10-137-0-36 {
		accept
		reject with icmp admin-prohibited
	}
}

Could it be that those extra lines are causing the issue? If so, how do I remove them?

EDIT: I followed the nftables wiki and removed those extra lines (see the commands below). I retried connecting from the work qube (via the proxy) and it is still not working.
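For reference, this is roughly what I did: list the chain with rule handles, then delete by handle (the handle numbers below are examples; use whatever nft prints on your system):

sudo nft -a list chain ip qubes-firewall prerouting
sudo nft delete rule ip qubes-firewall prerouting handle 42
sudo nft -a list chain ip qubes-firewall postrouting
sudo nft delete rule ip qubes-firewall postrouting handle 43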

What’s the output of this command in VPN qube with VPN connected?

sudo nft list ruleset

@apparatus

sudo nft list ruleset in the VPN-connected proxy VM:

table ip qubes {
	set downstream {
		type ipv4_addr
		elements = { 10.137.0.36 }
	}

	set allowed {
		type ifname . ipv4_addr
		elements = { "vif317.0" . 10.137.0.36 }
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip saddr @downstream counter packets 0 bytes 0 drop
	}

	chain antispoof {
		iifname . ip saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 meta l4proto icmp accept
		iif "lo" accept
		iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 1 bytes 142 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}

	chain dnat-dns {
		type nat hook prerouting priority dstnat; policy accept;
		ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
		ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
	}
}
table ip6 qubes {
	set downstream {
		type ipv6_addr
	}

	set allowed {
		type ifname . ipv6_addr
	}

	chain antispoof {
		iifname . ip6 saddr @allowed accept
		counter packets 25 bytes 1592 drop
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip6 saddr @downstream counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain _icmpv6 {
		meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
		icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
		accept
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 goto _icmpv6
		iif "lo" accept
		ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
		meta l4proto ipv6-icmp accept
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}
}
table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
		ip saddr 10.137.0.36 jump qbs-10-137-0-36
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
	}

	chain qbs-10-137-0-36 {
		accept
		reject with icmp admin-prohibited
	}
}
table ip6 qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
	}
}
table inet qubes-nat-accel {
	flowtable qubes-accel {
		hook ingress priority filter
		devices = { eth0, lo, tun0, vif317.0 }
	}

	chain qubes-accel {
		type filter hook forward priority filter + 5; policy accept;
		meta l4proto { tcp, udp } iifgroup 2 oifgroup 1 flow add @qubes-accel
		counter packets 6191 bytes 380674
	}
}

Note: the above output was taken after removing those two extra lines mentioned in my previous post:

table ip qubes-firewall {
  chain prerouting {
    iifname != "vif*" ip saddr 10.137.0.36 drop
  }
  chain postrouting {
    oifname != "vif*" ip daddr 10.137.0.36 drop
  }
}

Not sure if I should bring those back, given that removing them did not help.

This could be related, but we’d need to check a tcpdump capture to know what that invalid packet was.
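Something along these lines in the VPN qube would show it (vif317.0 is the downstream interface from your ruleset; the name changes on every qube start, so check ip link first):

# traffic arriving from the work qube
sudo tcpdump -ni vif317.0
# traffic leaving via the VPN tunnel (gpd0 for GlobalProtect, tun0 for openconnect)
sudo tcpdump -ni tun0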

@apparatus I’ve tried to trigger more of those invalid packets by running dig google.com from the work qube (while connected via the proxy VM) – my assumption was that the counter would go up if it’s indeed counting packets from the work qube. But it didn’t. I also tried ping 4.4.4.4 and opening google.com in Firefox. The invalid counter stayed the same.

As you mentioned, the normal proxyvm works fine without the VPN, right?

So your VPN client does some modification that might break the routing and/or firewall rules.

But I have not seen any NetworkManager-related VPNs break these things so far.
Some of them, however, need you to allow running a custom script (CSD)… Can you check if you have that?

And/or:
Can you try to connect to an OpenVPN server? Just to see if it’s really your specific VPN that breaks something, or whether any VPN would do the same.

@Zrubi @apparatus @solene Thanks everyone for trying to help.

I was reading other VPN-related topics on this forum and found a suggestion to override the nameservers.

I just tested it. Changing /etc/resolv.conf in my work qube connected to the proxy VM (with the VPN connected) from:

nameserver 10.139.1.1
nameserver 10.139.1.2

to

nameserver 1.1.1.1

And suddenly both VPN-protected and open-internet resources work!

EDIT: for future reference, see the answer selected as the solution for a better way (one that doesn’t require changing the nameserver).

However, if anyone can provide an explanation of why this worked, that would be much appreciated. Why does Qubes set those to 10.139.1.1 and 10.139.1.2?

Are there any risks in changing that to 1.1.1.1?

Are you able to ping 4.4.4.4 in work qube after this change?

Interestingly, I am not able to ping 4.4.4.4 after this change. But ping google.com works, as does ping <vpn-protected-internal-service>.

EDIT: when I pinged 4.4.4.4 I actually meant to ping 8.8.8.8 (Google public DNS). I think 4.4.4.4 might not exist. Sorry for the confusion.

I guess your VPN provider is blocking ICMP.
It’d be better to check the connection using this command instead:

curl https://1.1.1.1

Regarding your resolv.conf change: it’ll use Cloudflare DNS instead of the DNS server provided by the VPN.
And you’ll need to make that change in every qube connected to the VPN qube.
It’d be better to configure the DNS redirection in the VPN qube itself, so it’s updated automatically, e.g. with the “Fix DNS” script from the Mullvad VPN App setup guide on this forum, or a similar approach.
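As a rough sketch of the idea behind such scripts (assuming the VPN client writes its resolver into /etc/resolv.conf of the VPN qube; the real guides also re-apply the rules whenever the VPN reconnects):

#!/bin/sh
# Redirect the Qubes virtual DNS addresses (10.139.1.1/.2) to the
# VPN-provided resolver by rewriting the dnat-dns chain shown earlier.
# Run as root inside the VPN qube.
VPN_DNS=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)

nft flush chain ip qubes dnat-dns
for QUBES_DNS in 10.139.1.1 10.139.1.2; do
    nft add rule ip qubes dnat-dns ip daddr "$QUBES_DNS" udp dport 53 dnat to "$VPN_DNS"
    nft add rule ip qubes dnat-dns ip daddr "$QUBES_DNS" tcp dport 53 dnat to "$VPN_DNS"
done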

Yes, I can’t ping 4.4.4.4 either; it’s either down or blocking ICMP.

@apparatus I’ve run the script from the Mullvad VPN App 4.2 setup guide - Fix DNS, and the work-qube connection now works (with the default Qubes nameservers).

Thanks!

One issue I found: when I put this script into /rw/config/qubes-firewall-user-script, I could not start the qube, since the script never returns.

Is there a way to run this script in the background? What would be the best place to start it from? I have not tested it, but I suspect moving it to rc.local would have the same effect.

EDIT: never mind, I just found the answer in the original post.
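For future readers: the trick is to launch the watcher in the background from qubes-firewall-user-script, so qube startup doesn’t block on it (the script path below is just where I happened to save it):

# /rw/config/qubes-firewall-user-script
# The DNS-fix script loops forever watching for changes, so start it
# in the background; otherwise the qube never finishes booting.
/rw/config/dns-fix.sh &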

Hmm… I still have an issue, but this time with only one qube. Steps to reproduce:

  • start the ProxyVM qube and connect to the VPN in it

  • start the work qube and set its NetVM to the ProxyVM

  • open Firefox and enter a VPN-protected resource

  • Firefox hangs for a long time (2-3 minutes) and finally displays the page

  • start a disposable qube based on fedora-40 (base template, untouched)

  • set its NetVM to the ProxyVM

  • open Firefox and enter the same VPN-protected resource

  • the page opens in under 50 milliseconds

What could possibly be causing that?