Provide internet from a StandAlone qube

Hi,
I am a new user of this beautiful OS, and while studying it I ran into the following question. I created a standalone qube based on Fedora 38 and installed the NordVPN CLI client in it. Everything went great. After that, I wanted to provide internet from this qube to other qubes, so I enabled the appropriate option ("provides network"), but the other qubes did not get internet access. What could be the problem?

Can you ping an IP address (e.g. 1.1.1.1) from one of the linked qubes, or does it only fail for domain names, for example when using a browser?
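
For example, something like this from one of the linked qubes would show whether only name resolution is failing (10.139.1.1 is the usual first Qubes virtual DNS address; check /etc/resolv.conf in that qube to confirm):

ping -c 3 1.1.1.1
nslookup qubes-os.org 10.139.1.1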

I executed the ping command. I got the following result:

64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=121 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=105 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=103 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=125 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=114 ms
64 bytes from 1.1.1.1: icmp_seq=6 ttl=58 time=117 ms
64 bytes from 1.1.1.1: icmp_seq=7 ttl=58 time=112 ms
64 bytes from 1.1.1.1: icmp_seq=8 ttl=58 time=109 ms
64 bytes from 1.1.1.1: icmp_seq=9 ttl=58 time=105 ms
64 bytes from 1.1.1.1: icmp_seq=10 ttl=58 time=109 ms
64 bytes from 1.1.1.1: icmp_seq=11 ttl=58 time=106 ms
64 bytes from 1.1.1.1: icmp_seq=12 ttl=58 time=104 ms

At the same time, websites do not open in Firefox.

Can you run the following command inside your VPN qube and then try loading a website again from a qube that uses it as its netvm?

sudo /usr/lib/qubes/qubes-setup-dnat-to-ns
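
If that doesn't help, it may be worth checking whether the script actually recreated the DNS DNAT rules (the table and chain names below assume a stock 4.2 setup):

sudo nft list chain ip qubes dnat-dns
cat /etc/resolv.conf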

Unfortunately, nothing has changed. I forgot to mention that I am using Qubes OS 4.2.

Also, I have another qube, an AppVM, where I set up the VPN in the standard way via Network Manager, and through that qube everything works normally.

This could be due to the way the application handles network requests. Can you run the following two commands inside the VPN qube and share the output here?

sudo nft list ruleset
cat /etc/resolv.conf

Here it is:

table ip qubes {
	set downstream {
		type ipv4_addr
		elements = { 10.137.0.18 }
	}

	set allowed {
		type ifname . ipv4_addr
		elements = { "vif15.0" . 10.137.0.18 }
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip saddr @downstream counter packets 0 bytes 0 drop
	}

	chain antispoof {
		iifname . ip saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 3 bytes 893 drop
		iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 meta l4proto icmp accept
		iif "lo" accept
		iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
		counter packets 1 bytes 266
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}

	chain dnat-dns {
		type nat hook prerouting priority dstnat; policy accept;
		ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
		ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
	}
}
table ip6 qubes {
	set downstream {
		type ipv6_addr
	}

	set allowed {
		type ifname . ipv6_addr
	}

	chain antispoof {
		iifname . ip6 saddr @allowed accept
		counter packets 7 bytes 508 drop
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip6 saddr @downstream counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain _icmpv6 {
		meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
		icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
		accept
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 goto _icmpv6
		iif "lo" accept
		ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
		meta l4proto ipv6-icmp accept
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}
}
table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
		ip saddr 10.137.0.18 jump qbs-10-137-0-18
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifname != "vif*" ip saddr 10.137.0.18 drop
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
		oifname != "vif*" ip daddr 10.137.0.18 drop
	}

	chain qbs-10-137-0-18 {
		accept
		reject with icmp admin-prohibited
	}
}
table ip6 qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
	}
}
table ip filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
		iifname "vif13.0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		iifname "eth0" ct mark 0xe1f1  counter packets 1448 bytes 794288 accept
		iifname "vif13.0"  counter packets 0 bytes 0 drop
		iifname "eth0"  counter packets 27 bytes 1248 drop
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
		oifname "vif13.0" meta mark 0x0000e1f1  counter packets 0 bytes 0 ct mark set mark
		oifname "vif13.0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		oifname "eth0" meta mark 0x0000e1f1  counter packets 1878 bytes 391256 ct mark set mark
		oifname "eth0" ct mark 0xe1f1  counter packets 1881 bytes 392560 accept
		oifname "vif13.0"  counter packets 0 bytes 0 drop
		oifname "eth0"  counter packets 1 bytes 73 drop
	}
}
table ip6 filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
		iifname "vif13.0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		iifname "eth0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		iifname "vif13.0"  counter packets 0 bytes 0 drop
		iifname "eth0"  counter packets 0 bytes 0 drop
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
		oifname "vif13.0" meta mark 0x0000e1f1  counter packets 0 bytes 0 ct mark set mark
		oifname "vif13.0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		oifname "eth0" meta mark 0x0000e1f1  counter packets 0 bytes 0 ct mark set mark
		oifname "eth0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		oifname "vif13.0"  counter packets 0 bytes 0 drop
		oifname "eth0"  counter packets 0 bytes 0 drop
	}
}
table inet qubes-nat-accel {
	flowtable qubes-accel {
		hook ingress priority filter
		devices = { eth0, lo, nordlynx, vif15.0 }
	}

	chain qubes-accel {
		type filter hook forward priority filter + 5; policy accept;
		meta l4proto { tcp, udp } iifgroup 2 oifgroup 1 flow add @qubes-accel
		counter packets 294 bytes 23035
	}
}
nameserver 10.139.1.1
nameserver 10.139.1.2

Was the VPN active when you ran these commands? If not, run them again with it connected to a server.

From what I can see in your output, the VPN client doesn’t seem to be changing the DNS configuration or adding any related rules to nftables.
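
If you want to double-check what the client itself is configuring, something along these lines might help (nordvpn status and nordvpn settings are part of the NordVPN CLI; resolvectl only applies if your template uses systemd-resolved):

nordvpn status
nordvpn settings
resolvectl status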

Just in case, I ran “sudo nft list ruleset” again. The output of the second command (cat /etc/resolv.conf) is exactly the same as before. This time I am sure the VPN is connected. Here is the result:

table ip qubes {
	set downstream {
		type ipv4_addr
	}

	set allowed {
		type ifname . ipv4_addr
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip saddr @downstream counter packets 0 bytes 0 drop
	}

	chain antispoof {
		iifname . ip saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 1 bytes 40 drop
		iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 meta l4proto icmp accept
		iif "lo" accept
		iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
		counter packets 2 bytes 266
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}

	chain dnat-dns {
		type nat hook prerouting priority dstnat; policy accept;
		ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
		ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
		ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
	}
}
table ip6 qubes {
	set downstream {
		type ipv6_addr
	}

	set allowed {
		type ifname . ipv6_addr
	}

	chain antispoof {
		iifname . ip6 saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip6 saddr @downstream counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain _icmpv6 {
		meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
		icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
		accept
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 goto _icmpv6
		iif "lo" accept
		ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
		meta l4proto ipv6-icmp accept
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}
}
table ip qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifname != "vif*" ip saddr 10.137.0.18 drop
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
		oifname != "vif*" ip daddr 10.137.0.18 drop
	}
}
table ip6 qubes-firewall {
	chain forward {
		type filter hook forward priority filter; policy drop;
		ct state established,related accept
		iifname != "vif*" accept
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
	}

	chain postrouting {
		type filter hook postrouting priority raw; policy accept;
	}
}
table ip filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
		iifname "eth0" ct mark 0xe1f1  counter packets 2261 bytes 1364833 accept
		iifname "eth0"  counter packets 31 bytes 12971 drop
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
		oifname "eth0" meta mark 0x0000e1f1  counter packets 2614 bytes 465105 ct mark set mark
		oifname "eth0" ct mark 0xe1f1  counter packets 2614 bytes 465105 accept
		oifname "eth0"  counter packets 749 bytes 59115 drop
	}
}
table ip6 filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
		iifname "eth0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		iifname "eth0"  counter packets 0 bytes 0 drop
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
		oifname "eth0" meta mark 0x0000e1f1  counter packets 0 bytes 0 ct mark set mark
		oifname "eth0" ct mark 0xe1f1  counter packets 0 bytes 0 accept
		oifname "eth0"  counter packets 4 bytes 224 drop
	}
}

Based on this link, they seem to use 103.86.96.100 and 103.86.99.100 for their DNS.

Can you run the following commands and test your clients again?

sudo nft flush chain ip qubes dnat-dns
sudo nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to 103.86.96.100
sudo nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to 103.86.99.100
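
If that works, keep in mind these rules won't survive a restart of the VPN qube. One way to reapply them is to add the same commands (without sudo) to /rw/config/rc.local and make that file executable. A rough sketch, using the same NordVPN DNS addresses (note the stock qubes-setup-dnat-to-ns script may rewrite this chain again if the uplink DNS changes):

# appended to /rw/config/rc.local (then: sudo chmod +x /rw/config/rc.local)
nft flush chain ip qubes dnat-dns
nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to 103.86.96.100
nft add rule ip qubes dnat-dns meta l4proto { tcp, udp } ip daddr { 10.139.1.1, 10.139.1.2 } th dport 53 dnat to 103.86.99.100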

Yup! Thanks a lot! That really works!

After a while, unfortunately, I found that the solution described above only partially solves the problem. Internet access did appear in the qubes that get their network from the VPN qube, but it turned out that there is an unexplained traffic leak. That is, I enabled the killswitch feature in the VPN qube, and when the VPN disconnects there is no leak in the VPN qube itself, but somehow the qubes connected to the VPN qube keep getting internet access, and that traffic naturally goes not through the VPN provider but directly via sys-net. Maybe someone has an idea how this happens?

This would mean that the killswitch provided by your VPN provider is not 100% leak-proof. If it still allows traffic to be forwarded outside the VPN tunnel, then that is a problem with how they handle their firewall rules. If done correctly, everything should be locked down and no packets should be able to leave the system running the VPN application except through the tunnel. As a last resort, you could use the Qubes firewall mechanisms in the VPN qube to limit the leaks yourself (see the sketch below).
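
For example, a minimal sketch in the VPN qube, assuming the tunnel interface is called nordlynx (as in your flowtable output) and eth0 is the uplink to sys-net, would be to drop any forwarded traffic that tries to leave through eth0 instead of the tunnel:

sudo nft insert rule ip qubes custom-forward oifname "eth0" counter drop

Connections already offloaded to the qubes-accel flowtable may keep flowing until they expire, so test with fresh connections. To make the rule persistent, you could add it (without sudo) to /rw/config/qubes-firewall-user-script.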

It’s quite possible that the killswitch only operates on traffic originating from the VPN host itself, and not on traffic that is routed through the qube. That may not be any reflection on the provider, depending on how they expect the VPN to be used. Without knowing more details of the provider and your setup, it’s difficult to speculate.