I’m using openconnect (Cisco AnyConnect) for an institution I’m associated with. I can get the VPN working inside the respective AppVM, but any qubes I connect through it are blocked. I think it’s a routing issue, but the last two hours of searching have turned up nothing.
What’s the output of these commands in VPNVM when openconnect is connected?
ip rule
ip route
ip rule:
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
ip route:
default dev tun0 scope link
<connected-appvm-ip> dev vif46.0 scope link metric 32706
<firewallvm-ip> dev eth0 scope link
<ip-related-to-vpn> via <firewallvm-ip> dev tun0
<different-ip-related-to-vpn> dev tun0 scope link
I’ve been experimenting with the routes, and this was the original.
The routes look good.
Did you try to ping some IP instead of a domain name in the qubes connected to VPNVM?
Maybe it’s an issue with DNS resolution in the qubes?
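You could also double-check that the VPNVM is actually forwarding downstream traffic out the tunnel. A quick check along these lines, using the AppVM IP and vif name from your ip route output (adjust to your setup):

cat /proc/sys/net/ipv4/ip_forward
ip route get 1.1.1.1 from <connected-appvm-ip> iif vif46.0

The first should print 1, and the second should show the forwarded path going out dev tun0.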
Pinged the IPs related to the VPN server, nothing. Pinged 1.1.1.1, nothing.
Again, everything works when connected to the VPN inside the VPNVM itself (browsing, ping, etc.).
Wait a minute. Now pinging 1.1.1.1 is working.
Check the firewall rules in the VPNVM; maybe openconnect adds some forward-blocking rules?
sudo nft list ruleset
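If the full ruleset is long, the forwarding path is usually the interesting part; you can list just those chains:

sudo nft list chain ip qubes forward
sudo nft list chain ip qubes custom-forward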
If it’s a DNS issue, you can try this:
Wireguard VPN setup
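The usual pattern is that plain IPs work but names don’t, because the connected qubes send their DNS queries to the VPNVM, which still forwards them to the pre-VPN resolvers. A rough sketch of the nftables fix, assuming your VPN’s DNS server is <vpn-dns-ip> (take it from what openconnect reports on connect); the dnat-dns chain name here is just an example:

sudo nft add chain ip qubes dnat-dns '{ type nat hook prerouting priority dstnat; policy accept; }'
sudo nft add rule ip qubes dnat-dns iifgroup 2 udp dport 53 dnat to <vpn-dns-ip>
sudo nft add rule ip qubes dnat-dns iifgroup 2 tcp dport 53 dnat to <vpn-dns-ip>

That rewrites any DNS query arriving from a downstream qube (iifgroup 2) so it goes to the VPN’s resolver instead.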
Okay, now pinging the institution’s IP is working. It’s looking more and more like a DNS issue.
nft rules incoming.
table ip qubes {
    set downstream {
        type ipv4_addr
    }

    set allowed {
        type ifname . ipv4_addr
    }

    chain prerouting {
        type filter hook prerouting priority raw; policy accept;
        iifgroup 2 goto antispoof
        ip saddr @downstream counter packets 0 bytes 0 drop
    }

    chain antispoof {
        iifname . ip saddr @allowed accept
        counter packets 0 bytes 0 drop
    }

    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifgroup 2 accept
        oif "lo" accept
        masquerade
    }

    chain input {
        type filter hook input priority filter; policy drop;
        jump custom-input
        ct state invalid counter packets 0 bytes 0 drop
        iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
        ct state established,related accept
        iifgroup 2 meta l4proto icmp accept
        iif "lo" accept
        iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
        counter packets 0 bytes 0
    }

    chain forward {
        type filter hook forward priority filter; policy accept;
        jump custom-forward
        ct state invalid counter packets 0 bytes 0 drop
        ct state established,related accept
        oifgroup 2 counter packets 0 bytes 0 drop
    }

    chain custom-input {
    }

    chain custom-forward {
    }
}
table ip6 qubes {
    set downstream {
        type ipv6_addr
    }

    set allowed {
        type ifname . ipv6_addr
    }

    chain antispoof {
        iifname . ip6 saddr @allowed accept
        counter packets 0 bytes 0 drop
    }

    chain prerouting {
        type filter hook prerouting priority raw; policy accept;
        iifgroup 2 goto antispoof
        ip6 saddr @downstream counter packets 0 bytes 0 drop
    }

    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifgroup 2 accept
        oif "lo" accept
        masquerade
    }

    chain _icmpv6 {
        meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
        icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
        accept
    }

    chain input {
        type filter hook input priority filter; policy drop;
        jump custom-input
        ct state invalid counter packets 0 bytes 0 drop
        ct state established,related accept
        iifgroup 2 goto _icmpv6
        iif "lo" accept
        ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
        meta l4proto ipv6-icmp accept
        counter packets 0 bytes 0
    }

    chain forward {
        type filter hook forward priority filter; policy accept;
        jump custom-forward
        ct state invalid counter packets 0 bytes 0 drop
        ct state established,related accept
        oifgroup 2 counter packets 0 bytes 0 drop
    }

    chain custom-input {
    }

    chain custom-forward {
    }
}
The rules are good.
Then I guess it’s a DNS issue.
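A quick way to confirm it from one of the connected qubes is to resolve a name through the default resolver and then directly against a public one (assuming nslookup is installed there):

nslookup qubes-os.org
nslookup qubes-os.org 1.1.1.1

If the first fails and the second succeeds, it’s purely name resolution, and redirecting downstream DNS to the VPN’s resolver as sketched above should fix it.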