Yes, I used WireGuard, and I generated the configuration here.
I followed this installation guide I kept from some time ago; it's simple to follow.
Port 2049, the standard DNS server, IPv4 and IPv6. I tried Germany and the Netherlands.
So what exactly does not work?
In this setup:
test-appvm → tasket-ivpn → sys-firewall → sys-net
Can you ping 9.9.9.9 or anything else from test-appvm? Does curl/firefox ip.me work in test-appvm?
Tasket and the official IVPN guide work now!!!
It's really slow; I probably need to change the MTU size. Can you please explain how I test it, and what the best size is?
ivpn-proxy (mullvad guide) → mullvad (mullvad guide) → sys…
still refuses to work, still no DNS whatsoever
ivpn-tasket → mullvad (mullvad guide) → sys…
works, slowly
ivpn → sys…
works
ivpn-gui → mullvad (mullvad guide) → sys…
works, slowly
ivpn-tasket → mullvad-tasket → sys…
works, slowly
Also, I am extremely brain-dead, because I often used the Mullvad Browser to test for DNS leaks (apparently it uses its own DNS). So some configurations might have actually worked, sorry…
But dig works now in pretty much every VM. I don’t know what was wrong.
Add
MTU = 1280
in both WireGuard configurations (under [Interface]), then restart the VMs.
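For reference, a minimal client-side WireGuard config with the MTU line in place might look like this (the keys and addresses below are placeholders, not values from this thread):

```ini
[Interface]
# placeholder key and address -- use the values from your generated config
PrivateKey = <client-private-key>
Address = 10.64.0.2/32
DNS = 10.64.0.1
MTU = 1280

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <server-ip>:2049
```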
Also, try to use the following command in the iVPN AppVM:
sudo iptables -t nat -I POSTROUTING 3 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
What does this do?
You can try to find the MTU values for your VMs with ping:
ping -M do -s yyyy x.x.x.x
Where x.x.x.x is any IP address reachable from this VM (e.g. through the tunnel) and yyyy is the ICMP payload size.
Lower the yyyy value from 1472 until the ping starts to work.
MTU = payload size + 28 (20-byte IP header + 8-byte ICMP header)
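The probing procedure above can be sketched as a small script (a hypothetical helper, assuming Linux iputils ping; the script name, step size, and function names are my own choices, not from the thread):

```shell
#!/bin/sh
# probe-mtu.sh -- find a working path MTU by pinging with the
# Don't Fragment bit set (-M do) at decreasing payload sizes.

# MTU = ICMP payload + 20-byte IP header + 8-byte ICMP header
mtu_from_payload() {
    echo $(( $1 + 28 ))
}

# Try payload sizes from 1472 (a 1500-byte MTU equivalent) downwards
# until one gets through, then report the corresponding path MTU.
probe_mtu() {
    host="$1"
    payload=1472
    while [ "$payload" -gt 0 ]; do
        if ping -c 1 -W 2 -M do -s "$payload" "$host" >/dev/null 2>&1; then
            echo "payload $payload works -> path MTU $(mtu_from_payload "$payload")"
            return 0
        fi
        payload=$(( payload - 10 ))    # coarse steps; narrow down manually for precision
    done
    echo "no payload size worked" >&2
    return 1
}

# Only probe when a host is given, so the helpers can be sourced on their own.
if [ -n "${1:-}" ]; then
    probe_mtu "$1"
fi
```

For example, ./probe-mtu.sh 9.9.9.9 run inside the AppVM would report the largest payload that still passes unfragmented.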
Not sure how to handle your MTU issue properly.
The crude way would be to just set the interface MTU with ip command:
sudo ip link set dev eth0 mtu yyyy
Better way would be to allow ICMP fragmentation needed packets or to add TCPMSS clamp iptables rules. But I can’t tell if they will work for sure.
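For the clamp variant: the iptables-extensions man page documents the TCPMSS target for the mangle table, and the commonly cited form of the rule (not verified on Qubes specifically) is:

```shell
# Rewrite the MSS in forwarded TCP SYN packets to fit the discovered path MTU
sudo iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```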
To keep it simple: it makes sure that all AppVMs attached to the VPN VM use the MTU that is set in the WireGuard configuration, so you don't have to change all of them manually.
This post contains fixes for WireGuard VPN issues on PPPoE connections. These may include connection drops, timeouts, or other intermittent issues.
Does the server not normally know the MTU size?
To keep it simple: it makes sure that all AppVMs attached to the VPN VM use the MTU that is set in the WireGuard configuration, so you don't have to change all of them manually.
Okay, that's fantastic. Does it also work with ICMP disabled?
Anyway, huge thanks for sticking with me through this pain. Thank you so much!
I drop ICMP in sys-firewall and it works. Should not be a problem if you do the same.
Did you apply this in the rc.local?
You can place it there in the VPN VM yes.
Add
MTU = 1280
in both WireGuard configurations (under [Interface]), then restart the VMs.
Also, try to use the following command in the iVPN AppVM:
sudo iptables -t nat -I POSTROUTING 3 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
Works really well with this, thanks!
What would I need to change for Qubes 4.2? nftables
A pull request about nftables is pending in the tasket vpn repo. Once it's merged, you'll have to update the VPN VM with the new version for 4.2.
sudo iptables -t nat -I POSTROUTING 3 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
I mean this:
Without it I get no proper connection. Or should I just change the MTU manually? Or does the command do more than that?
I can’t really help with this, I’m not familiar with nftables unfortunately.
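For anyone landing on this thread later: the nftables counterpart of the TCPMSS clamp is usually written with "tcp option maxseg size set rt mtu" (syntax per the nftables documentation; the table and chain names below are placeholders and untested on Qubes 4.2):

```shell
# Placeholder table/chain names -- adjust to the chains your Qubes 4.2 VPN VM actually uses
sudo nft add rule ip qubes custom-forward tcp flags syn tcp option maxseg size set rt mtu
```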