So I would need to change the MTU size for every VM like this, manually? And that works? Or does the command do more?
Yes, you will have to do this manually on every single VM that uses the VPN VM.
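For reference, a minimal sketch of what "manually" means here (assuming a Linux VM whose interface is eth0; the interface name and the value 1400 are just placeholders for your setup):

ip link set dev eth0 mtu 1400

This does not survive a reboot, so it would have to be reapplied on startup, for example from the VM's startup script (/rw/config/rc.local on Qubes).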
After looking for a bit, it seems like you can translate an iptables rule to nftables using iptables-translate.
This is what I get for the rule:
iptables-translate -A POSTROUTING -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
nft 'add rule ip filter POSTROUTING tcp flags syn / syn,rst counter tcp option maxseg size set rt mtu'
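A side note on that output (an observation, not something I have tested here): without -t, iptables-translate assumes the filter table, which has no POSTROUTING chain. Since TCPMSS rules normally live in the mangle table, adding -t mangle to the translate command should give the table you actually want, e.g.:

iptables-translate -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

which should print something like:

nft 'add rule ip mangle POSTROUTING tcp flags syn / syn,rst counter tcp option maxseg size set rt mtu'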
I tested a lot with adding TCPMSS to various tables. When I added it to POSTROUTING (in my case) or just to FORWARD, the MTU problems on clearnet were solved, but Whonix's sdwdate could not adjust the time. If anyone has the same problems with Whonix/Tor, you can add these rules:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
nft 'add rule ip mangle FORWARD tcp flags syn / syn,rst counter tcp option maxseg size set rt mtu'
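For anyone running native nftables rather than the iptables-nft compatibility layer, a minimal sketch (assuming no ip mangle table exists yet; the table and chain names are just conventions carried over from iptables): the add rule command only works if the table and chain already exist, so you would create them with a forward hook first:

nft add table ip mangle
nft 'add chain ip mangle FORWARD { type filter hook forward priority -150; policy accept; }'
nft 'add rule ip mangle FORWARD tcp flags syn / syn,rst counter tcp option maxseg size set rt mtu'

Older nft versions may not accept the 'tcp flags syn / syn,rst' shorthand; 'tcp flags & (syn|rst) == syn' is the equivalent long form.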
Recent discussion about the MTU problem on the Whonix forum:
Tor is not yet fully bootstrapped. 30 % done - KVM - Whonix Forum
The problem with Whonix, I believe, was related to ICMP, and they are now testing the GATEWAY_ALLOW_INCOMING_ICMP_FRAG_NEEDED option. This is good news.
As for this topic, it is better to use the FORWARD chain rather than POSTROUTING to solve MTU problems; there is no need to do MSS clamping for all traffic.
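If you want to avoid clamping all forwarded traffic, one option (a hedged example; tun0 is a placeholder for whatever your VPN interface is actually called) is to match only packets leaving through the VPN interface:

iptables -t mangle -A FORWARD -o tun0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
nft 'add rule ip mangle FORWARD oifname "tun0" tcp flags syn / syn,rst counter tcp option maxseg size set rt mtu'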