Isolating AppVM Network Traffic in Qubes OS using Network Namespaces

Hello,

I’m trying to implement a feature in Qubes OS that would allow each AppVM to use a separate VPN tunnel, potentially with different providers and regions, simultaneously. To achieve this, I’m exploring the use of temporary network namespaces for each AppVM.

The idea is to create a new network namespace for each AppVM, and then establish a VPN tunnel within that namespace. This would allow each AppVM to have its own isolated network connection, with its own VPN configuration, without affecting the other AppVMs.

To make this work, I need to be able to do the following (roughly sketched after the list):

  • Create a new network namespace for each AppVM
  • Move the AppVM’s vif from sys-firewall to the new namespace
  • Establish a VPN tunnel within the namespace
  • Route traffic from the AppVM through the VPN tunnel
  • Allow multiple AppVMs to have their own separate VPN tunnels, with different providers and regions, simultaneously
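
Conceptually, what I have in mind per AppVM is something like the sketch below. This is rough and untested; ns-appvm1, vifX.0, wg-appvm1 and <tunnel_ip> are just placeholders, and WireGuard is only an example of the VPN piece:

# create a dedicated namespace for this AppVM
ip netns add ns-appvm1

# move the AppVM's vif from sys-firewall into the namespace
ip link set vifX.0 netns ns-appvm1
ip netns exec ns-appvm1 ip link set vifX.0 up

# create the tunnel interface in sys-firewall's main namespace first, so its
# UDP socket stays where the uplink is, then move it into the namespace
ip link add wg-appvm1 type wireguard
ip link set wg-appvm1 netns ns-appvm1
ip netns exec ns-appvm1 wg setconf wg-appvm1 /etc/wireguard/wg-appvm1.conf
ip netns exec ns-appvm1 ip addr add <tunnel_ip>/32 dev wg-appvm1
ip netns exec ns-appvm1 ip link set wg-appvm1 up

# route everything leaving the namespace through the tunnel
ip netns exec ns-appvm1 ip route add default dev wg-appvm1

The part this sketch leaves out, and where I’m stuck, is making the AppVM’s gateway address answer inside the namespace and getting traffic forwarded between the vif and the tunnel.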

However, I’m running into a problem when I move the AppVM’s vif from sys-firewall into the namespace. As soon as I do this, the AppVM can no longer route traffic and nothing is reachable by ping. It seems like the routing table is not being updated correctly, or the traffic is not being forwarded as expected.

I’ve tried using veth pairs to connect the network namespaces to sys-firewall, but I’m having trouble getting the traffic to be forwarded correctly. I’ve also tried enabling IP forwarding and setting up routes, but it’s not working as expected.

Can anyone provide guidance on how to achieve this in Qubes OS, or point me in the direction of any relevant documentation or examples? I’d appreciate any help or advice on how to make this work.

Below are the commands I ran in sys-firewall to set this up:

ip netns add test
ip link add veth0 type veth peer name veth1
ip link set veth1 netns test
ip link set <appvm_vif> netns test
ip netns exec test ip link set <appvm_vif> up
ip netns exec test ip link set veth1 up
ip link set veth0 up

After running these commands, the AppVM is no longer able to ping anything, and the routing table seems to be incorrect. Can anyone help me figure out what I’m doing wrong, or provide a working example of how to isolate an AppVM’s network traffic in Qubes OS?

All this in a single netvm?

Yes, the idea would be to use sys-firewall for this, since it already holds the vifs for all the AppVMs, so it’s easier to handle everything in a single sys VM.

Why don’t you create a separate sys-vpn for each VPN configuration you want to use, and then assign each of them to the appropriate AppVM as its network VM? You shouldn’t need namespaces, nor would you have to touch sys-firewall at all for that. Or am I misunderstanding what you want to accomplish?

Simply put, creating a separate sys-vpn for each VPN configuration doesn’t scale: every tunnel needs its own VM, each with its own memory and resource overhead, which adds up to a large footprint. You can always kill a fly with a cannon, but that would be suboptimal.
In contrast, network namespaces offer a more efficient and scalable approach, allowing multiple isolated network connections to be managed dynamically with minimal overhead.

It seems that no one has a clear answer to this question?

There is a similar feature implemented already. It isn’t for VPN, but for extra NAT. Take a look at qubes-core-agent-linux/network/vif-qubes-nat.sh at main · QubesOS/qubes-core-agent-linux · GitHub and its usage in the vif-route-qubes script.
This specific feature can be enabled by setting the net.fake-ip feature on the AppVM (see the qvm-features man page).
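
For example, from dom0 something like this should enable it (the IP value here is just an arbitrary example):

# dom0: assign a fake IP to the AppVM; the vif setup scripts then create the
# NAT namespace for it the next time its network is set up
qvm-features <appvm_name> net.fake-ip 192.168.1.100
# restarting the AppVM (or re-attaching its netvm) should apply the change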

Hi Marmarek,

Thanks for sharing this information. I wasn’t aware that Qubes has an existing implementation using namespaces, although it serves a different purpose. The script sets up a NATed network connection between the AppVM and the NetVM, allowing the AppVM to access the external network through the NetVM while hiding its own IP address.
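
As far as I understand it, the core trick is roughly the following (my own simplified reading, not the actual script contents; the namespace name and <real_ip>/<fake_ip> are placeholders):

# per vif, a namespace does 1:1 NAT between the address the AppVM believes it
# has (<fake_ip>) and the address the rest of the Qubes network uses (<real_ip>)
ip netns exec vif-ns nft 'add table ip nat'
ip netns exec vif-ns nft 'add chain ip nat prerouting { type nat hook prerouting priority -100; }'
ip netns exec vif-ns nft 'add chain ip nat postrouting { type nat hook postrouting priority 100; }'
ip netns exec vif-ns nft 'add rule ip nat prerouting ip daddr <real_ip> dnat to <fake_ip>'
ip netns exec vif-ns nft 'add rule ip nat postrouting ip saddr <fake_ip> snat to <real_ip>'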

However, my goal is to implement a custom solution using namespaces that works along similar lines. For now, I’m focused on getting the AppVM connected to the internet from behind a dedicated namespace in sys-firewall, without yet worrying about a different VPN config for each namespace.

To summarize, I created a namespace for the test AppVM, moved the corresponding vif14.0 interface to the namespace, created a veth pair, and assigned IP addresses to the veth pair. Specifically, I ran the following commands on the sys-firewall:

ip netns add test
ip link set vif14.0 netns test
ip link add name veth-vif14 type veth peer name veth-vif14-peer
ip link set veth-vif14-peer netns test
ip addr add 10.137.0.7/32 dev veth-vif14
ip netns exec test ip addr add 10.137.0.22/32 dev veth-vif14-peer

After this, I updated the nft table to allow traffic from the AppVM to the sys-firewall by adding the following rule:

nft add element ip qubes allowed { "veth-vif14" . 10.137.0.22 }

I also added a route to the sys-firewall’s routing table to point to the veth-vif14 interface:

ip route add 10.137.0.22 dev veth-vif14 scope link metric 32738

However, despite these efforts, I was unable to get the AppVM to ping the sys-firewall or anything on the internet. I tried troubleshooting the issue by checking the nft table, IP addresses, and routes, but I was unable to resolve the problem. The AppVM still has a single default route to the sys-firewall:

default via 10.137.0.7 dev eth0 onlink
10.137.0.7 dev eth0 scope link

I also reviewed the vif-qubes-nat.sh script, which seems to achieve a similar goal to my custom implementation. Unfortunately, even after reviewing the script and trying to apply the same concepts, I was still unable to get the AppVM to ping the sys-firewall.

At this point, I’m unsure what I’m missing. I’ve tried to configure the namespace, veth pair, and IP addresses correctly, and I’ve updated the nft table and routing configuration accordingly. I’m starting to think that there might be something else that I need to configure or enable in order to get this working. If you have any further suggestions or guidance, I would greatly appreciate it.

Thanks!

I don’t see you enabling ip_forward, but I think that should be enabled by default.

Another thing: I don’t see you setting a MAC address on the veth ends. Qubes network scripts add static neighbor entries, so if the MAC doesn’t match, it may cause issues. See how it’s done in the script I linked before.
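
Also, as far as I can tell from the commands you pasted, nothing brings the links up or adds any routes inside the namespace. With /32 addresses there are no connected routes, so you’d probably need at least something along these lines (untested; <appvm_ip> is the AppVM’s own address, and if that is the 10.137.0.22 you assigned to veth-vif14-peer, those two uses will conflict):

ip link set veth-vif14 up
ip netns exec test ip link set vif14.0 up
ip netns exec test ip link set veth-vif14-peer up

# routes inside the namespace: towards sys-firewall, towards the AppVM, plus a default
ip netns exec test ip route add 10.137.0.7 dev veth-vif14-peer scope link
ip netns exec test ip route add default via 10.137.0.7 dev veth-vif14-peer
ip netns exec test ip route add <appvm_ip> dev vif14.0 scope link

# ip_forward is a per-namespace setting, so enable it inside the namespace too
ip netns exec test sysctl -w net.ipv4.ip_forward=1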

Hi Marmarek,

Thanks for the suggestions!

I’ve tried again with the updated configuration, this time changing the MAC address of the veth pair to match the other vif interfaces and including the neighbor configuration.

I created a namespace and moved the vif17.0 interface to it, then created a veth pair and assigned IP addresses to it. I also set the MAC address of the veth pair to fe:ff:ff:ff:ff:ff and added the neighbor entries to the configuration.
These are the commands I’m invoking:

ip netns add test
ip link set vif17.0 netns test
ip link add name veth-vif17 type veth peer name veth-vif17-peer
ip link set veth-vif17-peer netns test
ip addr add 10.137.0.7/32 dev veth-vif17
ip netns exec test ip addr add 10.137.0.22/32 dev veth-vif17-peer
ip link set veth-vif17 address fe:ff:ff:ff:ff:ff
ip netns exec test ip link set veth-vif17-peer address fe:ff:ff:ff:ff:ff
ip link set veth-vif17 up
ip netns exec test ip link set veth-vif17-peer up
ip neighbor add 10.137.0.22 dev veth-vif17 lladdr fe:ff:ff:ff:ff:ff nud permanent
ip netns exec test ip neighbor add 10.137.0.7 dev veth-vif17-peer lladdr fe:ff:ff:ff:ff:ff nud permanent
nft delete element ip qubes allowed { "vif17" . 10.137.0.22 }
nft add element ip qubes allowed { "veth-vif17" . 10.137.0.22 }
ip route add 10.137.0.22 dev veth-vif17 scope link metric 32745

This is the output I get when invoking 'ip address show' for the veth-vif17 interface, which shows the MAC is now set correctly:

21: veth-vif17@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff link-netns test
    inet 10.137.0.7/32 scope global veth-vif17
       valid_lft forever preferred_lft forever
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

I also checked and ip_forward is enabled:

cat /proc/sys/net/ipv4/ip_forward
1

However, despite these changes, I’m still unable to ping the IP address of the sys-firewall (10.137.0.7), which is the default route for the AppVM.
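
The next thing I plan to try is checking the namespace’s own ip_forward value and watching where the packets actually stop, e.g.:

# ip_forward is per-namespace, so check it inside "test" as well
ip netns exec test cat /proc/sys/net/ipv4/ip_forward

# watch the AppVM's ARP/ICMP arriving on the vif inside the namespace ...
ip netns exec test tcpdump -ni vif17.0

# ... and on both veth ends, to see how far the packets get
ip netns exec test tcpdump -ni veth-vif17-peer
tcpdump -ni veth-vif17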

If you have any further suggestions or ideas, I’d be happy to hear them. I’m still trying to get this working, but it’s proving to be more challenging than I expected.

Thanks again for your help and suggestions!