Static MAC address but only a connection in sys-net

I need a little help. I’ve followed this page

The sys-net MAC stays the same, but other VMs I open have random addresses. I can ping an IP from sys-net, so the connection is up there, but a VM connected to sys-net doesn’t have an internet connection. I’m on the latest version of Qubes. The network I am on limits the number of MAC addresses that are allowed to connect from my account, so my Qubes machine needs to be seen as only one device. Any ideas on what I can do?


Only the MAC address of the network controller attached to sys-net will be visible on your network. Also, all qubes use the same MAC address internally.
When you say you don’t have Internet in a qube, what exactly have you tried?

The internet test I am doing is pinging Google’s DNS. sys-net works just fine. The Personal VM has 100% packet loss.

In a terminal in your “Personal” qube, can you share the output of each command?

ip a
ip r
sudo nft list ruleset
sudo systemctl status qubes-network-uplink@eth0.service
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group 1 qlen 1000
    link/ether 00:16:3e:5e:6c:00 brd ff:ff:ff:ff:ff:ff
    inet 10.137.0.12/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe5e:6c00/64 scope link 
       valid_lft forever preferred_lft forever

$ ip r
default via 10.138.13.53 dev eth0 onlink 
10.138.13.53 dev eth0 scope link 

$ sudo nft list ruleset
table ip qubes {
	set downstream {
		type ipv4_addr
	}

	set allowed {
		type ifname . ipv4_addr
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip saddr @downstream counter packets 0 bytes 0 drop
	}

	chain antispoof {
		iifname . ip saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 meta l4proto icmp accept
		iif "lo" accept
		iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}
}
table ip6 qubes {
	set downstream {
		type ipv6_addr
	}

	set allowed {
		type ifname . ipv6_addr
	}

	chain antispoof {
		iifname . ip6 saddr @allowed accept
		counter packets 0 bytes 0 drop
	}

	chain prerouting {
		type filter hook prerouting priority raw; policy accept;
		iifgroup 2 goto antispoof
		ip6 saddr @downstream counter packets 0 bytes 0 drop
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifgroup 2 accept
		oif "lo" accept
		masquerade
	}

	chain _icmpv6 {
		meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
		icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
		accept
	}

	chain input {
		type filter hook input priority filter; policy drop;
		jump custom-input
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		iifgroup 2 goto _icmpv6
		iif "lo" accept
		ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
		meta l4proto ipv6-icmp accept
		counter packets 0 bytes 0
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		jump custom-forward
		ct state invalid counter packets 0 bytes 0 drop
		ct state established,related accept
		oifgroup 2 counter packets 0 bytes 0 drop
	}

	chain custom-input {
	}

	chain custom-forward {
	}
}

$ sudo systemctl status qubes-network-uplink@eth0.service
● qubes-network-uplink@eth0.service - Qubes network uplink (eth0) setup
     Loaded: loaded (/lib/systemd/system/qubes-network-uplink@.service; static)
     Active: active (exited) since Fri 2024-04-12 22:52:55 BST; 1h 3min ago
    Process: 531 ExecStart=/usr/lib/qubes/setup-ip add eth0 (code=exited, status=0/SUCCESS)
   Main PID: 531 (code=exited, status=0/SUCCESS)
        CPU: 24ms

Apr 12 22:52:55 personal systemd[1]: Starting qubes-network-uplink@eth0.service - Qubes network uplink (eth0) setup...
Apr 12 22:52:55 personal systemd[1]: Finished qubes-network-uplink@eth0.service - Qubes network uplink (eth0) setup.

$

It seems fine.
Can you do the same thing in sys-net now?

Can you also confirm that you are also unable to ping anything in sys-firewall?

ping 1.1.1.1

Have you tried with different templates, like Debian, or with Whonix if you have it?
Also, do you have a phone with tethering capabilities or another network you can connect to? Just to test and confirm that it’s a problem with Qubes and not the network itself.

I forgot to ask: Did the network issue start after you followed the thread you were talking about, or were you never able to reach anything from qubes like “Personal” in the first place?

Running ip a in sys-net shows my real MAC address. sys-firewall doesn’t have a connection either, and pinging 1.1.1.1 produces the same results. Different templates, including Whonix, Fedora, and Debian, all behave the same.

Using a different network, it works fine. Qubes cannot connect to this specific network (which I want it to). The network itself is working: I can ping from sys-net, and I have a completely fine connection on other computers on the same network, some running Debian, Arch, and Windows. If I set the MAC as static and whitelist it on my account with the network, it works. The network is paid for and has a maximum number of devices per account. Qubes must be seen as one device, but it clearly isn’t somehow. The network must somehow be detecting the VMs, with their fake MAC addresses, as different computers, despite all of them connecting through sys-net.

I’m not sure how it works in Qubes OS, or whether it fixes the TTL in qubes or not, but maybe your network provider is checking the TTL of the packets.

I’ve checked, and the TTL is decremented as packets pass through the qubes, so I guess that’s how your network provider is filtering the packets.
Try fixing the TTL in sys-net and check if that helps.

Try to run these commands in sys-net and check the networking in sys-firewall:

sudo nft add chain ip qubes mangle_forward '{ type filter hook forward priority mangle; policy accept; }'
sudo nft add rule ip qubes mangle_forward oifname eth0 ip ttl set 65

Change eth0 to the name of your network interface in sys-net.
If that doesn’t work, then maybe change the TTL from 65 to 64.
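To confirm the chain and rule actually landed, and whether forwarded packets are hitting them, you can list the chain and watch its counters (a quick sanity check; `mangle_forward` and `eth0` are the names used in the commands above):

```shell
# Show the chain with rule handles to confirm the TTL rewrite is installed
sudo nft -a list chain ip qubes mangle_forward

# Optional: append a counter rule so you can see whether forwarded
# packets leaving eth0 are traversing this chain at all
sudo nft add rule ip qubes mangle_forward oifname eth0 counter
```

Re-run the first command after pinging from a downstream qube; if the counter stays at zero, the traffic never reaches this chain.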

Done this. No effect.

Did you try with TTL 64?

sudo nft add chain ip qubes mangle_forward '{ type filter hook forward priority mangle; policy accept; }'
sudo nft flush chain ip qubes mangle_forward
sudo nft add rule ip qubes mangle_forward oifname eth0 ip ttl set 64

I’ve done that. No difference. If I pass through a VPN in sys-net, I can get an internet connection. However, I cannot resolve domain names anywhere other than through Whonix, where domains are resolved via Tor. Setting resolv.conf to open resolvers in sys-net and other qubes like personal has no effect.

How did you set up the VPN?
Are you able to access domains when you run this while the VPN is active?

/usr/lib/qubes/qubes-setup-dnat-to-ns
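That script regenerates the DNAT rules that redirect the Qubes virtual DNS addresses to the uplink’s real resolvers. A rough way to check the result (a sketch; 10.139.1.1 is the first virtual DNS address Qubes hands to downstream qubes, and the exact chain layout may differ between releases):

```shell
# In sys-net: regenerate the DNS DNAT rules from the current resolv.conf
sudo /usr/lib/qubes/qubes-setup-dnat-to-ns

# Inspect the resulting rules (look for the dnat targets)
sudo nft list table ip qubes | grep -B2 -A2 dnat

# In a downstream qube: query the virtual DNS address directly
nslookup example.com 10.139.1.1
```

If the nslookup succeeds but normal resolution still fails, the qube’s /etc/resolv.conf is the next thing to check.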

I had it running Mullvad within sys-net. After uninstalling and reinstalling it with the same config, DNS resolution works. I don’t know why.

Since you can access the Internet with Mullvad, you should create a new qube for it instead of running it on sys-net and then test if you can still reach the outside.

You can follow one of them depending on whether you want to use the application or not:


In that case I’d run wireshark in sys-net and compare the packets coming out of eth0 when you run ping -c 1 9.9.9.9 from sys-net and from sys-firewall, to see what the difference is.
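If Wireshark feels heavy, tcpdump in sys-net shows the TTL of each outgoing echo request directly (a sketch, assuming the uplink interface is eth0):

```shell
# Print the IP header, including TTL, of ICMP packets leaving eth0
sudo tcpdump -i eth0 -n -v icmp
```

Run `ping -c 1 9.9.9.9` once from sys-net and once from sys-firewall and compare the `ttl` field in the two captures; a TTL of 64 versus 63 would explain how the provider tells them apart.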

Your network provider could also be changing the TTL of the packets incoming to your sys-net to 1, so that sys-net drops them before forwarding them to sys-firewall. You can also try changing the TTL of the incoming packets in sys-net as a test:

sudo nft add chain ip qubes mangle_forward '{ type filter hook forward priority mangle; policy accept; }'
sudo nft flush chain ip qubes mangle_forward
sudo nft add rule ip qubes mangle_forward oifname eth0 ip ttl set 64
sudo nft add rule ip qubes mangle_forward oifgroup 2 ip ttl set 64