[qubes-users] Update Killed All Non-Tor Internet Access!

Just some feedback: that workaround works great :)


Wired networking suddenly stopped working, but I did NOT recently update the
system. I tried another Ethernet cable (which works with another computer). I
also booted another live Linux distribution to check whether it is a hardware
problem, and networking works fine there.

[user@netvm ~]$ ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 0 (Local Loopback)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vif2.0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.137.1.1 netmask 255.255.255.255 broadcast 0.0.0.0
        inet6 fe80::fcff:ffff:feff:ffff prefixlen 64 scopeid 0x20<link>
        ether fe:ff:ff:ff:ff:ff txqueuelen 32 (Ethernet)
        RX packets 58 bytes 5333 (5.2 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 53 bytes 5753 (5.6 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[user@netvm ~]$ ping google.com
ping: unknown host google.com
[user@netvm ~]$

Any idea?
Best
Franz

What does ifconfig -a show?
I can't see eth0 above, so this could be a network driver issue.
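
A quick way to check that in the netvm (just a sketch; the grep patterns are only examples):

# list all interfaces, including ones that are down
ifconfig -a
# look for NIC driver/firmware messages in the kernel log
dmesg | grep -i -e eth -e firmware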


ifconfig -a gives exactly the same. No mention of eth0.

Do I have to reinstall Qubes?
Best
Franz

Check if you still have a network device assigned to your netvm (in Qubes
Manager or via qvm-prefs/qvm-pci). If you have, check the kernel messages
(dmesg) in the netvm.
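
For example, from dom0 (a sketch assuming the R2-era qvm-pci syntax; the VM name "netvm" and the BDF address below are placeholders):

# list the PCI devices currently assigned to the netvm
qvm-pci -l netvm
# find the Ethernet/network controllers' BDF addresses
lspci | grep -i -e ethernet -e network
# assign one to the netvm (00:19.0 is only an example address)
qvm-pci -a netvm 00:19.0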

Yes, Qubes networking relies on NetworkManager dispatch scripts, which are
currently broken in Fedora. Related bugreport:
https://bugzilla.redhat.com/show_bug.cgi?id=974811

You need to run 'systemctl enable NetworkManager-dispatcher.service' in
template (then reboot).
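
A minimal sketch of the whole sequence (the template name is an assumption; yours may differ):

# In the TemplateVM (e.g. fedora-18-x64):
sudo systemctl enable NetworkManager-dispatcher.service
# Then shut down the template and restart the netvm so it picks up the change.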

Just wanted to summarize my own experience with working around this bug, in
case it's helpful to others:

Symptoms:
* appVMs (e.g. work) would just time out on any network access, failing to
resolve hostnames via DNS
* running dig in an appVM would fail the same way (e.g. dig www.ucla.edu gets
no nameserver response)
* running the same dig command in netvm works!
* netvm is able to ping successfully (e.g. ping www.ucla.edu)
* other appVMs are able to ping successfully if given the IP address (e.g.
ping 128.97.27.37)
* netvm and other appVMs show quite different nameserver IPs in
/etc/resolv.conf, e.g. netvm:
[user@netvm ~]$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.1.1

[user@work ~]$ cat /etc/resolv.conf
nameserver 10.137.2.1
nameserver 10.137.2.254

* other appVMs can dig successfully if given an explicit DNS server IP address
(e.g. dig @192.168.1.1 www.ucla.edu)

* running Marek's test query in netvm shows no mapping of "bad" nameserver
address to "good" nameserver address
(sudo iptables -t nat -nvL PR-QBS)

* running Marek's work-around command in netvm indeed adds nameserver
mapping
[user@netvm ~]$ sudo sh /usr/lib/qubes/qubes_setup_dnat_to_ns
[user@netvm ~]$ sudo iptables -t nat -nvL PR-QBS
Chain PR-QBS (1 references)
 pkts bytes target prot opt in out  source     destination
  201 13265 DNAT   udp  --  *  *    0.0.0.0/0  10.137.1.1  udp dpt:53 to:192.168.1.1

* other appVMs can now resolve hostnames successfully (e.g. dig
www.ucla.edu).

Success! Presumably you have to re-run the workaround if you reboot netvm.
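
If you do have to re-run it, a tiny sketch like this (using only the commands quoted above) would re-apply the workaround after a netvm restart, but only when needed:

#!/bin/sh
# Re-create the PR-QBS DNAT rules from /etc/resolv.conf if they are missing
if ! sudo iptables -t nat -nL PR-QBS | grep -q DNAT; then
    sudo sh /usr/lib/qubes/qubes_setup_dnat_to_ns
fi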

Not sure if you caught Marek's earlier message in this thread in which he stated this, but if you run

sudo systemctl enable NetworkManager-dispatcher.service

in the TemplateVM upon which your netvm is based, then you should not have to re-run the workaround fix above after each netvm reboot.

I'm curious how Fedora can be rolling out such buggy updates to production
F18 users?! Is the problem that this bug would affect almost no one except
Qubes users, with this complicated netvm/appVM separation?

Yes, I wonder about that too. I suspect it must be something like that.

Failed to issue method call: No such file or directory

Best
Franz

No, I do not have any network device assigned to my netvm.
Best
Franz

So... assign some (especially the ones you want to use) :)

IT WORKS MAREK IT WORKS!!
I just assigned the Ethernet controller and network controller, rebooted the
machine, and it works.

Many thanks. Well, I'll need to understand how this bitcoin thing works so I
can send some bitcoin to you.
Best
Franz

Marek Marczykowski-Górecki:

Yes, Qubes networking relies on NetworkManager dispatch scripts, which are
currently broken in Fedora. Related bugreport:
https://bugzilla.redhat.com/show_bug.cgi?id=974811

You need to run 'systemctl enable NetworkManager-dispatcher.service' in
template (then reboot).

Just ran into this bug. Running that systemctl command fixed it for me.
Thanks.

~abel

I'm now experiencing an annoying variant of this bug: now even the netvm fails to resolve names, off a standard WiFi connection. It can ping by IP address just fine, but no DNS. WiFi connections have always worked automatically in the past, so this seems like I'm being bitten by the same NetworkManager bug in a new way. Anyone have any advice?

Neither of Marek's workarounds solves this.

$%&#ing Fedora "updates"!

I'm not really sure if this is the same issue, but I was having DNS problems too. I'm not exactly sure whether it is supposed to look like this:

[root@netvm ~]# iptables -t nat -nvL PR-QBS
Chain PR-QBS (1 references)
  pkts bytes target prot opt in out source destination
     0 0 DNAT udp -- * * 0.0.0.0/0 10.137.1.1 udp dpt:53 to:10.137.2.1
     0 0 DNAT udp -- * * 0.0.0.0/0 10.137.1.254 udp dpt:53 to:10.137.2.254

The DNAT to 10.137.2.x is my firewallvm. I'm not sure if there is a bug here, but it doesn't make sense to me (yet) why netvm would use the firewallvm for DNS resolution.

My fix was to run the following in the netvm, which resolves the DNS issue:

# Use Google DNS. $DNS1/$DNS2 hold the Qubes virtual DNS addresses
# (e.g. 10.137.1.1 and 10.137.1.254, the dpt:53 destinations shown above);
# -I inserts these rules ahead of the stale DNAT entries in PR-QBS.
sudo iptables -t nat -I PR-QBS -p udp -d $DNS1 --dport 53 -j DNAT --to 8.8.8.8
sudo iptables -t nat -I PR-QBS -p udp -d $DNS2 --dport 53 -j DNAT --to 8.8.4.4

Also, of note: I added the above commands to rc.local in my netvm, but it appears that rc.local runs before the firewall is configured in the netvm, so these rules get overwritten. For now I run the script manually after each reboot. Should qubes_firewall_user_script be executed in non-proxy VMs? It doesn't appear to be run in the netvm. I think this would be a worthwhile addition, since it would allow iptables rule customizations that persist across reboots.

TJ.

I am very surprised that one of the major distros has not fixed, after many
weeks, a regression bug which breaks networking on a normal update.

What's the protocol to light a fire under this?

CB


I think this is the same bug - the Qubes script isn't called by NetworkManager,
so the PR-QBS rules aren't regenerated when the entries in /etc/resolv.conf are
set. Have you tried "sudo systemctl enable NetworkManager-dispatcher.service"
in the template?

The strange entries above come from the fact that at netvm startup
/etc/resolv.conf is still in the same state as in the templatevm (which sends
DNS to the firewallvm), so the initial firewall rules point at the firewallvm.


You can also regenerate DNAT rules based on /etc/resolv.conf by "sudo sh
/usr/lib/qubes/qubes_setup_dnat_to_ns".
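
Conceptually, that regeneration does something like this (a hypothetical sketch, not the actual /usr/lib/qubes/qubes_setup_dnat_to_ns; 10.137.1.1 is an example virtual DNS address):

# Flush the stale rules, then DNAT the virtual DNS address to the first
# real nameserver currently listed in /etc/resolv.conf
REAL_NS=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
iptables -t nat -F PR-QBS
iptables -t nat -A PR-QBS -p udp -d 10.137.1.1 --dport 53 -j DNAT --to $REAL_NS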


Yes, enabling NetworkManager-dispatcher.service in the template does in fact resolve the DNS issues. Qubes prerouting now looks like this:

[root@netvm config]# iptables -t nat -nvL PR-QBS
Chain PR-QBS (1 references)
  pkts bytes target prot opt in out source destination
    10 678 DNAT udp -- * * 0.0.0.0/0 10.137.1.1 udp dpt:53 to:10.0.1.1

However, I still have the issue that there is no good place to add custom firewall rules in a netvm, such as forwarding services from the outside world to a particular appvm. Any iptables rules set in rc.local are overwritten by the Qubes scripts when a connection to the network is established. Would calling qubes_firewall_user_script in non-proxy VMs be considered? That would be a logical location for user customization of the firewall. Even in an appvm this would be beneficial (e.g. to accept the incoming services forwarded there).

Some time ago there was a thread about external services on qubes-devel.

For now, you can add your own scripts to /etc/NetworkManager/dispatcher.d
(they are called after a network connection is established). NetworkManager is
active only in the netvm. If you do not want to place them in the template, you
can store them in /rw/config and copy/symlink them into /etc from rc.local.
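
For instance, a hypothetical dispatcher hook (the file name and the rule below are placeholders) could re-add custom rules every time a connection comes up:

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/99-user-rules (example name; must be executable)
# NetworkManager passes the interface as $1 and the action as $2
if [ "$2" = "up" ]; then
    # e.g. re-insert a custom DNAT rule that would otherwise be overwritten
    iptables -t nat -I PR-QBS -p udp -d 10.137.1.1 --dport 53 -j DNAT --to 8.8.8.8
fi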

I know this is an old thread, but it's still relevant! I tried the
approach you suggested. I wrote an /rw/config/rc.local that looks like
this:

#!/bin/sh
# copy my custom dispatcher script (stored in /rw/config) into place
cp /rw/config/rc.local2 /etc/NetworkManager/dispatcher.d

The script works if I run it manually. The problem seems to be that the
network VM no longer invokes /rw/config/rc.local.

Rob