Which IPs does Qubes 4.0.4 connect to on boot?

Hi. I’m not an IT person, but I did manage to install tcpdump on an OpenWRT router. I’m monitoring the traffic coming out of my Qubes OS machine prior to any browsing or VPN activity. I’m trying to see which IP addresses Qubes OS connects to on boot, in an attempt to detect any anomaly (i.e., a hack). Is such information available, so I can compare what I’m seeing from my laptop against what is supposed to be standard? I assume that VM update verification takes place for the VMs connected to the Internet. Which IPs would those VMs connect to?
For example, I saw that on boot, my Qubes OS 4.0.4 (up to date; running Fedora templates only) connected to the following. Is this considered normal? Thank you.

[edited for privacy]

Network Protocol Destination
IPv4 TCP 93.184.220.
IPv4 TCP 80.67.82.
IPv4 TCP 44.229.115.
IPv4 TCP 34.98.75.
IPv4 TCP 34.117.237.
IPv4 TCP 34.107.221.
IPv4 UDP 255.255.255.
IPv4 UDP 195.186.4.
IPv4 UDP 185.32.222.
IPv4 UDP 130.60.204.
IPv4 TCP 13.224.96.
IPv4 TCP 13.224.96.
IPv4 TCP 13.224.96.
IPv4 TCP 13.224.96.
IPv4 TCP 13.224.96.
IPv6 UDP [2001:8a8:

Did you try to do a reverse lookup? Or should we do it, with a chance of finding either Qubes repos or Tor entry nodes?
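For example, a minimal reverse (PTR) lookup with dig or host (the address below is just a placeholder, not one from your list):

    # Reverse lookup of an IP address; prints the PTR record, if any
    dig -x 93.184.216.34 +short
    # Equivalent with the 'host' utility
    host 93.184.216.34

Many cloud IPs resolve to generic names like ec2-…compute.amazonaws.com, which at least tells you which hosting provider is involved.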

I did extensive research via a search engine on the various IPs that I found. I’m not sure this qualifies as a reverse lookup. They mostly led to Amazon services, but not all of them. Maybe one was the time server. But even in the Amazon case, how can I verify that it’s a legitimate connection and not (however unlikely) an exploit calling home from a server hosted on Amazon?

There is a tutorial for installing Suricata in the Firewall_VM. You can check sbillie’s awesome-security on GitHub. You can also install EtherApe, etc.
Most of the Amazon/Google spying is done at the browser level. What did you expect?

@oijawyun it would be helpful if you could say what qubes you have set
to autostart, and what templates they are using.
Are you using Whonix?


Thank you for your replies. Here are some clarifications and additional questions.
I want to detect an evil maid attack, a BIOS-level infection, or a persistent infection in sys-net. I am using Qubes because “something had happened” in the past with one of my laptops. Installing Suricata in the Firewall_VM would not address my problems.
As for which qubes auto-started, they are: sys-net and sys-firewall (fedora-33-minimal), sys-usb (disconnected from the Internet), and the Disposable VM based on fedora-33 through which I access the router. No Tor or VPN or any other VM. No website is accessed.
Ideally, I would not run the disposable VM based on fedora-33 at all, and would access the logs from another laptop instead. But I don’t know how to do that.
I know that VMs check for updates, and for the clock/time sync. Is there a way to disable these temporarily to reduce the “noise”?

In Global Settings, you can set ClockVM to none, and disable the “Check for updates” option in dom0 and qubes.
You can also manually disable these services (qubes-update-check) with qvm-service.
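For example, from a dom0 terminal (sys-net here is just an example qube name):

    # List the services currently set on a qube
    qvm-service --list sys-net
    # Disable the periodic update check for that qube
    qvm-service --disable sys-net qubes-update-check

Typically the change takes effect the next time the qube starts.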

Thank you, @unman. I disabled the check for updates in dom0 and qubes. I also set the ClockVM to none. I also made sure to disable any communication from my Firefox prior to connecting to the router (no calling back home). No VPN or Whonix was running. See below for some results.

First observation: My laptop still reached out for the clock. At least three of the servers had “ntp” in their names when I looked up the IPs. They were checking the time.

Second observation: My laptop still reached out to amazonaws, Amazon CloudFront, and googleusercontent addresses. I assume these are all legitimate, but could someone confirm this? And how can I “silence” this “noise” further?

These are some results from my experiment:

server-13-224-96-62.zrh50.r.cloudfront.net:443
ec2-44-239-205-250.us-west-2.compute.amazonaws.com:443
82.221.107.34.bc.googleusercontent.com:80
82.221.107.34.bc.googleusercontent.com:80

Finally, the last one is an IPv6 address, and I don’t know how to look up what it’s for.

Thanks everyone for your time and support.

Thanks to content delivery networks (CDNs), mirrors, and DNS CNAME usage, going from IP address to ‘what is the traffic doing’ isn’t straightforward for quick analysis.

The best you can determine without too much investigation is looking at the port numbers. For example 443(TCP) = HTTPS, 80(TCP) = HTTP, 53(UDP) = DNS, 123(UDP) = NTP, 67/68(UDP) = DHCP, 546/547(UDP) = DHCPv6.

If you have access to tcpdump on your router, you’d get more valuable information by monitoring the DNS requests, which would happen over UDP port 53, assuming your router or Qubes machine isn’t doing DNS-over-{HTTPS,TLS}/dnscrypt.

And as you’re on a router, you’d want to focus on DNS requests over the LAN interface (DNS the Qubes machine is doing) vs the WAN interface (DNS traffic to the internet to actually resolve host names). This would allow you to filter in case there are other machines on your LAN.
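A rough sketch of what that looks like on an OpenWRT router (br-lan and eth1 are common default interface names and may differ on your device):

    # DNS queries arriving from the LAN side (i.e. what the Qubes machine asks for)
    tcpdump -ni br-lan udp port 53
    # DNS traffic actually leaving on the WAN side
    tcpdump -ni eth1 udp port 53
    # NTP traffic, for comparison
    tcpdump -ni br-lan udp port 123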

If you’re only worried about your qubes machine, you could run tcpdump within sys-net itself, save that to a file, transfer to another qube, and inspect via wireshark.
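Roughly like this (the interface and file names are examples; check `ip link` in sys-net for the real uplink device, and in a minimal template you may need to install tcpdump first):

    # Inside sys-net: capture the uplink to a file, Ctrl-C to stop
    sudo tcpdump -i ens6 -w /tmp/sysnet.pcap
    # Copy the capture to another qube with the standard Qubes file-copy tool
    qvm-copy /tmp/sysnet.pcap
    # In the receiving qube, the file lands under ~/QubesIncoming/sys-net/
    wireshark ~/QubesIncoming/sys-net/sysnet.pcap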

I’ve been able to get a quiet Qubes machine by:

  1. Disabling dom0 updates in Global Settings. This prevents checking for dom0 updates if sys-firewall is already up.
  2. Hitting ‘Disable checking for updates in all qubes’ in Global Settings. This prevents checking for updates 5 minutes after any qube boots. Alternatively, make sure qubes-update-check service is unchecked for running qubes.
  3. Setting ClockVM to ‘None’ in Global Settings. This prevents NTP traffic in the chosen qube.
  4. In Fedora templates, disabling the unbound-anchor service (sudo systemctl disable unbound-anchor.timer). This prevents obtaining DNS root anchor keys at boot.
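For step 4, this is roughly what it looks like inside the template (fedora-33 as an example name):

    # Inside the Fedora template
    sudo systemctl disable unbound-anchor.timer
    systemctl status unbound-anchor.timer   # 'Loaded: ...; disabled' confirms it
    # Shut the template down afterwards, then restart qubes based on it
    # so they pick up the change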

Of course this comes at the cost of:

  1. Performing updates manually. Don’t hack me please.
  2. Synchronizing time occasionally.

I do this not for privacy or trying to be incognito, but to reduce data usage.

Thank you @icequbes1.

Upon completing the steps that you mentioned to get a quiet Qubes machine, and sshing into the router, I ran tcpdump on the specific interface. I need to learn how to work with tcpdump further to be able to retrieve meaningful data. I will pursue my research (including how to move the tcpdump results from the router to the computer. Does anyone know how to do that? 🙂)
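One common approach, assuming SSH access to the router from a qube (the IP and interface below are placeholders), is to have tcpdump write to stdout on the router and redirect it on the local side:

    # Run tcpdump on the router but save the capture on the local machine
    ssh root@192.168.1.1 "tcpdump -ni br-lan -U -w -" > router.pcap
    # router.pcap can then be opened in Wireshark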

However, I have one observation: I still saw multiple connections to NTP servers / domains / IPs. Could there be a problem with Qubes not being able to stop the clock check?

Other “greats” and I recommend that you start with “Qubes 4 Dummies”…

You can search for awesome-security on GitHub to see more techniques.

Keep in mind that Qubes doesn’t really do anything to stop the OS running within each qube from performing any of its “normal” activity. For example, if a standard, bare-metal Fedora installation would automatically try to connect to certain servers, then it will probably still do that when running inside a qube, unless firewall rules or NetVM settings prevent it. Since you’re likely running several qubes simultaneously, I don’t think it’s terribly useful to look at all of the traffic coming out of Qubes OS as a whole to try to judge whether anything undesirable is happening in any of your qubes.


Agree with @adw.

At a high level, NTP traffic will come out of a stock Fedora qube if the qube has the clocksync service enabled when the qube starts. If the qube is also set as the ClockVM, dom0 and other qubes will poll it for the current time every so often.

I recommended monitoring DNS requests because the NTP server names being pulled usually indicate the operating system. For example, a Fedora qube will find servers using [X].fedora.pool.ntp.org host names. If you observed those DNS requests on your router, at least you’d know there’s a Fedora machine (or qube) somewhere.

In addition, it also matters how you’re executing your testing. For example, if you’ve had qube A set as your ClockVM, then change your ClockVM to (none), qube A will continue to attempt to synchronize with NTP servers until qube A is shut down or restarted.
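To check and change that from dom0 (qube A is named ‘work’ here as an example):

    # Show which services are set on the qube; look for 'clocksync'
    qvm-service --list work
    # Disable it, then restart the qube so the change actually takes effect
    qvm-service --disable work clocksync
    qvm-shutdown --wait work
    qvm-start work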

Note, too, that your router might also be communicating with the outside world. So, depending on which interface you attached tcpdump to, some of the traffic might not be from your Qubes host at all. Tracking traffic through the net VM would identify everything coming from other VMs. So, you only need to look at the difference.

Data leaks | Qubes OS - did you know that the documentation already has a page for this topic?

Thanks everyone for your replies and insights. I will continue “testing” and researching how to do this properly, acknowledging that it might be impossible to detect anomalies as per @adw and @icequbes1 comments.

But this brings me to another question.

Several posts on this forum ask for deeper security exploration which may be beyond the scope of the Qubes OS project. Here are two recent examples:

more-practical-security-for-qubes-and-more-realistic-threat-model

my-personal-experience-of-attempt-to-harden-qubes-vm

I approve of the Qubes OS default configuration. Qubes OS is very, very usable while being very reasonably secure.

However, evil maid attacks or hacking through physical access remain a major problem. A state-actor evil maid, a major cryptocurrency-theft evil maid, or a business-intelligence evil maid are “normal” threats for many; that’s why people choose to use Qubes OS. Anti Evil Maid is not an option for most (no TPM 1.2; no hardware readily available for an Anti Evil Maid setup). So I wonder if it’s possible to develop a guide for manually verifying whether a Qubes OS machine is contacting IPs that should be cause for concern. Just putting the question out there.

Thank you again for your replies and for this community.


Bear in mind that an Evil Maid attack does not necessarily involve any kind of network access. In fact, if I recall correctly, the original description of the attack involved only a removable storage device and was, in that sense, a completely “offline” attack.

I’m skeptical about the IP-address-based proposal, because it should be fairly trivial for a sophisticated adversary to exfiltrate data to ostensibly innocuous – or at least ambiguous – IP addresses (e.g., AWS servers).