Resetting hardware state after a nation-state-level attack

Imagine that your system was compromised through a qube via the network connection. Somehow, that compromise affected all booted qubes sharing the same IP.
Dom0 boot hashes are also throwing compromise warnings.

Somehow dom0 was compromised through a network-attached qube, through an update, or through another vector I haven’t considered.

The machine was never physically accessed. The malware came through the network.

Okay, with that in mind, if you’re resetting the machine and ensuring it doesn’t get re-infected, what steps would you have to take?

Would all USB devices and/or external drives be considered compromised?
Is there anything that wouldn’t be removed by flashing the BIOS, formatting the drive, and reinstalling the OS?

If data needs to be recovered from a potentially compromised qube or drive, how would you do it without re-infecting the device?

Lastly, what network monitor would you keep running in the background to give early warnings of any potential unauthorized data leak?

I would not trust the machine.
It would have to go.

There are examples of malware attacks on BIOS chips and firmware,
affecting various components.
In this scenario anything is possible, and nothing can be trusted.

I would not trust that drive.
There are methods to mitigate the risk from the drive by buffering, etc.,
and it is relatively easy to get data from a Qubes disk, but it would
be better not to do so.
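If you do have to pull data off, one buffered approach (a sketch of my own, not a statement of unman's method — the qube names, device names, and attach-option syntax are from memory and should be verified) is to attach the suspect drive read-only to a fresh disposable qube and copy out only the data files you need, treating everything extracted as untrusted:

```shell
# In dom0: list block devices exposed by the qube holding the drive (e.g. sys-usb)
qvm-block list

# Attach the suspect partition read-only to a fresh disposable qube
# ("disp1234" and "sys-usb:sda1" are example names -- substitute your own)
qvm-block attach -o read-only=true disp1234 sys-usb:sda1

# Inside the disposable: mount read-only and copy out only what you need
# (/dev/xvdi is the usual frontend device name; check lsblk to be sure)
sudo mount -o ro /dev/xvdi /mnt
qvm-copy /mnt/Documents/needed-file.txt
```

The disposable qube acts as the buffer: once the files are copied out (each transfer confirmed through dom0), the disposable is destroyed along with anything that may have run in it.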

Do you have backups? Can they be trusted?
Was the backup medium accessed after the compromise?
Was there a single compromise?

This is a rabbit hole you can go down as deep as you will.

A network monitor can be placed at any gateway point to detect data
leaks, but it will be limited in scope. What you use will be determined
by your capabilities.
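As one minimal illustration of gateway monitoring (a sketch, not a recommendation of any particular tool — the interface name and file paths are assumptions): tcpdump running in the gateway qube can record new outbound connection attempts for later review.

```shell
# Inside the gateway qube (e.g. sys-firewall): capture only new outbound TCP
# connection attempts (SYN set, ACK clear), rotating the capture file hourly
sudo tcpdump -i eth0 -n \
    'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0' \
    -G 3600 -w '/tmp/outbound-%s.pcap'
```

You would then review the captures with `tcpdump -n -r <file>` and look for destinations you do not recognize — which only works if you have a clear picture of what "normal" looks like.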
You can install a host based IDS, but you will need to disable
passwordless root (obviously), and deal with multiple warning streams.
It isn’t easy and is not suitable for most users.
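For example, a file-integrity checker like AIDE is one common host-based option (a sketch assuming a Debian-based template; AIDE is my example, not the only choice):

```shell
# In the template: install AIDE, a file-integrity-checking host IDS
sudo apt install aide

# Build the baseline database of file hashes and attributes
sudo aideinit

# Later: compare the filesystem against the baseline and review the report
sudo aide --check
```

In Qubes, keep the template/app-qube split in mind: a baseline built in the template will not cover per-qube changes under /rw, and every template update invalidates the baseline — which is part of why the warning streams become hard to manage.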

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.

Well, in this part of the world, I won’t have access to a new machine anytime soon.
I have to do my best to reset this one, until I can get to ‘civilization’.

I have backups, and a relatively accurate time of the compromise.

I plan on segmenting qubes more tightly, and fire-walling them to only the IPs they need for the software in the qube. Basically I’m going to lock down all outputs.
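In dom0, that kind of lock-down can be expressed per qube with `qvm-firewall`. A sketch from memory of the Qubes 4.x rule syntax — "work", the address, and the rule index are placeholders, so verify against `qvm-firewall --help` before relying on it:

```shell
# Inspect the current rules, then remove the default accept-all rule
qvm-firewall work list
qvm-firewall work del --rule-no 0

# Allow only the specific endpoint the qube's software needs
qvm-firewall work add accept dsthost=203.0.113.10 proto=tcp dstports=443
qvm-firewall work add accept specialtarget=dns   # only if the app resolves names

# Everything else is dropped
qvm-firewall work add drop
```

Note that these rules are enforced in the qube's netvm, not in the qube itself, which is what makes them useful against a compromised qube.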

What I need right now is a relatively simple way to monitor outgoing traffic, and give me early warnings if data is leaking.

What are the pros using?



Always good to do.
You can modify the nftable for OUTPUT in the qube, as well as securing
at the netvm level.
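A minimal in-qube sketch of such an OUTPUT restriction with nftables (the table name, address, and port are placeholders of mine; to survive qube restarts the commands would need to go into /rw/config/rc.local):

```shell
# Create a dedicated table so the Qubes-managed rules are left untouched
nft add table inet lockdown

# Default-deny all locally generated traffic...
nft add chain inet lockdown output \
    '{ type filter hook output priority 0 ; policy drop ; }'

# ...then allow replies, one specific endpoint, and DNS if needed
nft add rule inet lockdown output ct state established,related accept
nft add rule inet lockdown output ip daddr 203.0.113.10 tcp dport 443 accept
nft add rule inet lockdown output udp dport 53 accept
```

Remember that rules inside the qube only raise the bar: a compromised qube can alter its own firewall, which is why the netvm-level rules matter more.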

There is no easy way to do this - you need to monitor before each
gateway. If you use a VPN or Tor, you must take this into account.
After each netvm, traffic will appear to originate from that netvm,
so you cannot rely on the originating IP.

If you are serious about this, then restrict your use of Qubes to
specific addresses. (I mean, do not use it for general activities.) That
will enable you to simply observe data leaks, if you trust the chosen
end points.
More likely you want to use Qubes for general use, like browsing, at the
same time as using it for specific activities. This makes it more
difficult to monitor and secure outgoing traffic. (How will you know
that an attacker has not compromised a qube, and is exfiltrating data
via another qube, hidden in ordinary browsing activity?)

Depending on your assessment of your needs and the threat, you will act accordingly.

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.

Agree with @unman here, import a replacement and move on.

Before you do that, hunt :bow_and_arrow: for your implant :face_in_clouds:, might be some $$$ :money_mouth_face: to help fund your replacement.

AFTER you find your implant and/or actual evidence of said alleged breach, then and only then can your paranoia be confirmed and a worthwhile forward direction be found. Anything else is merely “pissing in the wind”.

Re: paranoia

“Just because you’re not paranoid, this doesn’t mean they’re not watching you.”

“Pros”? If you mean corpo world, changes every year so, ask Gartner :poop: :clown_face: :poop:. More than likely, these tools are outside of both your budget & wheelhouse.

Product != help you.

An OODA loop of: Planning → Process → Post-Incident Learning → Prevention, will.

To begin analyzing your traffic, installing OpenSnitch on the hosts you imagine to be compromised would be a good start, as it has a friendly GUI and will ask you to manually create an allow rule for any packets looking to leave your node.
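A rough install sketch for a Debian-based template — package names and availability vary by release, so this is an assumption; the upstream project at github.com/evilsocket/opensnitch ships .deb packages if your distro doesn't carry them:

```shell
# In the template the suspect qube is based on:
sudo apt install opensnitch python3-opensnitch-ui

# Enable the daemon; the UI then prompts per-connection in the running qube
sudo systemctl enable --now opensnitchd
```

In Qubes, the install belongs in the template, while the daemon and its prompts run in the app qube based on it.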

“Globally” (traffic from all qubes), it would be worth taking a look at/tuning/using the Suricata based sys-ips by @Sname.

off-topic banter

My wheelhouse is “plastic”… so where/what do I need to learn to start?

off-topic banter

Apologies for the slang (still learning). Your wheelhouse (3.b) is “plastic” (1)? Or, your budget is “plastic” (2)?

That depends on your starting point. What’s your current skill-set? Prefer packets, or big on binaries? There’s LOTS of free training available these days (LMDDGTFY) & LOADS of conferences & white-papers to lean on as well.

That said, nothing trumps experience, gained or bought.

FWIW, paranoia is NOT a characteristic of sound security practice/posture, it’s the result of one.

Re: Your "hunt"

Define this (for yourself) → Find out the most common APT operators in said region → Search for their IoCs and TTPs → Identify them on your system(s).

I would want to know how it happened before I could begin to trust the machine again.


I had several qubes running. The qubes that were running through a specific VPN → VPN → Tor chain were all taken down. On rebooting those qubes, the software requested re-entering the creds (a keylogger attack).

Rebooting the root machine, the bootloader failed hash verification. How did they get access to the bootloader through a compromised qube?

The qubes on other IPs were unaffected. The attack started with an IP that was most likely connected to a comms point that is less than kosher with the nation state in question. There was no profit motive as near as I can tell. Doxing, I suspect, was the motive.

Moderation note

Hi @Emily, technically this thread of yours is on-topic in User Support and doesn’t have to be in All around Qubes. Let me know if you want me to move it there or leave it here. The search engines won’t pick it up here, but I don’t think this gives any protection.

The qubes running Tor/VPN or the qubes using a Tor/VPN VM for network access?

Maybe they managed to break out of the VM?

Answer to moderation note

Okay, if that’s a more appropriate category.

Root network is sys-net. That then chains with sys-vpn1. That chains to sys-vpn2. That chains to sys-whonix.
Edit: sys-net → sys-firewall (but you knew that.)
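For reference, a chain like that is wired up in dom0 by pointing each qube's netvm at the previous hop (the app-qube name at the end is a placeholder):

```shell
qvm-prefs sys-firewall netvm sys-net
qvm-prefs sys-vpn1     netvm sys-firewall
qvm-prefs sys-vpn2     netvm sys-vpn1
qvm-prefs sys-whonix   netvm sys-vpn2

# App qubes then attach to the end of the chain, e.g.:
qvm-prefs anon-work netvm sys-whonix
```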

Re: dom0: Well, the boot loader hashes changed after the exploit. There is a possibility I ran a dom0 update in the background and just forgot to note it. Other than an update, or an update attack of some kind, I have no clue as to how they broke out of the VM.

Perhaps they didn’t, and just targeted all the VMs connected to that IP.

The common denominator is that the compromised qubes shared the same IP, were running, and all had Flatpak apps.

There might have been a compromise through Flatpak… that never escaped the VM sandbox. The attacker might just have targeted machines connected to that IP (a Tor exit node) with a Flatpak fingerprint.

I’m going to test both these solutions.
Just a quick question:
If OpenSnitch is installed in sys-firewall, that would cover all traffic through all qubes, right?
Would doing that potentially open a security hold via an OpenSnitch exploit?

And you couldn’t use something like sudo netstat -tupa to see which process was connected to that IP address?
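For what it's worth, `netstat` is deprecated on many modern distros; `ss` from iproute2 gives the same per-process view (the address below is a placeholder):

```shell
# Established connections with the owning process (run as root to see all users)
ss -tupn state established

# Narrow the output to a single suspect destination address
ss -tupn state established dst 203.0.113.5
```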

Yeah. But did those 3 crash (at the same time) or did a VM (or more) crash which was connected to one of those?

You could also break out of the VM by Xen/Qubes/RAM/CPU bugs. No need for an update attack. But of course it still could be an update attack.


IMO, this is a bit insane for a “typical” (non-honeypot) computing scenario; at a minimum, how is sys-firewall not in there?

Something like:

Or, something like:


Architecture for a honeypot might look like this:

Do you realize how this architecture design impacts your security “goals”? Unless you’ve rebuilt a custom sys-net, sys-net has been built using the default, full/desktop version of either Debian or Fedora and is connected to your LAN uplink (no idea what protection this may or may not provide), which seems to be the case for many happy Qubes users. However, when you couple this with the architecture you used, you potentially open easily avoidable attack vectors.


Let’s say you’re browsing the dodgiest of dodgy sites (because you’re using tor and that’s all it’s good for :roll_eyes:) and somehow, someway, some type of RCE or client-side + LPE exploit-chain exists in any of the applications/services which you are using/may be running/connecting via with a full distro …

With this, an adversary may want to begin running a service of their own on port 31337 on your node. At a minimum, this service would be easily accessible from your LAN (the network your sys-net connects to as an uplink). Should your adversary be of “boogie man” status and they were able to traverse sys-whonix → sys-vpn2 → sys-vpn1, how LOVELY of you to have “left the door open”!

Whereas, even adding sys-firewall (simple L3) with default deny rule implemented would at a minimum protect you from an attacker on your LAN accessing the listener on port 31337. That said, NOTHING is “safe”, until you make it so to your standard.
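A minimal sketch of that default-deny, again in a separate nftables table so the Qubes-managed chains are left alone (hook priorities and Qubes' own rules differ between releases, so test carefully):

```shell
# Inside sys-firewall: drop unsolicited traffic addressed to the qube itself.
# Forwarded traffic from downstream qubes uses the forward hook, not input,
# so it is unaffected by this chain.
nft add table inet harden
nft add chain inet harden input \
    '{ type filter hook input priority 0 ; policy drop ; }'
nft add rule inet harden input ct state established,related accept
nft add rule inet harden input iifname "lo" accept
```

With a drop policy on input, a rogue listener on port 31337 simply never sees the LAN-side connection attempt.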

Even users who chose to create/use a sys-pihole would have had to defend against vulns like this:

It could, should it be applied as the NetVM Qubes-wide, but this was not the intent of my suggestion. In its default configuration you’d lose the fine-grained traffic control/identification which OpenSnitch provides and which you’re seeking. I made some progress with a “proper” sys-OpenSnitch but it’s nowhere near ready for prime-time. Besides, nobody tests/uses any of my other salt stuff. :person_shrugging:

To be clear, you would want to identify any abnormal traffic flowing in/out of your questionable hosts. To do this, you would want to install the quasi-L7 OpenSnitch firewall directly on the host. Yes, writing to this system is forensically unsound, so if you plan to perform forensics on the disc, be sure to make a forensically sound backup of the system/disc in question first.

What’s a “security hold”?


It’s not very likely that an update attack would allow you to break out of the VM. It can be used as the initial attack vector, but you would still need a Xen or CPU exploit to break out of the VM.
