Anti evil maid? coreboot? heads?

When I read the forums here, I often see the words anti evil maid, coreboot, and heads, but even after searching on Google, I can’t really figure out what they mean.
Could you please explain it in a way that even someone who is not familiar with computers can understand?

If you open ChatGPT and type: “explain meaning of the words anti evil maid, coreboot, and heads for a newbie” - you will get a nice short summary. :+1:

2 Likes

Here is an article explaining anti evil maid in detail: Anti Evil Maid | The Invisible Things
Anti evil maid is basically used to prevent someone who has physical access to your computer from compromising the integrity of the data on your computer. Anti evil maid does this by using another device to verify the integrity of dom0 (the core of Qubes).
You can also read more about anti evil maid here: Anti evil maid (AEM) | Qubes OS

Coreboot is free, open-source, and secure BIOS firmware that replaces the usually proprietary stock firmware, which is generally seen as less secure.

2 Likes

HEADS isn't self-explanatory, and you would have to search for "HEADS BIOS" to avoid getting results about anatomy. AEM and coreboot are easily searchable. What is important about these is that there are deeper parts to a computer than the OS (operating system). These deeper parts also have security issues.

2 Likes

Hi @regi, let me try:

AEM: uses Intel TXT (which in turn relies on the Intel ME) and your TPM chip to verify that the files your computer used to boot haven't changed. This is done by showing a secret message on screen that you have to look for every time you boot; it only appears if those files are unmodified.

Coreboot: is an open-source implementation of a BIOS that works for many computers. This is attractive because its code can be audited and it greatly reduces the amount of so-called "blobs" (proprietary binaries that cannot be audited).

Heads: is a so-called payload for Coreboot. It boots into a tiny Linux kernel and uses your TPM chip to verify that the files your computer uses to boot have not been modified, in addition to checking itself. This is done by having you personally sign each and every change with a private key that only you possess and that is stored on an external USB device (like a Nitrokey, YubiKey or any other GPG-compatible HSM). As a result you can be sure no tampering with your boot files goes unnoticed.
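To make "verify that the boot files haven't changed" a bit more concrete, here is a minimal Python sketch of the measurement idea both AEM and Heads rely on. The data is toy data and a real TPM does the extending in hardware; Heads additionally checks GPG signatures, which is not shown here:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # A TPM PCR can only be *extended*: new = SHA-256(old || measurement).
    # It can never be set to an arbitrary value, so the final value encodes
    # every measurement that was made, in order.
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot_chain(components: list[bytes]) -> bytes:
    pcr = b"\x00" * 32                      # PCRs start out as all zeroes
    for blob in components:                 # e.g. kernel, initramfs, grub.cfg
        pcr = pcr_extend(pcr, hashlib.sha256(blob).digest())
    return pcr

# Toy stand-ins for /boot contents (illustration only).
good = measure_boot_chain([b"kernel v1", b"initramfs v1", b"grub.cfg v1"])
evil = measure_boot_chain([b"kernel v1", b"initramfs TAMPERED", b"grub.cfg v1"])
assert good != evil   # any change anywhere in the chain changes the final value
```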

The core differences:

  • AEM depends on Intel ME and will work on almost all modern Intel-based computers
  • Coreboot supports a smallish subset of modern computers. In addition to the FOSS advantage, this also allows you in most cases to disable the Intel ME and in some rare cases to even remove most of it.
  • Heads supports an even smaller subset of modern computers but could be considered the gold standard in tamper detection without relying on Intel ME

Intel ME is a proprietary solution that runs "below" your CPU and is invisible to it. Therefore no intrusion detection system or virus scanner can see what it is doing. This, combined with the fact that it is complex proprietary code, makes it a possible attack surface. Most people don't need to worry about it, because such an exploit would be much too valuable to use on a random person. However, targeted persons should consider this issue and make an informed decision about which residual risk is acceptable to them.

I kept this intentionally high-level and left out some interesting but secondary facts. The aim was not to provide a complete or 100% correct summary but a conversational overview that gives the basic idea to an uninitiated person. Now, fire away and dismantle this post :wink:

7 Likes

AEM depends on Intel ME and will work on almost all modern Intel-based computers

AEM only supports TPM 1.2, which leaves out a lot of newer and not-so-new machines.

Most people don’t need to worry about it, because such an exploit would be much too valuable to use on a random person.

The bar is getting lower

As a result you can be sure no tampering with your boot files goes unnoticed.

Depends on what you mean by 'boot files'. / isn't RO with Heads, so an attacker who escapes Xen can persist by modifying any services that run during dom0 boot.

1 Like

Here’s a ChatGPT4 answer

Sure, let’s break down these terms:

1. Anti Evil Maid (AEM)

Anti Evil Maid is a security feature designed to protect a computer from “evil maid” attacks. An “evil maid” attack involves an attacker who has physical access to a target computer (imagine a malicious hotel maid, hence the name). The attacker could tamper with the device to, for example, install malware or compromise the system in other ways.

  • Purpose: AEM aims to ensure that the system hasn’t been tampered with by verifying the integrity of the boot process.
  • How it Works: It typically uses Trusted Platform Module (TPM) technology to securely store cryptographic keys and to check the integrity of the boot process, ensuring that no unauthorized changes have been made to the system’s firmware or boot loader.

2. Coreboot

Coreboot is an open-source project that replaces the proprietary BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface) firmware found in most computers.

  • Purpose: Coreboot aims to initialize the hardware in a system (like processor, memory, hard drives, etc.) in a speedy and secure manner, and then pass control to a payload, such as a bootloader or an OS kernel.
  • Features:
    • Fast Boot Times: It’s designed to perform only the minimum number of tasks necessary to load and run the next stage of the boot process.
    • Flexibility: It can be paired with various payloads, like SeaBIOS, TianoCore (a UEFI implementation), or a Linux kernel, depending on the user’s need.
    • Security: It can be configured to provide a secure boot environment, ensuring that only signed, trusted firmware and software can be executed during the boot process.

3. Heads

Heads is a firmware framework (an open-source project) that combines the capabilities of Coreboot with security features, providing a secure, somewhat tamper-proof platform.

  • Purpose: Heads aims to ensure the integrity and privacy of a system through hardware and firmware security features.
  • Features:
    • Secure Boot: Heads uses cryptographic signatures to verify the integrity of the firmware and the boot process, ensuring that only trusted, signed code is executed.
    • User Control: Heads puts the user in control of their system’s security, allowing them to manage keys and decide which firmware and software are trusted.
    • Anti Evil Maid (AEM) Mechanism: Heads implements an AEM mechanism, using TPM to ensure that the firmware hasn’t been tampered with between boots.
  • How it Works: Heads will measure (hash and store) the firmware and boot configuration into the TPM. On subsequent boots, it will measure the firmware and configuration again and check it against the stored values, alerting the user if there are any discrepancies, which might indicate tampering.

In summary:

  • Anti Evil Maid is a mechanism to protect against physical tampering attacks.
  • Coreboot is an open-source alternative to proprietary BIOS/UEFI firmware, focusing on speed and flexibility.
  • Heads is a firmware framework that combines Coreboot’s capabilities with additional security features, including an AEM mechanism.

These technologies are often used in the context of securing the boot process of a computer, ensuring that the firmware and boot software have not been tampered with, and that the system boots into a known, secure state.
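If it helps, the signature-checking part that Heads layers on top of those measurements can be pictured with a small Python sketch. The file names are hypothetical and the real checks are done by Heads itself (which calls gpg from its tiny Linux environment); this only shows the shape of the verification:

```python
import subprocess

def verify_boot_manifest(manifest: str, signature: str) -> bool:
    # Returns True only if `signature` is a valid detached GPG signature over
    # `manifest`, made by a key present in the local keyring (i.e. your key).
    result = subprocess.run(["gpg", "--verify", signature, manifest],
                            capture_output=True)
    return result.returncode == 0

# Hypothetical file names: a manifest of /boot file hashes plus your signature.
if not verify_boot_manifest("/boot/hashes.txt", "/boot/hashes.txt.sig"):
    print("WARNING: /boot changed since you last signed it; do not enter your disk password.")
```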

5 Likes

I am curious about this but from a different perspective… let's suppose you have a desktop that you built, so it's been "trusted" ever since first boot. Let's also say a physical attack is impossible: it's in a locked room with an Ethernet cable, and nobody but you will physically interact with it.

In this scenario, the worst case would be…a 0-day exploit that jumps from an AppVM to Dom0 and overwrites /boot files for persistence - is that accurate? And I’m not asking about what is “theoretically” possible, I know it probably is, but more like statistically speaking?

1 Like

Heads seems to me like the setup you want more for a laptop, since you could leave that in a hotel room, or you might travel internationally and be suspicious of customs agents, etc.

For a relatively regular person who probably doesn't have to worry about law enforcement or some bad actor breaking into their home to install BIOS or /boot malware… a neutered Intel ME, coreboot, and Qubes are a huge upgrade over Windows or even regular Linux distros, I imagine?

1 Like

I do not understand how Qubes' AEM is not the 'gold standard' when compared to Heads (though I'd much rather have normal UEFI Secure Boot than either of them).

As far as I understand…

Qubes AEM/Trenchboot:

  • BIOS/UEFI firmware is verified by Boot Guard.
  • BIOS/UEFI firmware loads tboot/GRUB (neither of these is verified at this point, because of the lack of UEFI Secure Boot support).
  • tboot/GRUB loads the signed SINIT module into TXT.
  • SINIT measures the boot policies and the launch environment and saves the results in PCRs 17 and 18. If there is tampering, these PCRs change.
  • The user can bind to more PCRs (I think, haven't tested with Qubes AEM… works fine on normal Linux though), such as 0, 1, 2, 3, 4, 7… 0 is the most important one, since it should change if there is a downgrade attack on the BIOS. PCR 0 is measured by Boot Guard, so a downgraded and subsequently attacked BIOS/UEFI firmware can't just lie about it. The rest are to make sure no one tampers with the firmware settings.
  • If all of the PCRs match, the TPM unseals the secret (see the sketch below).
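A toy Python illustration of that last step, i.e. binding the unseal decision to several PCRs at once. The indices and values here are invented, and a real TPM evaluates this as a sealing policy in hardware; this is just the shape of the check:

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Invented PCR contents, standing in for real measurements.
good_pcrs = {
    0:  sha(b"BIOS/UEFI firmware image"),              # measured by Boot Guard
    17: sha(b"tboot/GRUB, measured by SINIT"),          # DRTM measurement
    18: sha(b"launch environment, measured by SINIT"),  # DRTM measurement
}

# The secret was sealed against exactly this combination of PCR values.
sealed_policy = dict(good_pcrs)

def tpm_would_unseal(current_pcrs: dict[int, bytes]) -> bool:
    # Every bound PCR must match: a downgraded firmware (PCR 0) or a modified
    # measured-launch component (PCR 17/18) is enough for the TPM to refuse.
    return all(current_pcrs.get(i) == v for i, v in sealed_policy.items())

assert tpm_would_unseal(good_pcrs)
downgraded = dict(good_pcrs, **{0: sha(b"older, vulnerable firmware image")})
assert not tpm_would_unseal(downgraded)
```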

This seems okay-ish, and I don't see how it could be compromised without an exploit or someone sniffing the user's keystrokes or the secret when it shows up on the screen.

Heads:

  • The firmware measures itself and sends the measurements to some PCRs in the TPM.
  • If the measurements match, the TPM unseals the secret.
  • The firmware does GPG verification of the files down the chain.

None of the existing laptops with Heads (all those Librems and ancient ThinkPads) have functioning Boot Guard, so there is nothing stopping the system from booting with malicious firmware flashed by an attacker. The firmware can then just lie about the measurements, and everything down the chain is basically theatre.

So from my POV, there are a lot of concerns about bugs or potential backdoors in complex proprietary software, when the open-source solution is basically a wide-open front door. I don't see why targeted people should use Heads over AEM/Trenchboot unless there is also some broken logic going on with AEM/Trenchboot that I am not aware of.

1 Like

The Trenchboot project for Qubes seems to be a very new project, so it's possible the community just hasn't had enough experience with it to weigh which one is better. The three links below are good reads.

Thoughts dereferenced from the scratchpad noise. | TrenchBoot Anti Evil Maid for Qubes OS - Qubes Trench/AEM Phase 1: 01/31/23
Thoughts dereferenced from the scratchpad noise. | TrenchBoot Anti Evil Maid - Phase 2 - Qubes Trench/AEM Phase 2: 09/27/23
Thoughts dereferenced from the scratchpad noise. | TrenchBoot Anti Evil Maid - Phase 3 - Qubes Trench/AEM Phase 3: 01/12/24

1 Like

AEM basically uses the same logic as Trenchboot here, although the stuff it supports is much older. And even then, I do not understand how Heads is better than AEM.

I am not talking about the “experience” here either - I am talking about the logic. Heads has a very obvious circular logic.

1 Like

The logic is simply about threat models; going the TXT / Boot Guard route means putting more trust in Big Tech, e.g. TXT won't work with a disabled / neutered ME; Boot Guard also makes it impossible to install your own firmware, so you now have to trust some gigantic proprietary blobs that probably have both intentional and unintentional backdoors (from the perspective of people who strongly distrust Big Tech)…

For others, who have more trust in Big Tech, and a different threat model, the AEM / Trenchboot solutions may very well be better.

That’s just my limited, non-expert opinion on this, however.

1 Like

Why all the scare about Big Tech? With which threat model would it make sense to use Heads over AEM/Trenchboot?

Again, you are worried about magical backdoors when Heads has a wide-open front door. You have no Boot Guard, so an attacker can just flash malicious firmware that lies about the measurements, and it's game over. Who needs a backdoor or an exploit? I really don't get what you mean by "the logic is about threat models". There is a very obvious way to just bypass all of your protections, as I just described.

Also, the CPU manufacturer is always implicitly trusted anyway. They can backdoor you with or without the ME. Without Boot Guard, how are you even going to have any static root of trust?

1 Like

I’m very certain when I say that we’re all eagerly awaiting your proof of concept. You might even win some kind of prize or land a big job as a result of that (not kidding).

Not fully; this is why (some) people like disabling the part of it that's literally built like a gigantic back door (the Intel ME). An analogy: just because you have to have some trust in the maker of your iPhone doesn't mean that it can't make sense to remove all of its microphones (take it up with Joanna if you disagree).

I mean honestly I kinda feel like you’re trolling, but as I said: provide us with a proof of concept (i.e. real, working attack) against e.g. NV41 running NK Heads 2.4 with all features enabled. Compromise it without access to the dongle and without the user being able to detect the compromise later.

1 Like

I’m very certain when I say that we’re all eagerly awaiting your proof of concept. You might even win some kind of prize or land a big job as a result of that (not kidding).

I mean honestly I kinda feel like you’re trolling, but as I said: provide us with a proof of concept (i.e. real, working attack) against e.g. NV41 running NK Heads 2.4 with all features enabled. Compromise it without access to the dongle and without the user being able to detect the compromise later.

Why do you even need a POC? The logic is extremely simple.

  • Take the BIOS dump with a programmer.
  • See what the result of the measurements would be. You know which firmware it is running, so you know what the measurements should be, do you not?
  • Use the same programmer to flash a piece of firmware that just sends whatever measurements the TPM expects, and you're done.
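To spell out the claim in that last step, here is a toy Python illustration (not a real attack; in reality the firmware drives the TPM with extend commands rather than returning a value): without a hardware root of trust verifying the firmware before it runs, the code that decides what gets measured can simply replay the digests the honest firmware would have produced.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + measurement).digest()

def run_measured_boot(measurements: list[bytes]) -> bytes:
    pcr = b"\x00" * 32
    for m in measurements:
        pcr = pcr_extend(pcr, m)
    return pcr

# What the honest firmware actually measures and extends into the TPM.
honest = [hashlib.sha256(b"honest boot stage").digest(),
          hashlib.sha256(b"honest boot config").digest()]

# What the attacker recorded from a dump of the honest firmware. The malicious
# flash simply extends these recorded digests instead of measuring what it runs.
replayed_by_malicious_firmware = list(honest)

good_pcr = run_measured_boot(honest)
assert run_measured_boot(replayed_by_malicious_firmware) == good_pcr
# The TPM sees identical PCR values either way and unseals the secret.
```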

Just because no one has bothered to write a POC doesn't mean the logic isn't broken from the beginning. That's like saying: I will develop a new protection system with obviously circular logic, and it must be secure because no one will bother to write a POC for it. In what world would that make any sense?

Not fully; this is why (some) people like disabling the part of it that's literally built like a gigantic back door (the Intel ME). An analogy: just because you have to have some trust in the maker of your iPhone doesn't mean that it can't make sense to remove all of its microphones (take it up with Joanna if you disagree).

  • They can just straight up backdoor you with another Spectre variant if they really wanted to. Disabling stuff only makes sense in the context of attack-surface reduction; it doesn't stop backdoors. Your BIOS/kernel can't stop it. That's why full Spectre/Meltdown mitigation requires microcode updates from Intel.
  • There is a massive difference between phones and your average Linux desktop. On phones, you have proper permission control for per-app microphone access, and apps are strictly sandboxed. Android also has a global microphone toggle. It only ever makes sense to remove the mic on a phone if the assumption is that the entire OS is somehow compromised. It makes a lot more sense to remove the mic on a Linux computer because you don’t have any permission control to begin with and apps can freely snoop on you as they see fit so long as they get access to your PulseAudio socket.

Compromise it without access to the dongle and without the user being able to detect the compromise later.

What? Are you going to use nail polish now?

1 Like

How is that going to give you a better root of trust?

You are building it on closed-source firmware, signed with OEM keys that you don't know who has access to, while running an ME that has a long history of serious security issues.

People who can waltz in and out of houses with EEPROM programmers and backdoored firmware probably also know how to get the OEM keys and bypass Boot Guard.

1 Like

How is that going to give you a better root of trust?

How are you going to have a root of trust to begin with? How are you gonna implement this?

You don't even have a root of trust now without Boot Guard or SINIT, and I don't see how you could even implement one.

You are building it on closed-source firmware, signed with OEM keys that you don't know who has access to, while running an ME that has a long history of serious security issues.

That's why DRTM exists - to reduce trust in the OEM's firmware. SINIT is what does the measurements inside TXT and writes to PCRs 17 and 18; the normal UEFI firmware cannot directly write to those PCRs. It would need to somehow break SMM and then override TXT, or something like that, to be able to do that.

Oh and guess what? SINIT is proprietary Intel stuff, signed with their key.

People who can waltz in and out of houses with EEPROM programmers and backdoored firmware probably also know how to get the OEM keys and bypass Boot Guard.

Making or forking existing open-source firmware and then flashing it with a programmer is easier than stealing a private key, chief.

1 Like

In the world where "technically possible" doesn't mean "easy": just because you technically can land humans on Mars doesn't mean it's easy to do. If it's as trivial as you say to create a compromised coreboot/Heads firmware that will lie about its measurements, while also smoothly handling all other interaction (lots more lying involved, e.g. if I were to tell it to flash my last flashed firmware because the case looked fishy, it needs to play along all the way), then develop a PoC; Heads also measures things like the LUKS header and other data that needs to be examined on site, so now we're talking about the attacker building the compromised firmware on the fly as well.

No one claimed that attacks are impossible, but Heads sure makes it much harder to compromise a system compared to using a “regular” BIOS.

Regarding how it compares to trenchboot etc. I already said that it depends on the threat model, e.g. if the attacker has an ME exploit they might compromise your entire system without even needing physical access to it at all.

You’re using a lot of hand-waving in your arguments, e.g. “They can just straight up backdoor you with another Spectre variant” as if that’s a trivial thing, too. QubesOS has Hyperthreading disabled by default and provides extensive protections against these kinds of attacks; it’s by no means trivial and the mentioned measures demonstrate that mitigation aside from manufacturer updates is possible; if the CPU microcode was open source then even the community could provide updates, including for older models the manufacturer doesn’t feel like supporting anymore.
Further, any weakness that Intel may include in the "supposed-to-be-secure" part of the hardware, as opposed to the literal back door that the ME is, would weaken that company's position on the market and not be worth it unless you're the spy of the century or something. Meanwhile, extensive cooperation between big corp and the government is well documented, and there are enough corrupt people there that I wouldn't exclude the possibility of them selling secrets to criminals, either; and that's just what they can do voluntarily.

Then there’s the question of power and who to give it to; even if closed-source alternatives are better in some cases, choosing open source can still make sense to provide more alternatives to others (if you contribute, that is). Freedom includes the freedom to know what software is running on your system and exactly what it does; in practical terms this may not always bring an advantage, but enough people are nevertheless making the deliberate choice to support these kinds of initiatives.
One obvious advantage is that the code can be audited (including by you) and thus weaknesses found; with closed-source you have to trust that they internally audit their code enough, which history has shown to be hit and miss at best.
One of the people I'm referencing here is the original creator of QubesOS, who did take out all the microphones of her iPhone (only connecting an external BT handset when making calls), precisely because this blind trust in Big Tech is not warranted. You seem to be of a different opinion, but present it in a rather biased manner; hardware root of trust is certainly nice to have, but only if it can actually be trusted. And this isn't a dialectic about "either you must fully trust or fully distrust"… I've given several examples of how partial trust can make sense and how experts agree on that.

Not if you own that key or your buddy does.

2 Likes

If it's as trivial as you say to create a compromised coreboot/Heads firmware that will lie about its measurements, while also smoothly handling all other interaction (lots more lying involved, e.g. if I were to tell it to flash my last flashed firmware because the case looked fishy, it needs to play along all the way), then develop a PoC; Heads also measures things like the LUKS header and other data that needs to be examined on site, so now we're talking about the attacker building the compromised firmware on the fly as well.

What? Just fork it and change the measurements part. You are not gonna notice it until it’s too late. Do you manually check your firmware every boot?

No one claimed that attacks are impossible, but Heads sure makes it much harder to compromise a system compared to using a “regular” BIOS.

How is it harder and not easier than standard UEFI firmware with Boot Guard? Do you have an actual technical reason to believe Heads is better than Trenchboot with proprietary firmware, or is open-source the only reason?

Regarding how it compares to trenchboot etc. I already said that it depends on the threat model, e.g. if the attacker has an ME exploit they might compromise your entire system without even needing physical access to it at all.

How does one launch a remote attack against the ME without already being root/administrator on a system?

QubesOS has Hyperthreading disabled by default and provides extensive protections against these kinds of attacks

You realize SMT is not the only kind of speculative execution there is, right? I wonder what that bunch of kernel args for mitigations does :thinking:

Also, if disabling SMT solves Spectre/Meltdown, then who needs microcode updates, eh? :nerd_face:

Apple silicon doesn’t even have SMT to begin with and they still have the same type of vulnerability.

it’s by no means trivial and the mentioned measures demonstrate that mitigation aside from manufacturer updates is possible

Who has demonstrated that they can fully mitigate Spectre/Meltdown without Intel’s microcode updates?

Further, any weakness that Intel may include in the "supposed-to-be-secure" part of the hardware, as opposed to the literal back door that the ME is, would weaken that company's position on the market and not be worth it unless you're the spy of the century or something. Meanwhile, extensive cooperation between big corp and the government is well documented, and there are enough corrupt people there that I wouldn't exclude the possibility of them selling secrets to criminals, either; and that's just what they can do voluntarily.

Using open-source boot firmware is not going to save you from Intel if they are in cahoots with the government. They are still part of the TCB.

Freedom includes the freedom to know what software is running on your system and exactly what it does; in practical terms this may not always bring an advantage, but enough people are nevertheless making the deliberate choice to support these kinds of initiatives.

How do you know the firmware has not been tampered with?

Not if you own that key or your buddy does.

See, that's the problem. This is not a situation where you set up Boot Guard (yet another proprietary Intel tech!!! :money_mouth_face:) to trust only your key and sign your firmware with it. This is a situation where you can sign all your stuff all you want, but an attacker can just flash right over it, and you wouldn't notice until it's too late.

The attacker doesn't even need to sign their firmware or malicious Xen/kernel/initramfs with your PGP key. You are just going to boot, see that your Nitrokey flashes green or that the TOTP code matches (the TPM will unseal the secret because the firmware is lying about the measurements), type in your encryption password, and it's game over.

You own the keys but your keys don’t even matter to begin with.
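To make that concrete: the TOTP code your phone compares against is derived only from the seed the TPM unseals, so if a lying firmware convinces the TPM to unseal, the codes match no matter what is actually installed. A minimal RFC 6238 sketch in Python, with a made-up seed:

```python
import hashlib, hmac, struct, time

def totp(seed: bytes, now=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC-SHA1 over the 8-byte time-step counter, then dynamic truncation.
    counter = int((time.time() if now is None else now) // step)
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"

seed = b"seed shared between the TPM and the phone"   # made-up value
t = time.time()
phone_code = totp(seed, t)
# If forged measurements convince the TPM to unseal the seed, the laptop shows
# the same code, even though the firmware or /boot is malicious.
laptop_code = totp(seed, t)
assert phone_code == laptop_code
```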

1 Like