Anti Evil Maid? coreboot? Heads?

Now you want me to develop your PoC? Lol…again…it’s you who’s claiming that it’s sooooo easy (and handwaving all objections), so go ahead and show us how easy it is. Reflashing the firmware I last installed from a ready-made USB stick takes a few minutes, so yes, I can do that if I think that I need to. Any proper compromise would have to play along with this and any other checks I may want to do.

I didn’t claim that it’s better, it’s you who claimed that it’s trash, but you refuse to back up your big mouth with proof. If I controlled the private keys I could sign anything I wanted. Why do you want to trust some rich Westerner who probably rapes children on the weekends on his private island?

Given that it’s closed source and has access to the network, how do you know that one has to be admin?

That’s a misrepresentation of my argument; I didn’t claim that disabling SMT fully mitigates Spectre-type attacks; you were making it seem as if only the manufacturer’s update can save us when in reality there are other ways to mitigate as well; and again, if the code was open source one wouldn’t even need to beg the manufacturer for updates to begin with.

I didn’t say it will, but it’s a step in the right direction; furthermore, as I’ve pointed out, unless you’re some superspy Intel won’t just compromise their products…giving the government or criminals access to existing back doors, however, is another matter.

I don’t, but I can examine the code (I’ve reviewed parts of it) and others, who then can talk about that since they didn’t sign an NDA, can, too. Whether the “open-source review” ends up being better than some arbitrary “closed-source review” depends on many factors, but with the former at least I can influence the process.

In my understanding you can’t sign arbitrary firmware using Boot Guard; that’s the whole point of the technology, so it’s not actually “your firmware”, it’s someone else’s firmware whose code you can’t even review that is bound to have all sorts of bugs, as all software does.

You keep repeating this like some mantra without actually demonstrating your “super-easy” compromise solution…this is not a productive conversation.

Besides, I could make the same triviality argument and claim that some organization has bought signatures or ME access or other exploits of the millions of lines of non-reviewable (by you) code that you’re executing - it could even be considered a business model - and say that this is how you’re going to be compromised.

In the ideal world there would be actually trusted hardware with a hardware root of trust and open source firmware running open source software; although if we’re talking about ideal worlds, there are many other things far removed from IT issues that would be better too.

As I said - it’s a matter of threat models, which includes an assessment of who you trust and how much.

2 Likes

Now you want me to develop your PoC? Lol…again…it’s you who’s claiming that it’s sooooo easy (and handwaving all objections), so go ahead and show us how easy it is. Reflashing the firmware I last installed from a ready-made USB stick takes a few minutes, so yes, I can do that if I think that I need to. Any proper compromise would have to play along with this and any other checks I may want to do.

What kind of logic is this? If I say X11 is insecure because it allows apps to snoop on each other (it does), do you really want me to write a PoC to record your screen and upload it to my computer before you believe it?
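And for the record, that “PoC” is basically two stock commands; the device id below is just an example, take yours from `xinput list`:

```bash
# List input devices and note your keyboard's id (e.g. "id=11"):
xinput list

# Any unprivileged client on the same X display can now log every keystroke,
# including a root password typed into a PolKit dialog:
xinput test 11
```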

I can do that if I think that I need to.

So if you just boot it without noticing you are immediately compromised? What is this if not theatre?

Any proper compromise would have to play along with this and any other checks I may want to do.

Flashing from your USB stick or even a programmer won’t save you. The malicious firmware doesn’t even need to gain persistence, it can just let you reflash your “good” firmware all you want. The next step in the process is you doing a reset, enrolling your own key and signing all of the boot stuff. Are you gonna mount the drive and manually check the GPG signature on a separate computer every time, or are you gonna blindly sign them? If you sign them blindly you are done for. Considering that you are talking about flashing using your USB stick, I think it’s safe to say that you are just blindly signing them. And how are you gonna “trust” the computer you are using to check and flash stuff?
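For reference, “checking on a separate computer” would look roughly like this; a sketch, assuming Heads-style detached signatures over a hash manifest (the name kexec_hashes.txt follows Heads’ convention as I understand it, and /dev/sdb1 is a placeholder):

```bash
# Mount the suspect /boot read-only on a separate, trusted machine:
mount -o ro /dev/sdb1 /mnt

# Verify the detached signature over the hash manifest with your own pubkey...
gpg --verify /mnt/kexec_hashes.txt.sig /mnt/kexec_hashes.txt

# ...then check every kernel/initramfs against that manifest:
cd /mnt && sha256sum -c kexec_hashes.txt
```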

I didn’t claim that it’s better, it’s you who claimed that it’s trash, but you refuse to back up your big mouth with proof.

I literally pointed out the circular logic. You are checking if the firmware is malicious or not by trusting the firmware to not lie to you about the measurements. That’s theatre.

If I controlled the private keys I could sign anything I wanted. Why do you want to trust some rich Westerner who probably rapes children on the weekends on his private island?

Your signing doesn’t matter at all. What’s up with all the rapist shit? This is getting very vile bro.

Given that it’s closed source and has access to the network, how do you know that one has to be admin?

What magical logic is this? You can just disable AMT. If you think Intel is so malicious that they ignore your setting to disable AMT and just straight up connect to the internet anyway, you have bigger problems, and the open source boot firmware won’t save you.

you were making it seem as if only the manufacturer’s update can save us when in reality there are other ways to mitigate as well;

Because that is the case. How are you gonna have full mitigations without the microcode released by the vendor???
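For what it’s worth, checking which mitigations and microcode you actually have is trivial (a Linux sketch):

```bash
# Kernel-reported status for each speculative-execution vulnerability class:
grep . /sys/devices/system/cpu/vulnerabilities/*

# The microcode revision currently loaded (compare with the vendor's releases):
grep -m1 microcode /proc/cpuinfo
```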

I didn’t say it will, but it’s a step in the right direction

It’s a step backwards, because now a non-government attacker can still pwn you. Your logic is literally “I can’t trust the OEM to not leak their Boot Guard keys, therefore I am not going to have Boot Guard protecting the firmware at all!!!”.

I don’t, but I can examine the code (I’ve reviewed parts of it) and others, who then can talk about that since they didn’t sign an NDA, can, too. Whether the “open-source review” ends up being better than some arbitrary “closed-source review” depends on many factors, but with the former at least I can influence the process.

None of this will save you from an attacker who just flashed malicious firmware.

In my understanding you can’t sign arbitrary firmware using Boot Guard; that’s the whole point of the technology, so it’s not actually “your firmware”, it’s someone else’s firmware whose code you can’t even review that is bound to have all sorts of bugs, as all software does.

The OEM can sign whatever they want. You just need an OEM with the “good firmware” you love so much to sign the stuff. Alternatively, get the Dasharo Coreboot + EDK II laptops, which don’t have Boot Guard set up, enroll your key, then sign whatever firmware you want.

That is less insane than not having Boot Guard at all.

Besides, I could make the same triviality argument and claim that some organization has bought signatures or ME access or other exploits of the millions of lines of non-reviewable (by you) code that you’re executing - it could even be considered a business model - and say that this is how you’re going to be compromised.

One needs an exploit, the other has broken logic from the beginning so no exploit would be needed.

As I said - it’s a matter of threat models, which includes an assessment of who you trust and how much.

No it’s not. One still needs a backdoor or exploit to be compromised, the other one needs neither of those.

1 Like

Moderation note: By the time replies get personal, they also become a lot less useful for anybody else.

Remember that inflammatory statements don’t always require anyone to reply to them, and sometimes it’s better for the conversation to agree to disagree or just give up.

I’ve set a slow-motion mode for a little bit to encourage everyone to reflect on where to take this conversation, hoping that everyone can see the broader interest of the community beyond their own personal beliefs, outrage or opinions.

2 Likes

It’s very simple logic: I am claiming that Heads increases protection by non-trivial amounts, while you are claiming it’s trivial. If it’s so trivial it should only take you an hour or so to whip up a compromised coreboot+Heads with the required characteristics. Since you and I both know that you can’t, you keep trying to deflect. As for X11, do you know how Joanna made her point about X11 making root pw sniffing easy? She provided a PoC.

Why wouldn’t I notice? This is just more of your shtick about the oh-so-trivial pwn that you can’t demonstrate. Again, the point of Heads isn’t to make an undetected attack impossible but to significantly raise the bar. The fact that you keep spending time replying here instead of providing the PoC and being done proving your point is one more piece of evidence that Heads succeeds in this.

There is no circular logic. Provide the formal syllogism that shows it’s circular. You will either have to assume things I didn’t claim or commit logical fallacies.

I’m talking about signing with the private key that you, with your Boot Guard etc., trust ultimately, which is out of your control and owned by people who care not one bit about you. I’m not talking about my keys.

That’s your argument? I told you already, this is not a dialectic. Just because Intel won’t take heavy losses from introducing another Spectre doesn’t mean they won’t allow use of the literally pre-installed backdoors, which comes at little cost to them.

Again you’re ignoring my arguments…it doesn’t take a full mitigation to achieve security and it was you who claimed that we need to have the manufacturer intervene, as if no other approach is possible.

You think your magical Boot Guard protects you fully from non-government actors? You think the code you run has no bugs? Are you really that naive? And this is, yet again, a misrepresentation of my argument; I explicitly mentioned non-intentional compromise as well as the issue with software always being buggy to some extent. If your position is so strong, why do you have to keep misrepresenting my arguments?

This is like your prayer or something…repeating things doesn’t make them true, didn’t anyone tell you that?

So having to develop a compromised fork of coreboot + Heads while also having to build the correct variant on the fly is not an exploit? Leaving semantics aside, “exploit” here would stand for something non-trivial that needs time and resources spent; so, if it’s not even an “exploit” in that sense, then demonstrate it.

No it doesn’t. The backdoor is already in place and the bugs don’t go away just because someone signed something somewhere.

In any case, as moderation has now also stepped in, I don’t see the point of further discussion. Your entire position rests on asserting something as trivial that you can’t even demonstrate, while constantly misrepresenting my arguments. I don’t think new information that is useful to the community will be revealed through further engagement.

Of course, you providing a PoC would be beneficial to the community, but you and I both know you won’t, because you can’t, because it’s not actually trivial.

2 Likes

It’s very simple logic: I am claiming that Heads increases protection by non-trivial amounts, while you are claiming it’s trivial. If it’s so trivial it should only take you an hour or so to whip up a compromised coreboot+Heads with the required characteristics. Since you and I both know that you can’t, you keep trying to deflect.

I don’t need to be able to lay eggs to know if an egg is good or bad. I could spend my time writing up an attack, but why would I?

Again, the idea for the attack is extremely simple and it’s only a matter of implementing it, not actual security research because there is no need for an exploit at all.

  • Get the BIOS dump with a programmer.
  • See what the results are supposed to be with the dump. What is being measured here is public, so you can figure out what the values are supposed to be.
  • Just remove the measurement code and then lie about what the measurement results are.

I don’t see how this is rocket science.
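To spell out the inspection half (steps 1 and 2), it’s stock tooling; a sketch, where ch341a_spi is just one common cheap programmer:

```bash
# Step 1: read out the SPI flash twice with an external programmer
# and make sure both reads agree:
flashrom -p ch341a_spi -r dump1.bin
flashrom -p ch341a_spi -r dump2.bin
cmp dump1.bin dump2.bin

# Step 2: a coreboot image is self-describing, so the stages and payload
# that get measured are trivial to locate:
cbfstool dump1.bin print    # list CBFS contents (stages, payload, config)
cbfstool dump1.bin layout   # show the FMAP regions
```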

As for X11, do you know how Joanna made her point about X11 making root pw sniffing easy? She provided a PoC.

Her PoC is literally just a program made by someone else that she used to easily demonstrate how insecure X11 is. She could have not even mentioned it, and that wouldn’t make the point any less true, because the weakness is common knowledge.

If you asked me how to attack Boot Guard, I wouldn’t even have a vague idea of how to. I can’t just magically get the private key of my OEM (unless it’s some jackwagon OEM like MSI), and I can’t just tamper with the BIOS willy-nilly, because Boot Guard will stop me. Changing BIOS settings using a programmer will still result in PCR 1 changing (unless the BIOS itself is bad and doesn’t report the important changes). Downgrading the BIOS version will result in PCR 0 changing. Attacking this stuff requires actual research into the implementation of this firmware, and exploitation (the kind that Binarly does). I cannot just sit here and point out something very obviously wrong with the logic like I can with Heads.

Why wouldn’t I notice? This is just more of your shtick about the oh-so-trivial pwn that you can’t demonstrate. Again, the point of Heads isn’t to make an undetected attack impossible but to significantly raise the bar. The fact that you keep spending time replying here instead of providing the PoC and being done proving your point is one more piece of evidence that Heads succeeds in this.

This is just a nonsense personal attack at this point. Coding the piece of malicious firmware takes a bit of work and time, but the actual flashing is about as hard as flashing any regular BIOS. You wouldn’t notice.

If you think that you will magically notice tampering with your eyes (checking your nail polish or whatever), then you don’t even need Heads to begin with. Just use Coreboot + SeaBIOS. You can reflash them every time you “think” tampering has happened, right? Why even bother with the whole fancy HOTP/TOTP stuff? Such complexity.

There is no circular logic. Provide the formal syllogism that shows it’s circular. You will either have to assume things I didn’t claim or commit logical fallacies.

How many times do I need to explain this?

  • Heads’ entire point is to check whether itself or the boot files have been tampered with, by taking measurements and sending them to the TPM.
  • If the measurements match, the TPM unseals the secret, and that is the basis for your HOTP/TOTP.
  • Nothing is stopping an attacker from flashing malicious firmware.
  • Nothing is stopping the malicious firmware from lying.
  • What is being measured can be found out by looking at the source.

Essentially, you are trying to make sure the firmware isn’t malicious by checking the measurement it gives you. But nothing is stopping it from lying. That is the circular logic.
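To make the seal/unseal step concrete, here is a minimal TPM 2.0 sketch with tpm2-tools and oathtool; this is the general mechanism, not Heads’ literal code (Heads has its own toolstack). Note that nothing in it authenticates who extended the PCRs — firmware that replays known-good digests gets the same secret unsealed, which is exactly the point:

```bash
# Generate a TOTP secret and seal it to the TPM, bound to PCRs 0-2:
head -c 20 /dev/urandom | base32 > totp.b32
tpm2_createpolicy --policy-pcr -l sha256:0,1,2 -L pcr.policy
tpm2_createprimary -C o -c primary.ctx
tpm2_create -C primary.ctx -L pcr.policy -i totp.b32 -u seal.pub -r seal.priv
tpm2_load -C primary.ctx -u seal.pub -r seal.priv -c seal.ctx

# At boot: unsealing succeeds only if the PCRs hold the expected values...
tpm2_unseal -c seal.ctx -p pcr:sha256:0,1,2 > unsealed.b32

# ...and the 6-digit code is compared against the one on your phone:
oathtool --totp -b "$(cat unsealed.b32)"
```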

In a normal, sane UEFI computer, the “nothing stopping an attacker from flashing malicious firmware” problem is solved by Boot Guard preventing the computer from booting if the signature doesn’t match what it expects. Trying to downgrade the firmware version will mess up PCR 0, because Boot Guard reports the changes into PCR 0 (which the firmware itself can’t change). Trying to mess with the firmware settings (not protected by Boot Guard) will result in PCR 1 being changed by the firmware itself. You can bind BitLocker or systemd-cryptenroll to these PCRs for real tamper detection (see the sketch after this list). No weird circular logic here.

  • The attacker cannot flash their own malicious firmware.
  • Even if an attacker fully compromises an older version of your firmware and makes it lie about the measurements in PCR 1 and above, they cannot make it lie about PCR 0, because it cannot edit it.
  • Certain vendors like Dell blow fuses after a security update, so the attacker can’t even downgrade to begin with.
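The PCR binding mentioned before the list is essentially a one-liner (a sketch; /dev/nvme0n1p3 is a placeholder for your LUKS partition):

```bash
# Bind a LUKS keyslot to the TPM so that unlock only succeeds when the
# firmware (PCR 0), its configuration (PCR 1) and Secure Boot state (PCR 7)
# are unchanged:
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+1+7 /dev/nvme0n1p3

# Keep a recovery key for when a legitimate firmware update changes PCR 0:
systemd-cryptenroll --recovery-key /dev/nvme0n1p3
```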

With stuff like AEM/TrenchBoot, SINIT (signed by Intel) stores the measurements in PCRs 17-18. If you try to tamper with boot files, boot policies or SINIT itself, those PCRs will change. I am pretty sure you cannot just load unsigned code into Intel TXT either.

I do not see anything wrong in the logic with normal UEFI Secure Boot or AEM/TrenchBoot. Obviously, I could just be missing something, but you have not pointed out what’s actually wrong with the logic and just say it’s “proprietary”. That is not a technical argument.

I’m talking about signing with the private key that you, with your Boot Guard etc., trust ultimately, which is out of your control and owned by people who care not one bit about you. I’m not talking about my keys.

You don’t even have a root of trust with Heads to begin with. What’s worse? A root of trust with keys controlled by other people, or no root of trust at all?

Heads literally tries to put the root of trust in the bootblock, but said bootblock is not verified or write-protected by anything. So effectively you have nothing.

That’s your argument? I told you already, this is not a dialectic. Just because Intel won’t take heavy losses from introducing another Spectre doesn’t mean they won’t allow use of the literally pre-installed backdoors, which comes at little cost to them.

What preinstalled backdoor?

Again you’re ignoring my arguments…it doesn’t take a full mitigation to achieve security and it was you who claimed that we need to have the manufacturer intervene, as if no other approach is possible.

What kind of argument is this? So you are saying that your computer with known Spectre vulnerabilities is reasonably secure? By that logic, why don’t you remove microcode updates from dom0 too? Qubes installs them by default. So proprietary! Must be a backdoor right there! So sus!

You think your magical Boot Guard protects you fully from non-government actors? You think the code you run has no bugs? Are you really that naive? And this is, yet again, a misrepresentation of my argument; I explicitly mentioned non-intentional compromise as well as the issue with software always being buggy to some extent. If your position is so strong, why do you have to keep misrepresenting my arguments?

This is a strawman argument. The logic with Boot Guard isn’t completely insane, so an attacker needs to implement a real backdoor or find an exploit (like Binarly did!) to actually compromise a computer with it. No one said the code I run had no bugs, lol.

The point here is that you don’t even need a real backdoor or an exploit because the logic with Heads is so broken that an attacker can attack you without them.

This is like your prayer or something…repeating things doesn’t make them true, didn’t anyone tell you that?

Ah, another personal attack with no technical merits. Classic.

No it doesn’t. The backdoor is already in place and the bugs don’t go away just because someone signed something somewhere.

What backdoor?

In any case, as moderation has now also stepped in, I don’t see the point of further discussion.

Good. Stop stating your opinions over and over like you accused me of doing, and actually give technical arguments.

Your entire position rests on asserting something as trivial that you can’t even demonstrate, while constantly misrepresenting my arguments.

You still have not explained why trusting the boot firmware to tell you that it is not malicious is not circular logic.

1 Like

Because you keep making baseless claims, and a quick PoC (not so quick anymore now) could have put the issue to rest. You just have a big mouth and an extremely biased view, and nothing to back it up with.

Finding bugs or writing other exploits is also not rocket science, and takes time and resources all the same. I already gave you an analogy: traveling to Mars; just because it’s clearly theoretically possible, and the general approach is known, doesn’t mean it’s a trivial task. There may be less certainty with the approach of compromising Heads (though even there you can’t have 100% certainty, as you don’t know exactly how the user will validate), but it’s still a lot of effort.

Wrong. Her PoC was provided by herself, because she actually knows what she is talking about:

Why PolKit is an idiocy? Do a simple experiment: start ‘xinput test’ in one xterm, running as user, then open some app that uses PolKit and asks for root password, e.g. gpk-update-viewer – observe how all the keystrokes with root password you enter into the “secure” PolKit dialog box can be seen by the xinput program…

Your claim is that Heads is trivial to compromise and that it’s “security theater”. Prove it.

Here’s an idea: since there isn’t usually any machine-to-user authentication, just open the case, switch out the mainboard+CPU for compromised ones and seal it back up. This will be far quicker than having to analyse the output of Heads measurements on the fly, and could even be combined with a relay attack. Of course the actual implementation would need to work out the details, but you obviously don’t care about such “trivialities”.

Here’s another idea:

Since the Key Manifest (KM) hash in FPFs cannot be rewritten, for those 4 products the KEYM (hence Intel Boot Guard) is compromised forever, meaning in turn that Boot Guard can be easily bypassed on those devices: these systems should be considered as Boot Guard disabled systems.

Now those keys were likely part of a debug build or meant for a future release, so this probably didn’t affect client production systems, but this is exactly one of the many problems of putting ultimate trust in some people you don’t even know.

Wrong again. You need to actually analyze the system to determine the PCRs and modifications and then craft your compromised version accordingly; this is long, difficult work and likely would require opening up the machine twice on separate occasions, hoping that the firmware hasn’t changed in between. In fact, as I’ve pointed out, swapping out the hardware would be easier, especially if there is no machine-to-user authentication, but Heads does provide it.

Why would I make it easier for the attacker? HOTP/TOTP makes it harder to compromise such that I wouldn’t notice, because it forces the attacker to learn a lot more about the system first, including things the attacker can’t know without getting physical access and opening up the machine. You keep assuming the triviality of the compromise without being willing or able to prove it.

That’s not circular; you prove circularity in logic by showing that one or all of the premises are also the conclusion. My argument is:
P1: Heads introduces a barrier to successful unnoticed compromise requiring significantly more resources and time to be spent than e.g. just standard coreboot.
P2: attackers don’t have infinite time and resources
P3: attackers want to be as efficient as possible with their resources
C: an attacker is likely going to try to avoid having to compromise a Heads system

I also made the following argument:
P1: using alternatives, such as Boot Guard or AEM, requires making extensive use of closed source firmware or even leaving the ME enabled
P2: using large amounts of closed source firmware or the ME is undesirable for a variety of reasons, including lacking ability to trust what it does (e.g. you said about AMT: “just disable it”); how can you trust that your proprietary BIOS actually disables anything when you flip some software switch?
C: even if Boot Guard / TXT etc. can provide good security in certain contexts, there are too many downsides

Neither of these is circular. Your “argument”, however, actually is:
P1: Heads is security theater, because flashing malicious firmware compromises it.
P2: Successfully crafting and applying that firmware is so trivial I don’t even need to demonstrate it.
C: Heads is security theater, because I could compromise it easily (but refuse to demonstrate).

Except if the keys are leaked, stolen or sold or the hardware is switched out or an exploit found etc. Nothing is “solved” here and asserting otherwise is incredibly disingenuous. You can’t even know if the code is high or low quality, because you’re not allowed to even look at it, much less adapt it.

Of course there is a root of trust; it can be compromised, sure, but so can the keys; further, as I pointed out many times, your “solution” requires to leave enabled a literal back door with beyond-admin rights and network access.

I already explained that messing with tens or even hundreds of millions of people by pushing bad microcode updates would do enormous economic damage to the company and so is not going to be done; telling some organization that is in bed with you how to use ME for remote compromise without you even being able to know, is much more acceptable.

I gave plenty; the fact that you keep ignoring them is evidence of either maliciousness or reading comprehension issues on your part.

So, in conclusion: you don’t know what circular logic is, yet accuse me of it while making circular arguments yourself; you apply double standards; and you treat something as “security theater” that clearly makes it harder to compromise a system, which is also clearly recognized by the more knowledgeable people here. Why do you think there is discussion of creating an entire new section of this forum dedicated to Heads if it’s just “security theater”?

I have stated that Boot Guard type solutions can have merit, depending on the threat model, but you seem to be primarily interested in throwing dirt at projects you don’t like without making coherent arguments. I also think I remember someone with a user name similar to yours trolling developers on the mailing list. Maybe some here disagree, but with your biased, lying approach you do seem just like a troll to me, so I will put you on my ignore list now.

3 Likes

This discussion is happening over and over and over.

I reserve the right to edit this post, but most of it was covered under https://forum.qubes-os.org/t/how-exactly-is-heads-pureboot-secure/23092/3

@TommyTran732

  • Coreboot measured boot, without IBB measurement into PCR 0 by a hardware root of trust (CPU+ACM blob, and a CPU permitting that, which is not even Haswell, as proven with research and development, but reserved to server platforms), limits the RoT to the bootblock, which initiates the measurements of the stages. You are right that this is circular logic: the bootblock has to measure itself, without something proprietary (CPU+ACM+SINIT) doing it, and then loads the next coreboot stage, which does the same up to the payload. Under coreboot, that TCPA/TPM event log is exposed through cbmem -L, under Heads or from the OS.
  • The theoretical tampering you are talking about is possible. But doing so would require each stage to report a forged measurement to the TPM instead of using hashing functions on mapped content and dynamic addresses. That means each stage would need tampering. Possible, but not trivial, is the point here.
  • Heads has evolved and continues to evolve to mitigate this. The unfortunate truth is that the root of trust, while keeping ownership of the boot chain, resides in the bootblock. And the bootblock continues to change in functions and in size, and therefore is not a static part of the firmware that can easily be write-protected without sporadically requiring external reflashing when coreboot changes something in it.
  • If/when the bootblock stops changing, the bootblock region itself could be write-protected, addressing most of the concerns you raised in many threads, though this is not done under Heads.
  • Heads changed to address most of the concerns you raised here by enabling authentication prior to recovery and USB boot access. This means that to get the firmware backup you are talking about, one would need physical access to the SPI chips. Here again, this varies across models, but it is not trivial. You seem to laugh at blink comparison of nail polish, but this is actually the way to protect technological assets without giving up on your freedoms. There is also epoxy. And that epoxy can be applied directly on the SPI chips if you will. And then you understand why nail polish is preferred.
  • Introspection helpers are being developed to permit extraction of the regions that are supposed to be measured by coreboot stages (see the sketch after this list). As you know, the sealing/unsealing of secrets (TOTP/DUK) in/from the TPM happens from the payload (Heads), which is measured into the TPM by coreboot. So if the payload extracts those stages (bootblock up to payload) and then measures them from the initramfs (flash tools/cbfstool, sha1sum/sha256sum from busybox, the TPM toolstack depending on TPM version, and signatures depending on GPG), faking extraction of those raw regions would require additional fake regions to be added into CBFS and FMAP, which is impossible: one cannot simply generate a raw image that would fit the checksums. Therefore you imply that all those tools are tampered with, either to always report good results or to answer differently to some specific input. In any case you imply that all those tools would report faked results, while introspection measures of the real content would succeed.
  • From what I read of your threat model, your need is to have physical-access tampering out of the equation. That is plainly impossible, even with Boot Guard. As I keep repeating everywhere, opsec is the only real solution, not blind faith trusting Intel to do the right thing while slowly removing proper opsec because of always-broken promises. Vendors have been found having debug access through exposed pads. There are service pins on some motherboards to bypass locks. USB-C debugging permits RAM access. This is the Wild West, and we could go in all directions: this depends on manufacturing choices, hardware design, schematics access and platform security, and then weighing that against total ownership of the user over the boot process.
  • Things are moving forward, as also said in other threads. Unfused Boot Guard is a more and more common thing. Dasharo is working toward a corporate offer to fuse those prior to shipping, which you could then trust to deploy signed UEFI update capsules, so that CPU+ACM+SINIT would be able to have the bootblock measured as part of the IBB, with a vboot signature of their own. That seems to be what you are looking for: trusting a third party for firmware updates, but based on auditable code. That would therefore be a good compromise: you would have introspection capabilities over firmware binaries that are reproducible and that you can build yourself to verify the content matches what was delivered, if you will, instead of trusting manufacturers with completely closed source firmware and surrendering to blind trust.
  • Heads enforces platform locking on pre-Skylake platforms. Post-Skylake platforms would have to trigger an SMI the same way to offer the same protection. When activated by Heads, the OS cannot write to the firmware. That leaves Heads as the only internal flasher for upgrades, outside of external flashing.
  • Combined, the actual state of the latest Heads version permits all that: OEM provisioning of the OS, TPM and security dongle, for tamper-evident shipping to the customer. Customer re-ownership to re-own all security components with their own keys and passphrases. Platform lockdown. Detached-signature-based authentication prior to entering the recovery shell, which is the only place you could take a backup of the firmware to tamper with it and then flash it back. How would you accomplish that today? Which then leads back to the physical SPI access subject, and falls back to Boot Guard being the only solution, even if that was also the subject of recent talks showing possible bypasses.
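The introspection described above boils down to something like this from the recovery shell (a sketch; FMAP region names such as BOOTBLOCK vary by board):

```bash
# Read back the running firmware using the internal flasher:
flashrom -p internal -r /tmp/firmware.rom

# Extract a measured region and hash it independently of the TPM event log:
cbfstool /tmp/firmware.rom read -r BOOTBLOCK -f /tmp/bootblock.bin
sha256sum /tmp/bootblock.bin

# Then compare with what coreboot says it extended into the TPM:
cbmem -L
```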

I wish those important exchanges were happening under Heads and not across 6 different topics just here on this forum. Hear me out: I hear your concerns. But this is no “security theater”; it is best effort given the possible collaboration and the capabilities the platforms expose. We are talking about security policies that can be improved here. This is the opposite of blind trust.

Hopefully this conversation will be/stay constructive. My time is limited, but collaboration can only make it better.

TL;DR: you could, today, write-protect the bootblock region to have the bootblock considered a hardware-backed RoT. But that is not efficient considering the bootblock still changes across coreboot versions, and changed again quite recently. Therefore, this is considered advanced and requires from users the willingness to do it manually, considering they might need to flash externally again and re-write-protect a new bootblock region of different size and content in the future. If you are interested in that topic, you can read more here: SPI write protection - Dasharo Universe (KGPE-D16 example).
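For the curious, with a recent flashrom the write protection itself looks roughly like this (the range is a placeholder; it must cover your image’s bootblock region, which you can take from cbfstool’s layout output):

```bash
# Show the chip's current write-protect state and supported ranges:
flashrom -p internal --wp-status
flashrom -p internal --wp-list

# Protect the region containing the bootblock (placeholder offset/size):
flashrom -p internal --wp-range=0xff0000,0x10000 --wp-enable
```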

5 Likes

Wrong. Her PoC was provided by herself, because she actually knows what she is talking about:

Dude it’s literally just xinput. What on planet Earth are you talking about?

Your claim is that Heads is trivial to compromise and that it’s “security theater”. Prove it.

Already pointed out…

Here’s an idea: since there isn’t usually any machine-to-user authentication, just open the case, switch out the mainboard+CPU for compromised ones and seal it back up. This will be far quicker than having to analyse the output of Heads measurements on the fly, and could even be combined with a relay attack. Of course the actual implementation would need to work out the details, but you obviously don’t care about such “trivialities”.

Use systemd-cryptenroll and bind that to the PCRs. Good luck with that. How are you gonna deal with that now? Your entire attack is gonezo, bye bye.

Now those keys were likely part of a debug build or meant for a future release, so this probably didn’t affect client production systems, but this is exactly one of the many problems of putting ultimate trust in some people you don’t even know.

So you would rather have no root of trust instead?

Wrong again. You need to actually analyze the system to determine the PCRs and modifications and then craft your compromised version accordingly; this is long, difficult work and likely would require opening up the machine twice on separate occasions, hoping that the firmware hasn’t changed in between. In fact, as I’ve pointed out, swapping out the hardware would be easier, especially if there is no machine-to-user authentication, but Heads does provide it.

The PCRs can just be read by stuff like tpm2_pcrread.
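For example (and the event log explains which measurement produced which value):

```bash
# Current PCR values, readable by any local tool:
tpm2_pcrread sha256:0,1,2,7

# The firmware event log that explains how those values came to be:
tpm2_eventlog /sys/kernel/security/tpm0/binary_bios_measurements
```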

hoping that the firmware hasn’t changed in between

What?

Why would I make it easier for the attacker? HOTP/TOTP makes it harder to compromise such that I wouldn’t notice, because it forces the attacker to learn a lot more about the system first, including things the attacker can’t know without getting physical access and opening up the machine. You keep assuming the triviality of the compromise without being willing or able to prove it.

Theatre

attackers don’t have infinite time and resources

They can just write the firmware first, then adapt it to whoever the target is afterwards…

P1: using alternatives, such as Boot Guard or AEM, requires making extensive use of closed source firmware or even leaving the ME enabled

So what?

P2: using large amounts of closed source firmware or the ME is undesirable for a variety of reasons, including lacking ability to trust what it does (e.g. you said about AMT: “just disable it”); how can you trust that your proprietary BIOS actually disables anything when you flip some software switch?
C: even if Boot Guard / TXT etc. can provide good security in certain contexts, there are too many downsides

This is the classic “it’s open source so it’s trustworthy, it’s proprietary so it’s untrustworthy” fallacy.

Except if the keys are leaked, stolen or sold or the hardware is switched out or an exploit found etc. Nothing is “solved” here and asserting otherwise is incredibly disingenuous. You can’t even know if the code is high or low quality, because you’re not allowed to even look at it, much less adapt it.

If the key gets leaked it goes down to no root of trust just like Heads.

Of course there is a root of trust; it can be compromised, sure

That is not a root of trust if an attacker can just arbitrarily override it.

So, in conclusion: you don’t know what circular logic is, yet accuse me of it while making circular arguments yourself; you apply double standards; and you treat something as “security theater” that clearly makes it harder to compromise a system, which is also clearly recognized by the more knowledgeable people here. Why do you think there is discussion of creating an entire new section of this forum dedicated to Heads if it’s just “security theater”?

Oh boy classic Dunning-Kruger effect

1 Like

Returning to the original question,

here is one more relevant link:

2 Likes

Ok, I got another related noob question. When I first heard of AEM and read a little bit about it, it appeared that if I just deactivated the TPM that would solve the problem from the get-go. Or was that a huge mistake? Thanks.

1 Like

Solve what problem?

For anyone wanting to contribute to this, here is a refresher in layman’s terms to get you up to speed:

The Premise of the Debate

  • If you power off your computer and leave it unattended, how can you be truly sure that it’s exactly the same as it was when you left it, and that nobody has tampered with it while it was unattended?

Background Context

  • An Evil Maid attack is when someone physically interacts with your computer in an attempt to compromise it, while being able to justify being physically close to your computer under the guise of another task. The classic example is a hotel cleaning maid entering a guest’s room and compromising their computer while the guest is not present or paying attention, while pretending to perform housekeeping duties, which is where the name comes from.
  • Knowing that the typical boot meta-process is Power On → BIOS/UEFI → Bootloader → Kernel → Init System → Userspace, and that each stage of those components usually has absolute authority over all the stages after it (also known as the “Boot Stack”, because each step is “stacked” on top of the one before it), each stage can be manipulated by the one before it to trick the user into thinking “everything is fine”
  • A heavy focus of this debate is the BIOS and CPU microcode. Whilst there have been quite a few attempts to create a 100% open source hardware computer (all the way down to what’s actually running on every single chip on the motherboard), these components are usually proprietary, and the manufacturer often never releases the source code for what’s actually running on these chips.
  • Instead, they publicly declare a list of functions they have made that you can make their proprietary firmware perform. These are usually referred to as “system calls”.
  • Whilst descriptions of these “system calls” have been provided by the manufacturer, there are cases where the description of a function is vague, incomplete, or sometimes not even an accurate representation of what it actually does.
  • Despite a manufacturer hypothetically claiming that “no other functions exist”, this claim is impossible to verify in practice, even through firmware dumps. This is mainly because in order to do so, you have to have a full schematic of all the transistors in the chip (which they won’t give you), and figure out the purpose of each of those transistors, and what happens to the other ones when you turn it on/off. (If this sounds painfully tedious to you, it’s because it would be, which is why not many people choose to actually do it :stuck_out_tongue:)
  • Most current open source solutions that exist to solve this dilemma are generally made with little to no help from the manufacturer of the hardware, and are consequently often the product of reverse engineering. In other cases, the open source solutions may be forced to call the proprietary functions in order to achieve certain functionality. These residual bits of firmware are usually referred to as “binary blobs”, because nobody knows 100% of what they actually do, but they are known to be needed for the hardware to function.
  • Currently, some methods of verifying the state of your machine include:
    • Relying on an entity (e.g. the manufacturer) blindly stating that “it’s good”.
      • The emphasis here is on the fact that you can easily verify that it was the entity who said it’s good, and not someone else; as opposed to the “good-ness” of the machine state.
      • This relies on effectively knowing that the entity has kept their secret/key safe, as this method is useless if the secret/key in question is not exclusive to that particular entity.
    • Taking a snapshot of your computer with you when you leave it (e.g. on a USB stick, or a Time-based One-Time Password (TOTP) calculated from that snapshot), for comparison when you return
      • This snapshot has to have enough detail in it to be able to make a sufficiently accurate comparison to be able to detect tampering
    • Verifying with a third party (another hardware component, a cloud provider, command and control server, etc.)
  • Comparison of states will usually involve some sort of algorithm that includes randomness, to ensure that the actual secrets/keys are not leaked. For example, instead of a chip asking directly for the key, it might ask for the square root of the key minus the current time to the modulo of the current month. This is a question that cannot be answered easily without actually knowing the secret/key. It also allows the chip to prove that it knows the key without actually “shouting it out explicitly” over electric wires, radio waves, memory registers, or to other hardware components that it might not necessarily trust, either. This is referred to as the “challenge-response” method (see the sketch after this list).
  • In hardware, generally the only thing that prevents anything with memory in it from being read or written to is some form of software. If that software is not present/running, then it is generally permitted.
  • When overwriting memory registers on an IC chip (commonly known as “flashing”), a higher voltage/current is often needed than for simply reading the contents. Because of this, there are some chips on the market that contain fuses/circuit-breakers in them or connected to them, which will do something to the chip if reflashing is attempted, such as permanently activating a circuit in the chip that will tell the software that it has been tampered with, preventing the chip from functioning, or in some cases, short-circuiting the chip and “bricking” it completely.
  • Proprietary software is not necessarily “worse” than open source software. In fact some proprietary software is very well written (Adobe Photoshop, Microsoft Office, many video games, etc.) and fulfills its purpose very well, which is usually to provide functionality, assist users in completing certain tasks, and generate revenue for the creators of the software.
  • Generally, claims of integrity of proprietary software involve what is known as a “code audit”. This is where a “trusted third party” (the definition of this is pretty vague…) signs a non-disclosure agreement with the creator of the software that says something along the lines of “Ok, the creator of the software will pay us some money, and we will look at the source code, and then publicly state whether we think that it is good or not. We will also promise never to reveal any details about the actual source code, and we are aware that the only reason they are paying us is because they actually want us to say their source code is good, and if we don’t say their source code is good, then they probably won’t ask us to do this again, but ahem…*cough, cough*…we most certainly will make an impartial assessment of their codebase…” :melting_face:
    • If that sounds ridiculous to you, it should. This is how most of the proprietary software industry works. In fact, most insurance companies will not insure your business unless you do one of these…
  • Therefore, it cannot be denied that claims of integrity of proprietary software can never be fully proven nor verified. They will always rely on how trusted the party claiming the integrity is.
    • Just because the chef says the cake is “gluten-free”, this claim cannot be truly verified unless the recipe is obtained and performed by a trusted neutral third party, and the identical end result is achieved
  • Coreboot aims to have a BIOS that you can build and customise yourself, with the source code for as much of the codebase being open source as possible. Some binary blobs are included for some chips.
  • Heads is, more or less, a flavour of Coreboot that creates a digest of the state of your computer, and compares it to a previous state. The user can then decide what to do next.
  • If a program is asking another program for a particular state of an entity, it is possible to force that program to simply say that it checked and that everything is good, when in reality it didn’t even check. This is called “spoofing”.
  • More advanced methods of verification will include checks on the program, such as watching memory registers to verify that the check was performed, forcing the other program to give more than a simple “CHECK PASSED” or “CHECK FAILED” response, such as including proof of work, pre-shared secrets, challenge-responses, timestamps, or some other piece of secondary information that will prove to the other program that “everything actually is good”.
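As a toy illustration of the challenge-response idea from the list above (not a real attestation protocol, just the keyed-hash principle):

```bash
# A secret shared in advance between prover and verifier (placeholder):
SECRET='pre-shared-key'

# Verifier: send a fresh random challenge (the nonce):
nonce=$(openssl rand -hex 16)

# Prover: answer with a keyed hash over the challenge. Only a holder of
# SECRET can compute this, and SECRET itself never crosses the wire:
printf '%s' "$nonce" | openssl dgst -sha256 -hmac "$SECRET"
```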

---

So all of this is essentially what Evil Maid, Coreboot, and Heads are.


Now, if you deactivated your TPM, @joe-mandalore, you have basically instructed your motherboard to skip all checks in the boot process involving the TPM. This isn’t necessarily a bad thing, particularly if you don’t trust the TPM itself.

But, your computer is skipping quite a lot of integrity checks when it boots up, which in some circumstances, might mean that any potential tampering to your computer might slip through undetected.

If that’s what you want, then there is no issue. But I would wager that most of us aren’t at all comfortable receiving a simple “Everything is fine. Shut up and use your computer” from an entity we don’t necessarily know and/or trust. We’d likely want to know more about how they came to that conclusion, and ask to see proof of work.

Again, it’s all about “knowing your machine and how it works” in as much detail as possible, so you can come up with the best solution that meets your individual circumstances :slight_smile:

4 Likes

Is the discussion only about leaving the computer off and out of sight? Because none of the solutions are able to completely protect against those threats. I think the discussion would be more useful if it considered using these solutions in combination with glitter nail polish as tamper detection. Tommy said if you use glitter nail polish then you don’t even need to use Heads.

Glitter nail polish has a chance of being defeated by a very talented person who has plenty of time and no pressure, but only a chance, because if they make a mistake then it will be detected. Still, there is a chance they succeed and get past the glitter nail polish without detection.

But the adversary doesn’t have unlimited time.

You could have a rule to never leave the computer for more than 4 hours. If you need to go somewhere for more than 4 hours, then you take the computer with you. You can also have another security device to check if there is someone inside your home/room, such as a camera or a motion detector. If that goes off, then you urgently return to your home/room to make sure everything is ok. Just say to your boss there is a burglar in your home and go. That would make it impossible to get past the glitter nail polish without tamper detection, because they would have too little time to do it.

But does this really mean we don’t need Heads? No, because although glitter nail polish protects against physical tampering, we need an SRTM and DRTM solution to protect against userspace threats such as downloading and running malware. I wonder if it’s better to use DRTM instead of SRTM for that job.

2 Likes

@capsizebacklog is right. I hadn’t considered that earlier. This is also very much a factor for people who run machines where their chips are writable from the operating system.

2 Likes

I didn’t say that.

Glitter nail polish has a chance of being defeated by a very talented person who has plenty of time and no pressure, but only a chance, because if they make a mistake then it will be detected. Still, there is a chance they succeed and get past the glitter nail polish without detection.

They don’t need to do anything. They can just open up the laptop without a care about the glitter nail polish, because chances are you won’t notice it until you have been pwned. Seriously, who actually checks the nail polish every single time they boot a laptop?

At best it is tamper evident, but it absolutely does not protect you against tampering.

No, because although glitter nail polish protects against physical tampering

It doesn’t.

we need an SRTM and DRTM solution to protect against userspace threats such as downloading and running malware.

None of these can protect against it. That is not even remotely part of the threat model.

2 Likes

I didn’t say that.

Here is the quote of you saying you don’t need Heads if you use glitter nail polish. I don’t know how to properly quote, but you wrote that in this topic on the second page.

If you think that you will magically notice tampering with your eyes (checking your nail polish or whatever), then you don’t even need Heads to begin with.

They don’t need to do anything. They can just open up the laptop without a care about the glitter nail polish, because chances are you won’t notice it until you have been pwned. Seriously, who actually checks the nail polish every single time they boot a laptop?

Your argument is that glitter nail polish doesn’t do anything because no one will take pics and compare? I’m surprised to see such a comment here on the Qubes OS forum. I think people who use glitter nail polish actually do check. It takes maybe 5 minutes to check after it has become a routine. You are also wrong because it’s not necessary to do that every time you boot the computer. You should do it when you have left the computer alone and there is a chance an adversary could have had the opportunity to tamper with it.

It doesn’t.

You are correct that glitter nail polish doesn’t stop tampering. I just thought everyone would understand I meant it would detect the tampering. Then you know you can’t trust the laptop anymore, and you avoid getting hacked.

3 Likes

Use this command anywhere on Discourse to start the beginner tutorial:

@discobot start tutorial

For convenience, I have provided the main quotes you were referencing:

3 Likes

Thanks alzer89. Ya, I thought disabling the TPM would void any Evil Maid attacks. I am beginning to understand better now. I am currently running Qubes 4.2.3 on a 2023 Acer Predator laptop that originally came with Windows 11 on it. I had to disable the TPM just to erase/format over it and install Qubes. I do have my BIOS password-protected, but I am starting to understand that’s not the best. On the flip side, I am not taking my laptop out of my apartment anytime soon, so I have some time to try to figure out how to do some of the things mentioned here. Thanks.

3 Likes

The TPM was basically doing what it was designed to do. It saw the Qubes OS installation as “tampering”. You can solve this by clearing the TPM.

It’s neither good nor bad. It’s just “another tool in your arsenal” that has its use cases :slight_smile:


Excellent. Keep expanding your knowledge. Well done!

2 Likes

People who value their security. Also, not every reboot, but every time you’ve left it unattended in an untrusted place for a somewhat long time, which may be rare for some people.
Also, even if you’re lazy and do it “too late”, the attack is still eventually discovered, which is still very valuable.

3 Likes