I understand that the TPM is a passive chip. The point about checking timing does not require that the TPM is active. It can check the time as part of processing the requests it receives.
There is no timing difference to be checked if the firmware already knows which value to use to lie to the TPM.
The paper describes sending data to the TPM in multiple pieces, not all at once. You can send a value to "extend" a current value which the TPM will combine internally. So the TPM could be programmed to expect that a specific number of values are sent within a specific set of time windows, and that the final value which the TPM calculates based on the series of inputs matches the expected value.
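To make the "extend" mechanism concrete, here is a rough sketch of the standard SHA-256 PCR extend semantics. This is illustrative only, not a real TPM API, and the measurement names are made up:

```python
# Sketch of a TPM PCR "extend": each new value is hashed together with
# the running value, so the final digest depends on the exact sequence
# of inputs. Reordering or altering any one input changes the result.
import hashlib

def pcr_extend(current: bytes, value: bytes) -> bytes:
    # new_pcr = SHA-256(old_pcr || SHA-256(value))
    return hashlib.sha256(current + hashlib.sha256(value).digest()).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at power-on
for measurement in [b"bootblock", b"romstage", b"payload"]:
    pcr = pcr_extend(pcr, measurement)

# The same inputs in a different order yield a different final value:
alt = b"\x00" * 32
for measurement in [b"romstage", b"bootblock", b"payload"]:
    alt = pcr_extend(alt, measurement)

assert pcr != alt
assert len(pcr) == 32
```

This is why firmware that wants to lie to the TPM must replay the exact sequence of expected values, which is what the timing-window idea above would try to make harder.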
For the benefit of anyone reading this thread, I again want to emphasize that I am speaking only in theory. I do not know whether or not this logic is implemented in Librem hardware and I make no claim that it is or is not implemented.
This is a ridiculous statement. Everyone has their own threat model and this one can be good enough for some people. Also @marmarek disagrees with you. Also, AFAIK it's the only way to provide security while keeping user ownership of the hardware without blind trust in any corporation, including Purism.
Discussed here: https://forum.qubes-os.org/t/how-exactly-is-heads-pureboot-secure/23092. Key link from there: Notes on how to audit a maximized flashed firmware image · Issue #107 · linuxboot/heads-wiki · GitHub
This is a ridiculous statement. Everyone has their own threat model and this one can be good enough for some people.
It is objectively worse than Boot Guard, which is available on standard laptops. You haven't even explained how it is not security theatre, especially compared to Boot Guard.
Also @marmarek disagrees with you.
No, he doesn't. You can't defend against an attacker who has a flash programmer. The glitter business is pure luck, and you wouldn't notice the tampering until it's too late.
Also AFAIK it's the only way to provide security while keeping user ownership of the hardware without blind trust in any corporation, including Purism.
Except that's not how the world works, and entities like Intel are still part of the TCB. You just crippled security by making it possible for an attacker to flash malicious firmware and still have it boot normally.
What ownership? You can't even make the device not boot malicious firmware.
Discussed here: https://forum.qubes-os.org/t/how-exactly-is-heads-pureboot-secure/23092. Key link from there: Notes on how to audit a maximized flashed firmware image · Issue #107 · linuxboot/heads-wiki · GitHub
Exactly, I started that thread. Nothing in it contradicts what I said. This is substantially worse than Boot Guard and cannot protect against tampering the way Boot Guard can. Did you even read what you linked?
I thought the glitter nail polish could create a unique pattern that would be difficult to duplicate.
The pattern could be examined by an app on my phone and compared to a previous image. Images could be routinely sent to another computer to verify that I am using the correct one.
Or is the image of glitter nail polish easy to duplicate?
Or is the compare program on the phone not accurate enough?
Or does the glitter nail polish have too low an entropy?
I would not be surprised if groups like the NSA could duplicate such a pattern. They can spend a lot of money to create new technologies that can accomplish nearly anything.
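As a back-of-the-envelope way to think about the entropy question: treat the glitter pattern as a fingerprint of independent random bits. The numbers below are purely illustrative assumptions; nothing here models a real glitter scan or phone app:

```python
# Toy model: if the pattern encodes n independent bits, guessing it
# succeeds with probability 2**-n, and an independent forgery differs
# from the original in about half of its bit positions.
import random

def hamming(a: int, b: int, bits: int) -> int:
    # Count bit positions where the two fingerprints differ.
    return bin((a ^ b) & ((1 << bits) - 1)).count("1")

bits = 256
original = random.getrandbits(bits)
forgery = random.getrandbits(bits)

assert hamming(original, original, bits) == 0   # the same pattern matches itself
distance = hamming(original, forgery, bits)     # unrelated patterns: ~bits/2 on average
print(distance)
```

The hard engineering problems are exactly the ones raised above: whether a real scan reliably extracts the same bits each time (comparison accuracy) and whether an adversary can physically reproduce the pattern rather than guess it.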
Sure, on a long enough timeline, the survival rate of anything drops to zero.
They probably also have access to the source code used by all the major laptop manufacturers, and can easily produce modified firmware for any major brand of laptop. Then they just need to get the manufacturer to sign it, if they don't have their own copy of the Boot Guard signing key.
The same people who can "easily" break into your house and flash a modified version of Heads can just as easily do the same thing to Boot Guard protected devices; the Boot Guard signing key is not a fortress.
Are you gonna check the glitter nail polish every time you boot your computer? Or are you gonna slip and ignore it from time to time?
Maybe just when the computer was outside my control.
But my question was: is it really a waste of effort, even if I checked it every time I was about to boot up?
EDIT: 7-11-2024
Tommy Tran raises a good point. I am not likely to do this test every time I power up the computer. But even if I only did the test once a day, I would have some means of detecting that something had gone wrong.
Let me say it another way: I don't trust any one method to make my computing perfectly secure.
But I want to do what I can to make it difficult for anyone else to interfere with my computer's security.
However, even if I felt the entropy in glitter nail polish was high enough to offer some means of verifying that the computer had not been opened, my next problem is whether the phone app that measures the glitter has itself been corrupted.
I am just speculating, but securing the PureBoot (Heads) TPM could maybe be improved like this, assuming the TPM is passive:
- Level +1: Purism would include in the firmware a user/personal passphrase, chosen when flashing the firmware and encrypted by the Librem Key. At boot, it would display the passphrase so that the user would become aware of tampering (a replaced firmware). If the firmware is replaced with a corrupted version, it would not display the correct passphrase. How could the corrupted firmware lie? The attack would be to read the firmware, extract the passphrase, include it at the right place in the corrupted image, then rebuild and flash the firmware. This would take time.
- Level +2: the passphrase could be stored in a separate chip on the hardware, encrypted by the Librem Key, maybe.
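The "Level +1" idea can be sketched as follows. This is a hypothetical toy model of the scheme described above, not how PureBoot actually works; `LIBREM_KEY_SECRET` and all names are invented, and a real design would use the TPM or security token rather than a key baked into firmware:

```python
# Toy "boot secret" scheme: the passphrase is bound to a hash of the
# firmware image, so a tampered image cannot display the right phrase
# without first extracting and re-embedding the secret.
import hashlib
import hmac

LIBREM_KEY_SECRET = b"secret-held-on-usb-token"  # hypothetical shared secret

def seal_passphrase(passphrase: str, firmware: bytes) -> bytes:
    # Derive a pad from HMAC(secret, SHA-256(firmware)) and XOR it in.
    tag = hmac.new(LIBREM_KEY_SECRET,
                   hashlib.sha256(firmware).digest(), hashlib.sha256).digest()
    padded = passphrase.encode().ljust(32, b"\x00")
    return bytes(a ^ b for a, b in zip(padded, tag))

def show_at_boot(sealed: bytes, firmware: bytes) -> str:
    # A modified image hashes differently, so the unsealed text is garbage.
    tag = hmac.new(LIBREM_KEY_SECRET,
                   hashlib.sha256(firmware).digest(), hashlib.sha256).digest()
    plain = bytes(a ^ b for a, b in zip(sealed, tag))
    return plain.rstrip(b"\x00").decode(errors="replace")

good = b"firmware v1 image"
sealed = seal_passphrase("my boot phrase", good)
assert show_at_boot(sealed, good) == "my boot phrase"
assert show_at_boot(sealed, b"tampered image") != "my boot phrase"
```

The weakness the post itself identifies survives in the sketch: an attacker who can read both the firmware and the secret can re-seal the passphrase into a malicious image, which is why binding the secret to hardware (the "Level +2" idea) matters.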
@TommyTran732 : Are you going to check the news and replace opsec by blind faith every time you boot your computer? Or are you gonna slip into denial from time to time, not considering that key leaks are not a question of "if", but "when" and "how" with OTP?
Just happened:
Current and past leaks:
Convenience vs security. Nothing else.
I updated the Heads conferences and lectures pages. PureBoot is a fork of Heads.
- Conferences and papers About | Heads - Wiki
- Article: About | Heads - Wiki
- Community: Community | Heads - Wiki
Are you going to check the news and replace opsec by blind faith every time you boot your computer? Or are you gonna slip into denial from time to time, not considering that key leaks are not a question of "if", but "when" and "how" with OTP?
Secure Boot is completely broken on 200+ models from 5 big device makers
I do follow the news, yes. Also, I have no clue what you are even trying to say here. This is a case of the UEFI Secure Boot Platform Key being leaked, not the Boot Guard keys.
Just enroll your own PK and it would solve the issue, idk what the big deal is. It's not permanently tied to the device.
To compromise a reasonable computer with Boot Guard, assuming there are no serious vulnerabilities in the firmware, an attacker needs to:
- Have physical access (specifically, they have to open up the laptop and flash to the EEPROM).
- Either do a downgrade attack or have a leaked key
If the attacker has physical access, they can just defeat Heads anyway, whereas with Boot Guard they need some sort of exploit to defeat PCR 0 when doing a downgrade attack, or a leaked key.
If I have a device with leaked Boot Guard keys and I am aware of it, I will just buy a different device. I also follow the good security practice of switching devices before they go EOL, so it's not like I am stuck with the same device/key forever.
If I have a device with leaked Boot Guard keys and I am not aware of it, well, it only goes down to about as insecure as Heads. I don't see how "keys sometimes get leaked" justifies using Heads over Boot Guard.
This thread is an echo chamber. I would recommend joining the OSFW Slack's security-discuss channel, where the experts discuss these things. Heads is attempting to change things but cannot do it alone. We are missing a RoT anchor, which cannot be Boot Guard, whatever you think.
You can read PKfail: Untrusted Platform Keys Undermine Secure Boot on UEFI Ecosystem and believe what you want, really.
I won't waste my time explaining things again and again; I gave up on that. At best you sound like an employee of Intel, trying to defend a rotten architecture whose trust anchors and planned obsolescence policies go against all the transparency the ecosystem actually needs to move forward.
Give a PoC showing measured boot and the bootblock is dead. You will get everyone's attention. Otherwise you just promote the status quo. Not sure it's helpful, but you do you.
You can read PKfail: Untrusted Platform Keys Undermine Secure Boot on UEFI Ecosystem and believe what you want, really.
So what? You didn't know the PK can be enrolled by the user? Or you didn't know changing the PK trips the PCRs?
A leaked PK is a legitimate issue, but you are pretending like the user can't just trivially enroll their own. Last I remember, I enrolled my own PK on my Linux laptops.
And no, the attackers can't just change the PK without me knowing. PCR 7 will trip and the encryption key will not get released.
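The PCR 7 argument can be illustrated with a toy model. This is not the TPM 2.0 API (real sealing uses authorization policies and measured UEFI variable events); it only shows why a swapped PK changes the measurement and blocks key release:

```python
# Toy model of sealing a disk-encryption key to PCR 7, which measures
# the Secure Boot configuration (PK, KEK, db, ...). Changing the PK
# changes the PCR value, so the sealed key is not released.
import hashlib

def extend(pcr: bytes, event: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

def pcr7(secure_boot_vars: list) -> bytes:
    pcr = b"\x00" * 32
    for var in secure_boot_vars:
        pcr = extend(pcr, var)
    return pcr

def unseal(sealed_key: bytes, policy_pcr: bytes, current_pcr: bytes):
    # A real TPM releases the key only when the current PCR matches
    # the value the key was sealed against.
    return sealed_key if current_pcr == policy_pcr else None

policy = pcr7([b"PK=user-enrolled", b"KEK=vendor", b"db=vendor"])

# Unmodified configuration: the key is released.
assert unseal(b"disk-key", policy,
              pcr7([b"PK=user-enrolled", b"KEK=vendor", b"db=vendor"])) == b"disk-key"
# Attacker swaps the PK: PCR 7 differs, nothing is released.
assert unseal(b"disk-key", policy,
              pcr7([b"PK=attacker", b"KEK=vendor", b"db=vendor"])) is None
```

The user then sees a failure to unlock instead of silently booting under an attacker-controlled Secure Boot configuration.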
I won't waste my time explaining things again and again; I gave up on that. At best you sound like an employee of Intel, trying to defend a rotten architecture whose trust anchors and planned obsolescence policies go against all the transparency the ecosystem actually needs to move forward.
No man. I just hate it when people advocate for stuff that is objectively worse than the status quo, then pretend like it's secure in order to sell overpriced products. Pretty unethical, don't you think?
trying to defend a rotten architecture with trust anchors
If you legitimately think Heads is somehow better than Boot Guard, then leaked Boot Guard keys would be amazing, wouldn't they? Now you can sign your own firmware with their keys.
Give a PoC showing measured boot and the bootblock is dead. You will get everyone's attention. Otherwise you just promote the status quo. Not sure it's helpful, but you do you.
So is this the classic "my logic is obviously broken but I will pretend like it's not an issue until you write a PoC"?
You just conveniently ignored the main point of Heads, which is transparency, verifiability and openness. Maybe Qubes isn't for you, since you do not value that?
This is only obvious for you and nobody else. A PoC would make it obvious for other people, too, if it was your goal.
You just conveniently ignored the main point of Heads, which is transparency, verifiability and openness.
So what? It's less secure than Boot Guard, so don't sell it as a more secure product for thousands of dollars. I wouldn't have that big of a problem with Purism if they didn't claim it's "high security" and "for people at risk". That just crosses an ethical line that really shouldn't be crossed.
Maybe Qubes isn't for you, since you do not value that?
Since when did you get to decide what I value and why I use a particular operating system?
Let me tell you what... I value maintainability and security above all else. I daily drove Qubes in the past because it was the most secure system that I could reasonably use and maintain. Nowadays I mainly use a different setup with better security properties for what I need, and Qubes is mostly for anonymously browsing the internet. This is what works for my threat model.
I care plenty about "transparency, verifiability and openness", contrary to your characterization. But that comes after security. I make plenty of open source stuff (permissively licensed, btw) and contribute to various projects, but I don't go around smearing other people because their stuff is proprietary, or pretending that my stuff is more secure.
This is only obvious for you and nobody else.
I explained the immutable RoT stuff countless times. I am fairly sure other people understand that if you don't have an immutable RoT, an attacker can just replace your RoT and compromise you from there.
This completely depends on your threat model. Please do not force your own threat model on other people. I do not trust non-verifiable software written by huge corporations. I avoid it as much as possible (which means not always!).
You have the right to follow a different threat model, but you do not have a right to claim that companies offering me a transparent system fitting my threat model "cross an ethical line that really shouldn't be crossed".