I already use coreboot, and I like the idea of Heads, but I don’t completely understand the whole GPG thing. Does your disk encryption end up depending on GPG’s asymmetric encryption? Does it make you more vulnerable to having your disk imaged and broken with quantum computing in the future? Also, does it even make sense to use Heads if you have other systems with the same data?
It’s more for signing your boot files and kernel, so you can see whether they have changed since you last booted.
We don’t know enough about what quantum computing will turn into yet, but the choice of scheme likely won’t make much difference to how vulnerable conventional encryption is to it. If quantum computing gets there, it will likely break conventional encryption regardless. Time will likely remain the defining factor.
Either that or saving any kind of file will take at least ten minutes, because we’ll be encrypting it with 2^256-bit encryption keys
One of Heads’ main selling points is being able to verify that your machine is exactly the way you left it, by hashing and signing every aspect of your machine before you leave it unattended. It does this with an awesome mathematical formula, and it saves the result onto something external to the machine (usually a USB device, but it can also be a network device).
Then, when you come back to it, you run the same mathematical formula, and if you get a different result, then your machine is not the way it was when you left it.
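For a rough idea of what that looks like in practice, here is a minimal sketch of the “measure, store externally, re-check later” concept using ordinary tools. This is not Heads’ actual code, and the paths are just examples:

```sh
# Not Heads' actual scripts -- just the concept: measure, store externally, re-check.
find /boot -type f -exec sha256sum {} + > /media/usb/digests.txt   # before leaving the machine
# ...later, when you come back:
sha256sum -c /media/usb/digests.txt   # any "FAILED" line means a file changed while you were away
```

Heads does the real thing with TPM measurements and detached GPG signatures rather than plain hash files, as explained further down the thread.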
Heads does this as soon as you turn the machine on. This is important because (assuming you trust it) it is the gatekeeper to your machine. Heads starts first, and then everything else has to access the machine through Heads. This is important when, for example, a user space program (e.g., “System Information” or lspci) probes hardware. It has to ask Heads for that information (via the software stack).
Programs only do what they’re told to do, including, if they’ve been told to, lying to the end user.
So if any program in that stack has been told to lie to you and say “your hardware’s all good!”, well……
Can’t it use the TPM? I’m not really enthusiastic about plugging in USB devices at boot time, and networking at boot time sounds like bad news too. That’s one of the reasons I want to replace my UEFI: it’s another bloated OS that tries to do what it wasn’t supposed to do.
If anyone has had successful experiences flashing Heads and running Qubes, please do share them.
If I understand correctly, the TPM stores the disk encryption key, which is encrypted with the user’s disk unlock password. Hope @Insurgo or others will correct me if I’m wrong, but my understanding is that GPG’s asymmetric encryption is used in Heads’ integrity verification (i.e. firmware attestation, if I have the terminology right) but not directly in Heads’ release of the disk encryption key (I think that uses symmetric encryption, but I’m not certain).
@alzer89 I’d appreciate it if you could provide more info about external saves to things other than the TPM. I’d really like to know more.
I stuck with legacy Heads ROMs until it became clear that maximized builds were the most tested, for exactly this reason. Networking can be convenient, especially for TOTP time drift, but I’ve also wondered about it from a security perspective.
Not trying to be snarky, but it probably depends on how you protect that data elsewhere.
Wish I had something to contribute to your quantum computing question, but can only speculate and agree with alzer89
What GPG is used for: detached signatures of /boot digests. Nothing to do with the TPM Disk Encryption Key
True. GPG is used to verify detached signatures of digests created for /boot content.
That means the user needs to connect the USB Security dongle to sign /boot content after that content has legitimately changed (dom0 update? reboot right away), relying on a private key that resides on the USB Security dongle and should never leave it.
Detached signatures are used to verify both authenticity and integrity. This works because the user’s public key is fused inside the firmware, and that public key is itself covered by measured boot: it is actually the first thing measured after coreboot has reported its own measurements and booted its Linux coreboot payload. More info: Keys and passwords in Heads | Heads - Wiki
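To make “detached signature of /boot digests” concrete, here is a hedged sketch using plain gpg and sha256sum. The file names are illustrative, not necessarily the ones Heads uses; the real scripts live in the Heads tree:

```sh
# Illustrative only; Heads' own scripts and file names differ.
find /boot -type f -exec sha256sum {} + > /boot/digests.txt
gpg --detach-sign /boot/digests.txt          # signed with the private key on the USB Security dongle
# At boot, Heads then does the equivalent of:
gpg --verify /boot/digests.txt.sig /boot/digests.txt \
  && sha256sum -c /boot/digests.txt          # authenticity first, then integrity of each file
```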
Maybe this is the source of the confusion here? There were discussions before dom0 received Fedora 32 support. I have not personally revisited the concept, since I am not totally convinced that approach is equivalent or superior to the TPM Disk Encryption Key support offered by Heads. But the discussion could be reopened under Heads issues to bring it back into focus, although from what I read here, people seem to be more against it than for it, at least here.
Heads offers measured boot, not verified boot
The main reason Heads went with measured boot, as opposed to verified boot, is that going the verified boot path requires an OEM/ODM to sign the firmware image, which goes against the principles of user freedom. A user could not (at least not easily) sign the firmware themselves. A Continuous Integration system could not build such images and sign them without somehow exposing the private key used to sign them. Basically, verified boot images are useful for projects building releases where trust is placed in the organization that builds and signs the firmware images with its keys.
Also, going the verified boot route would require duplicating some of the precious flash space, which is already a struggle when trying to pack in the tools Heads actually needs to accomplish its work.
Verified boot normally comes with an RO+RW copy of the firmware. It could be nice to have a write-protected RO recovery environment that contains, for example, flashrom and busybox, but that alone would still consume significant space, would not fulfill the expectations of a vboot environment, and would still require an external authority to sign the ROM images.
So with measured boot, very early at boot, coreboot measures itself from the bootblock (the first phase of coreboot) up to the payload, as described under Keys and passwords in Heads | Heads - Wiki (currently under PCR2), and then the Heads payload is launched and measures things of its own into other PCRs. Those PCRs are then used to “seal/unseal” secrets. Under Heads, those secrets are paired with external devices. TPMTOTP generates a QR code to be scanned by a smartphone 2FA app, so that both devices produce a TOTP code that changes on the laptop and the smartphone app every 30 seconds. When an HOTP USB Security dongle is used, that secret is used as a challenge to “remote attest” against the USB Security dongle, making the dongle flash green/red as a result of the challenge, while Heads shows HOTP: OK/Error messages on screen accordingly.
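For the TOTP side of TPMTOTP, the idea is simply that both devices hold the same secret and derive a 6-digit code from it and the current time. A minimal sketch with oathtool follows; the secret below is a made-up example, and Heads of course unseals the real one from the TPM rather than keeping it in a variable:

```sh
# Minimal TOTP sketch (oathtool from oath-toolkit); the secret is a made-up example.
SECRET="JBSWY3DPEHPK3PXP"        # base32 secret; the same value the QR code carries to the phone
oathtool --totp -b "$SECRET"     # prints the current 6-digit code; must match the phone's app
```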
TPM additional LUKS Disk Encryption Key keyslot
Additionally, Heads permits using the TPM’s internal memory to “seal” another secret: an additional LUKS Disk Encryption Key. This LUKS encryption key held in TPM NVRAM is actually an additional LUKS header unlock passphrase. That TPM Disk Encryption Key is sealed in the TPM against the firmware measurements, the Heads GPG keyring, the GPG trustdb and Heads configuration overrides injected in the firmware, the kernel modules loaded by Heads, the Recovery Shell access state (no Recovery Shell involved on the boot path) and, finally, the measurement of the LUKS header after the TPM Disk Encryption Key has been added into an additional LUKS keyslot. You can see it in action, with related documentation details, under Keys and passwords in Heads | Heads - Wiki
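On the LUKS side, what gets sealed boils down to an extra keyslot holding a long generated passphrase. A rough, hedged sketch (device name and paths are placeholders, and how Heads actually generates its secret may differ):

```sh
# Placeholder device/paths; Heads' scripts differ, this only shows the LUKS side of the idea.
dd if=/dev/urandom bs=192 count=1 2>/dev/null | base64 -w0 > /tmp/secret/tpm-disk-key   # 192 random bytes -> 256 base64 characters
cryptsetup luksAddKey /dev/sda2 /tmp/secret/tpm-disk-key   # prompts for an existing passphrase first
# The generated key material is then sealed into TPM NVRAM against the measurements listed above,
# and the LUKS header is re-measured afterwards.
```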
TPM offers efficient rate limiting while generating a 256-character password for the additional LUKS keyslot
I’m not sure from which angle to answer that question other than pointing to the following documented entry: Keys and passwords in Heads | Heads - Wiki
So if we go back to what a LUKS Disk Recovery Key is (Keys and passwords in Heads | Heads - Wiki), which is basically a LUKS keyslot key derived from the passphrase the user chose at install, the user is protected on the system itself by rate limiting: dracut/plymouth and the other prompts used to boot the OS permit a limited number of entries before failing to boot the system, requiring a reboot to attempt passphrase entry again. Then, LUKS (v1 vs v2) has different key-derivation mechanisms to rate limit decryption of the container. When the disk is attached to another computer, bruteforcing is easier, while each passphrase attempt still has to actually be tried against the volume. There are other attacks against LUKS, as against everything else, which try to bypass the decryption requirements by attacking the header itself, but LUKS has evolved to limit success there as well.

What Heads offers here is that, by setting a TPM Disk Encryption Key passphrase, the TPM itself will rate limit and block attempts to have the matching LUKS header key passphrase released, and it won’t release that passphrase unless the firmware, kernel modules, etc. are in the expected state to even accept a user-provided TPM Disk Encryption Key passphrase, which then releases the LUKS disk encryption key passphrase to the OS in the form of an additional cpio passed to the OS in memory. Basically, the passphrase stored for that LUKS keyslot is 256 characters long, and it won’t be released if Heads went into the Recovery Shell. This effectively lets the user benefit from the TPM’s passphrase rate limiting, and also gives the user more confidence that the environment in which they are typing the disk decryption passphrase is safer than the prompt presented by the OS itself (Heads verifies the detached signed files against the user’s public key prior to prompting for the TPM Disk Encryption Key passphrase).
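If you want to see the on-disk side of that rate limiting yourself, the LUKS header shows the keyslots in use and the PBKDF cost that makes each offline guess expensive. The device name below is a placeholder:

```sh
# Placeholder device; shows the keyslots in use and the PBKDF parameters (iteration count / argon2 cost)
cryptsetup luksDump /dev/sda2
```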
Addressing what Heads could do vs. what it actually does today: @alzer89, this is a bit false. Heads doesn’t validate anything other than pre-boot states. That is: coreboot stage states, Heads states, the LUKS header. The firmware is measured, the Heads environment is measured, then the /boot content is verified against its detached signature (this is where GPG is used with the USB Security dongle: the user signs kernel/xen/initrd/grub configs and everything else under /boot, which is verified against the user’s GPG public key injected into, and verified within, the firmware). Then, if a TPM Disk Encryption Key has been added, Heads will verify all of the above plus the LUKS encryption header, make sure that the loaded kernel modules are the same as when that TPM-added Disk Encryption Key was sealed, and, if and only if all of the above is good and Heads has not gone through the Recovery Shell, present the following to the user: Keys and passwords in Heads | Heads - Wiki
I’m correcting the facts, because Heads is not aware of the content of Qubes OS (the partition content beyond the LUKS header itself and the /boot content) outside of what is exposed and detached-signed under /boot. It is true that Heads “remote attests”, but that remote attestation is against a smartphone (through TPMTOTP QR code scanning and manual verification of TOTP codes at each boot) or through HOTP with a Librem Key/Nitrokey Pro/Nitrokey Storage, which were made to interact with Heads through a developed interface/application.
That “awesome mathematical formula” is basically a chain of hash measurements applied on top of each other (the TPM “extend” operation). Simple but effective.
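More precisely, each extend replaces the PCR value with the hash of the old value concatenated with the new measurement. A little shell emulation, just to show the math (TPM 1.2 style with SHA-1; TPM 2.0 banks do the same with SHA-256; the files measured are only examples):

```sh
# Emulates a TPM 1.2 PCR extend in shell, only to illustrate the math Heads relies on.
# new_PCR = SHA1( old_PCR || SHA1(measured component) )
pcr="0000000000000000000000000000000000000000"    # PCRs start out at all zeros
extend() {
  local m
  m=$(sha1sum "$1" | cut -d' ' -f1)                                        # measure the component
  pcr=$(printf '%s%s' "$pcr" "$m" | xxd -r -p | sha1sum | cut -d' ' -f1)   # fold it into the PCR
}
extend /boot/vmlinuz
extend /boot/initrd.img
echo "resulting PCR value: $pcr"
```

Order matters, and there is no way to “un-extend”: the only way to get back to a known PCR value is to reboot and measure the same things in the same order.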
It is true, though, that Heads could apply “remote attestation” to servers. But that would require completely different policies than those currently active under Heads. Under Heads, “security policies” are the boot scripts selected per board configuration. All of the boards currently under GitHub use the gui-init security policy (a bash script), which is user-control oriented and doesn’t rely on the network by default, unless the user goes to the Recovery Shell.
network-init-recovery is a rudimentary network-based policy which, in its current state, syncs time against the local router if possible, then attempts to reach ntp.pool.org servers and starts dropbear (an SSH server) locally. This could be extended to integrate other work happening in that area, but what would be verified here? It might be interesting in the future, once reproducible builds are finally a thing again and PCR2 measurements can be validated against known values for releases from a central source. The same goes for hashes of Qubes under /boot, and this could go further. But then again, that would require synergies from all parties involved. Heads proposes user-controlled security mechanisms as of now. That unfortunately requires users to be a bit more aware of the state of their systems, and to make sure that the integrity of their own OSes is not compromised. Heads cannot currently do anything about that, at least until Aiming to a readonly dom0 · Issue #7839 · QubesOS/qubes-issues · GitHub is addressed, so that dom0 can be in a somewhat deterministic state from a user perspective, and then have its persistent user state offloaded so that dom0 is in a known and measurable state for everyone. But we are really not there yet. Consequently, that remote state would be limited to the firmware state (PCR2) and the /boot content. And then again, those /boot states, including grub.cfg customizations, would be difficult to manage since Qubes OS users customize everything, including their own initrd containing their own drivers upon dom0 updates!
Such a remote attestation server would most probably be of more interest to organizations that deploy machines for their users… and doing that… might attack user freedoms once again. User control, in my eyes, is a good thing, while having kernel/initrd/xen signed by Qubes OS would resolve a lot of that needed complexity, and PCR2 calculations could land under fwupd support once build reproducibility is fixed under Heads. Long answer, I know. And sorry about that.
@alzer89 Heads doesn’t offer the protections/measurements outlined in your post. And there is visible confusion here about how things work and how control happens, where and when. coreboot will initialize the hardware on supported platforms, tweak IO configurations, and make the hardware available at the lowest level possible (natively initialized where possible, e.g. no Intel FSP on blob-free coreboot environments), and then pass control to its payload.
Then Heads, being coreboot’s Linux payload, has the drivers needed to make those devices work under Heads. That covers the basics: graphics, input, the crypto kernel backend, filesystem drivers, and some drivers compiled as modules (USB, Ethernet, etc.) so that Heads can offer its on-demand functionality, including drive re-encryption (changing a disk encryption key doesn’t remove past keys, which could otherwise be restored to access encrypted content), GPG key generation, LUKS unlocking and even LVM manipulation when the Heads Recovery Shell needs to be used as… a recovery environment, which is the safest possible one since it is measured. It also permits booting detached-signed ISOs, validating both integrity and authenticity prior to booting directly into Qubes/Tails/Arch Linux.
Then Heads kexec’s into the installed OS/USB. The OS won’t remember what Heads did before it, outside of the TPM PCRs having changed and the cbmem content being accessible to the OS (since coreboot initialized that to work as well). Just as Heads cannot revert what was locked by coreboot, the OS cannot undo nor see what happened prior to launching it. That is exactly what kexec’ing is: one kernel space is swapped for another. When passing a Disk Encryption Key, Heads passes the OS an additional cpio that is made available to unlock the LUKS container, just as if a passphrase had been typed. You can see it in the last screenshot of that page: Step 3 - Configuring-Keys | Heads - Wiki, paying attention to the --module /tmp/secret/initrd.cpio passed to the Qubes OS kexec call prior to Heads booting Qubes with a TPM-released Disk Encryption Key.
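A hedged sketch of what that /tmp/secret/initrd.cpio amounts to (the paths and the layout inside the cpio are placeholders; the exact layout expected by the Qubes initramfs is defined by Heads and Qubes, not by this example):

```sh
# Placeholder paths; only shows how a secret can be packed into an extra initramfs segment.
mkdir -p /tmp/secret/cpio/secret
cp /tmp/secret/luks-key /tmp/secret/cpio/secret/
( cd /tmp/secret/cpio && find . | cpio -H newc -o ) > /tmp/secret/initrd.cpio
# Heads then hands this cpio to kexec alongside the real initrd, so the booted OS can read
# the key from memory and unlock LUKS without prompting again.
```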
Coreboot and Heads can and do tweak some platform initialization properties, but I think your analogy goes a little bit too far. For example, Heads recently decoupled the USB drivers loaded by default on HOTP-enabled board configurations from USB keyboard support (USB HID, specifically). The reasoning behind that is that we would not want rubber ducky attacks to be possible under Heads unless the user accepts the risks and selects a USB-keyboard-enabled board configuration, which means that USB keyboards are activated early at boot and can consequently fake keyboard keypresses. Heads does not currently verify that PCI devices haven’t changed. Not that it could not be done. Just that it isn’t currently done.
Heads USES the TPM. Again: Keys and passwords in Heads | Heads - Wiki
Networking is possible, since the Ethernet kernel drivers are compiled as modules and can be loaded if needed. As explained before, loading such kernel drivers would modify the measurements and prevent the TPM Disk Encryption Key from being unsealed when typing the Disk Encryption Key passphrase, resulting in a different error than the bad-passphrase one shown here (the bad-passphrase error indicates that the measurements were good): Keys and passwords in Heads | Heads - Wiki
The reasoning behind permitting manual network access is that users might want to go to the Recovery Shell and sync time from the network if they use TPMTOTP to remotely attest firmware integrity on their smartphone (a QR-code-transferred secret goes into a TOTP application on the smartphone, or the secret shown on screen is typed manually into KeePassXC or whatever on a second device), with the TOTP code changing every 30 seconds on both devices. TOTP is a nice feature, but it requires time to be precisely in sync between the phone and the laptop. Time sync is a problem that was never really solved other than by relying on an external time source, through the NTP protocol. That can be done automatically or through the network-init-recovery script from the Heads Recovery Shell. Here again, going to the Recovery Shell will invalidate measurements, and a reboot will go back into the standard boot path, but with time having been synchronized into the hardware clock.

Basically, there is no network enablement on the normal boot path, and UEFI cannot be compared with tested Linux kernel code that is loaded only manually by the user and only when he needs it. Heads doesn’t load anything that is not needed to boot the system, following what the user needs in his use case. For USB, unfortunately, USB is needed at each boot when HOTP is desired as a remote attestation mechanism. As said earlier in this post, USB HID is not loaded in the kernel though, which means there is no keyboard input possible outside of the PS/2 keyboard that is permitted. Note that there is no mouse driver loaded, no wifi driver even available, and that you can review the kernel configurations provided to build Heads, which are pretty minimalist, also to economize precious flash space for far more useful features. So Heads here is a configuration for a reasonably secure pre-boot environment.
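A hedged sketch of that “sync time, then the TOTP codes match again” step from the Recovery Shell. The tools and server below are generic stand-ins, not the exact commands used by Heads’ network-init-recovery script:

```sh
# Generic stand-ins; not the exact commands in network-init-recovery.
ntpdate -q pool.ntp.org      # query only: shows how far the local clock has drifted
ntpdate pool.ntp.org         # step the system clock from the NTP pool
hwclock --systohc            # persist the corrected time into the hardware clock
# After a reboot (back on the normal, measured boot path), the TOTP code should match the phone again.
```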
As replied above, Heads saves state (ephemerally) into TPM PCRs as measurements, and seals the result of those measurements into an NVRAM region. It also seals the TPM Disk Encryption Key into a separate NVRAM space, but this time with a passphrase that is needed to unseal its content, and only if the measurements are the same as when that secret was sealed.
Once again, I would appreciate criticism of the upstream documentation instead of having to clarify things across different forums and forum posts. Reading this, some comments seem to come from not having read the documentation, some from not having understood it, and some from dreaming of what Heads could do.
Hi @Insurgo, this is to continue the discussion in firmware backdoor
about the recommendation to replace some peripherals in our computer,
even though it is already flashed with Heads.
So, what’s the reason behind replacing the wifi card?
How about the bluetooth module, RAM, SSD, hard disk? Are those recommended to replace too?
Is there a recommended wifi card to improve security?
The part here relevant to Heads (coreboot, replacement of the proprietary firmware, or reflashing a custom firmware) is that proprietary firmware otherwise has a tendency to lock users into a subset of hardware components; on Lenovo, this takes the form of a whitelist of supported wifi cards.
On proprietary Lenovo firmware, you could not replace the wifi card. When using coreboot, you can.
Why Atheros? Because they are recommended and supported FOSS-friendly devices. ThinkPenguin and others reuse the same chipsets, which are RYF-certified, which is the path of choosing the lesser evil.
Outside of that, this is not related to Heads/coreboot.
I hope this is the right place to ask: I am considering buying a NitroPad since you don’t sell computers anymore.
They make the claim: “Thanks to the combination of the open source solutions Coreboot, Heads and Nitrokey USB hardware, you can verify that your laptop hardware has not been tampered with in transit”.
My current level of understanding is that this really means that Heads + Nitrokey will confirm that the BIOS firmware has not been tampered with, but that is all.
Is this correct? It will prevent BIOS tampering?
But other firmware, such as that in the NIC, is not measured by Heads, so it could in theory be tampered with?