Discussion on Purism

No, they don’t. The video is titled “Tampering Detection”, and nowhere in the blog post do they use “prevention”; they speak of evidence…

For the rest of your post, I will quickly address the points that, again, don’t make sense to me.

That is not what I said, and again you propose a really simplistic shortcut to prove a point. This is called a straw man argument, unfortunately.

Straw man

A straw man fallacy is the informal fallacy of refuting an argument different from the one actually under discussion, while not recognizing or acknowledging the distinction. One who engages in this fallacy is said to be “attacking a straw man”. Wikipedia

You could not, on any Heads platform supported today, perform this attack with an OS flashrom hook, simply because Heads doesn’t boot the system with iomem=relaxed unless you signed /boot options yourself to permit that explicitly: that is, you modified the grub options and then signed the /boot hash digest content prior to launching this attack. Ivy/Sandy/Haswell platforms? There you could only flash firmware from within Heads, thanks to PR0, and I will push the ecosystem to attempt to enforce the same thing post-Skylake, though that is a coreboot task (I don’t understand why it was not investigated more and enforced the same way as was possible pre-Skylake; I have already invested a lot of time in that, but it is not a priority for coreboot developers).
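
To illustrate the gate being described (a sketch, not Heads code): flashrom’s internal programmer needs raw physical-memory access, which the kernel’s STRICT_DEVMEM protection refuses unless iomem=relaxed was on the (signed) kernel command line. As a toy check:

```shell
# Illustrative only: with CONFIG_STRICT_DEVMEM (the kernel default),
# flashrom's internal programmer cannot map the flash controller unless
# the kernel command line contains iomem=relaxed.
can_flash_internally() {
    case "$1" in
        *iomem=relaxed*) echo "yes" ;;  # /dev/mem protection relaxed
        *)               echo "no"  ;;  # flashrom -p internal will fail
    esac
}

can_flash_internally "root=/dev/sda2 ro quiet"   # -> no
```

On a live system you would feed it `$(cat /proc/cmdline)`; the point is that the relaxed mode only exists if it was signed into /boot beforehand.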

This is what is bothering me @TommyTran732: along with interesting points, you also inject totally wrong technical information. Try this against any corebooted laptop; you simply cannot successfully do what you claim is possible, and such claims spread FUD and distrust among readers against the security premises offered by a properly configured coreboot. Even libreboot, which is not security oriented, would not permit firmware tampering in such a way, and is documented exactly the same. Look that up, and you will destroy that argument yourself. This is a waste of my time.

Then, let’s say you stupidly modified grub.cfg and detach-signed the /boot digest beforehand, permitting iomem=relaxed there. After flashing the firmware, unless the image was perfectly crafted with the points of my long past post taken into account (you haven’t addressed any counter-point from it…), you missed the point of the time and effort needed to tamper with not only the bootblock, but also flashrom, the other coreboot stages and the whole cryptography API of coreboot code, to successfully flash a completely valid ROM that reports all the right measurements to match exactly the TCPA/TPM event-log traces extending PCR2, then all the Heads parts, including hardcoding all the PCR details listed at Keys and passwords in Heads | Heads - Wiki. That’s the part I’m interested in, outside of the theoretical bootblock part. I would agree with you, if it were that simple, that it would be dishonest and unethical; but the point is that it isn’t that simple. The fact that you bring invalidated theories is the dangerous part here.

How did you get not only an unlocked computer, but root/sudo credentials, or even the TPM DUK key?! I don’t follow you here. This is exactly what defense in depth is; this is exactly what your referenced articles describe. Plus, Purism pushes users toward detach-signed LUKS authentication with a USB security dongle, which your code snippet wouldn’t get past, unless again there is no screensaver; so basically we are talking about a careless user who voluntarily deactivated all the security measures enforced by default nowadays.

Anyway, so you flash a firmware internally through a hook. Chances are you most probably didn’t craft it perfectly, for the aforementioned reasons. On reboot, it’s not just about an OS upgrade modifying /boot content: one single PCR not matching prevents the TPM from unsealing TOTP (a firmware-state sealed secret), which then cannot be verified against a phone, nor through reverse HOTP against supported HOTP USB security dongles (the same secret, reverse-challenged).
So what are we talking about here? Let’s say you have access to the enrolled USB security dongle. You would need to know both PINs to reseal HOTP and reseal the TPM Disk Unlock Key, which also requires the TPM ownership PIN, but also the Disk Recovery Key passphrase defined at OS install for LUKS container decryption?? I don’t follow you, unless you crafted a perfectly valid ROM with hardcoded TPM extend measurements in both the coreboot and Heads codebases to match the GPG keyring and trustdb at re-ownership time, and had access to an SPI content backup. You would still have to type the TPM-unsealed LUKS Disk Unlock Key passphrase!!! The TPM will fail to unseal both TPMTOTP and the TPM DUK. You could maybe get past TPMTOTP, but passing both, with everything tampered with perfectly, is barely a theoretical possibility as of today, and it doesn’t justify Boot Guard as an argument. A straw man argument, solely.

Fleet? You just justified the only Boot Guard use case I agree with, without counter-point. The end user is not even the owner of his machine here. An organization renting out hardware: totally justified, while leaked keys remain scary and a matter of blind faith. But we already discussed that.

Well, /boot/firmware_update contains my last reproducibly built and flashed firmware image, and I can confirm today that Heads CircleCI builds the same result as my local build, because reproducible builds are solved. That, or a USB thumb drive with a zip file of the last flashed version; if I left my laptop behind (I never do that, do you? Too much depends on me, I cannot afford it), then internally re-flashing resolves the uncertainty. But I get it, you don’t trust flashrom under Heads… But then you don’t trust PR0 either, nor any of the past granted work, collaboration and review by security experts: Nlnet past funded work placeholder for Authenticated Heads project (2022-ongoing) · Issue #1741 · linuxboot/heads · GitHub. I cannot argue against that. It’s again just faith-related, just like you decide to trust Boot Guard instead. But I won’t fall into that rabbit hole again.

That was not the point. How are you bettering it?

And how do you trust that your vendor has not had leaked keys? Faith. That is my whole point. Anyway, that was my 2 cents. Can we stop looping?

OTP without the possibility of transfer of ownership was proved wrong. That, and the closed-source UEFI supply-chain mess, should be avoided at all cost; maybe less so if fwupd is supported, depending on track record. That’s it! My previous point on Dasharo is exactly the same. I just trust them more, but I will never know if they leaked keys. I choose to believe they have not, that’s all. And I would accept that risk if I had to enforce/maintain firmware security for a fleet, way more than with other manufacturers and the mess I’m aware of in the firmware supply chain of ODM/OEM vendors whose hardware is Boot Guarded today. They made that change (still using proprietary tools to do so…) as the answer to your need to delegate trust of the hardware RoT to someone, and they will answer that call with Novacustom (coreboot+Heads!). Again, this toolset is available only under NDA and such, which is yet another total nonsense, pay wall, etc., but that is out of scope here. We are talking about oligarchy vs user control. There is no possible common ground here. Either you delegate trust, or you do what is necessary to protect yourself. This is the world we live in. You can deny it or live with it. I choose the latter.

I’m using GrapheneOS, though, on a Pixel phone. I gave up on controlling that and delegate the same trust you are talking about, and I completely understand it. But a phone is more volatile and the trust model is different. I have a lot to say against Google, but also a lot of positives as well… I relatively trust that phone for lack of other choices; that’s another subject altogether. I need a phone. Replicant is not possible anymore with 3G having been phased out; the trust anchors were even weaker there, and the work that could have improved it was abandoned. I stopped worrying, but I have far fewer states to protect on that device than on the one I develop on, where this work is needed by countless users who depend on QubesOS for security and Whonix for anonymity (no, QubesOS is not privacy-focused, that’s another topic: Whonix is. You go in all directions with questionable claims). I need to be trusted, and I did everything to be so. I am now working toward shaking up that OTP mess nobody should rely on until transfer of ownership is possible, while understanding the need for fleets managed by a third party, where users don’t own anything in their devices anyway (NDAs and invasive contracts) and could be, and probably are, monitored in all their actions through corporate-enforced security policies and legal waivers. Different topics, once again.


No, they don’t. The video is titled “Tampering Detection”, and nowhere in the blog post do they use “prevention”; they speak of evidence…

You should watch the whole video. The claim is that if there is tampering, the user will notice it there, unlike on normal laptops.

simply because Heads doesn’t enforce booting the system with iomem=relaxed unless you signed /boot options yourself to permit that explicitly

Which can be done within the same hook. I don’t see what the difference is. Once the OS is compromised, you can inject whatever you want.

Here, if you want a more fancy version of it:

/etc/pacman.d/hooks/99-grub-tampering.hook

[Trigger]
Type = Package
Operation = Upgrade
Target = linux

[Action]
Description = Kapoot
When = PostTransaction
Exec = /usr/bin/kapoot.sh

/usr/bin/kapoot.sh

#!/bin/bash
# First kernel update: quietly add iomem=relaxed to the kernel command line
# so the next boot lifts STRICT_DEVMEM. Second kernel update: flash.
if [ -f /var/cache/kapoot ]; then
   flash_malicious_firmware
else
   sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="iomem=relaxed /g' /etc/default/grub
   # regenerate grub.cfg so the new parameter actually takes effect
   grub-mkconfig -o /boot/grub/grub.cfg
   touch /var/cache/kapoot
fi

exit 0

I don’t know why you are hammering on this. It’s a distinction without a difference. The logic is quite literally the same: the second time you do a kernel update, the firmware is compromised.

This isn’t the only way to do it, either. It is just a poor man’s implementation I can come up with in a minute. The compromised OS can literally just download a custom patched kernel instead of the distro’s kernel and boot that itself, without even needing to set anything in grub. I sure hope you are not going to tell me to make a PoC for that kernel just to prove a point that is obvious from the get-go.

This is what is bothering me @TommyTran732: along with interesting points, you also inject totally wrong technical information. Try this against any corebooted laptop; you simply cannot successfully do what you claim is possible, and such claims spread FUD and distrust among readers against the security premises offered by a properly configured coreboot. Even libreboot, which is not security oriented, would not permit firmware tampering in such a way, and is documented exactly the same. Look that up, and you will destroy that argument yourself. This is a waste of my time.

Seriously?

you missed the point of the time and effort needed to tamper with not only the bootblock, but also flashrom, the other coreboot stages and the whole cryptography API of coreboot code, to successfully flash a completely valid ROM that reports all the right measurements to match exactly the TCPA/TPM event-log traces extending PCR2, then all the Heads parts, including hardcoding all the PCR details listed at Keys and passwords in Heads | Heads - Wiki.

It doesn’t need to do any of this. It just needs to wait for a second kernel update and trigger the actual override.

The logic is literally as simple as “if internal flashing by the OS is possible, and there is no signature checking of what is being flashed, then there is nothing stopping malware from gaining persistence in the firmware”. You can add all the annoying kernel-param requirements you want; it means nothing.
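
Spelled out as a toy predicate (purely illustrative, not a real API):

```shell
# Illustrative predicate for the claim above: persistence is possible iff
# the OS can reach the flash AND nothing verifies what gets written/booted.
firmware_persistence_possible() {
    internal_flashing=$1   # "yes" if the OS can write SPI flash internally
    write_verification=$2  # "yes" if flashed firmware is signature-checked
    if [ "$internal_flashing" = yes ] && [ "$write_verification" = no ]; then
        echo yes
    else
        echo no
    fi
}

firmware_persistence_possible yes no    # unverified internal flash -> yes
firmware_persistence_possible yes yes   # verified firmware (Boot Guard) -> no
```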

Keys and passwords in Heads | Heads - Wiki

I will have a look at this again and see. Last I remember, this was not an option with PureBoot on my Librem.

What measurements does it take? If it is the same as the HOTP/TOTP secret then the attack vector is the same.

How did you get not only an unlocked computer, but root/sudo credentials, or even the TPM DUK key?!

I am not sure what you are talking about here. Are we protecting the OS against compromise, or the firmware against a compromised OS? The second attack I describe is purely for the second scenario.

I am specifically referring to this part:

One of the best ways for them to hide – and make sure they still have access between reboots — is compromising your OS’s kernel software, so that it filters out any requests to the system that might reveal the attacker’s software. Of course, you could still thwart an attacker by reinstalling or upgrading your operating system, even if you can’t see any evidence of an attack… but the attacker may also have compromised your BIOS (the first code the system runs) so that it re-infects your system after every reinstall, successfully hiding themselves even from a live OS — booted from USB. BIOS malware allows attackers to intercept, and capture, your disk encryption password as you type it in.

They are saying OS compromise can be dealt with by reinstalling, but “BIOS malware” sticks around. Well, I just described one of the ways it gains persistence from the OS. I have no clue how to do this against Boot Guard/AMD PSB without a serious exploit, but against this, the hook would just work.


I am not impressed that Purism has completely ignored the following issue for their Librem 5.

Also ignored in Purism forums.


Isn’t ARM TrustZone the same kind of thing as Software Guard Extensions (SGX), which isn’t the same as the Management Engine?


Chances are you most probably didn’t craft it perfectly, for the aforementioned reasons. On reboot, it’s not just about an OS upgrade modifying /boot content: one single PCR not matching prevents the TPM from unsealing TOTP (a firmware-state sealed secret), which then cannot be verified against a phone, nor through reverse HOTP against supported HOTP USB security dongles (the same secret, reverse-challenged). So what are we talking about here?

The point here is that the malicious firmware does not need to falsify the measurements whatsoever. The bootblock lying about measurements is only relevant to the first attack I described, the physical one.

Why do you think I specifically made the hook trigger on a kernel update?

Because the user expects warnings about measurements not matching due to the kernel update, they will just dismiss them as no cause for concern and sign/accept everything. The attacker can abuse this to chain internal firmware flashing along with the update, and the user wouldn’t know any better. The hook exists literally to chain the internal firmware flashing with a kernel update, to avoid suspicion when the red warnings appear.

Bugs where the TOTP just randomly gets out of sync on innocent kernel updates have happened several times before. Here are a few:

TPM Issue - Failed to unseal TPM - Hardware / Librem (Other) - Purism community

I could even add a silly hwclock --set --date to the hook to have the TOTP fail every few kernel updates, just like with the old bugs, delaying the flash of the actual malicious firmware until the user has gotten used to it and just blindly goes through the whole re-ownership process out of fatigue.
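
A hedged sketch of that clock-skew variant (skew value illustrative; the real hook would need root and a writable RTC, so the hwclock command is only printed here):

```shell
#!/bin/bash
# Hypothetical extension of the hook above: skew the hardware clock a few
# minutes so the sealed TOTP code appears out of sync "innocently",
# conditioning the user to re-seal on every update.
SKEW_SECONDS=180
now=$(date -u +%s)
skewed=$((now - SKEW_SECONDS))
skewed_date=$(date -u -d "@$skewed" '+%Y-%m-%d %H:%M:%S')
# The real hook would run (requires root and a writable RTC):
echo "hwclock --set --utc --date \"$skewed_date\""
```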

Straw man argument solely.

You are straw-manning my argument. Who said anything about passing TOTP or HOTP with the second attack? It doesn’t require it. I have said this multiple times.

That was not the point. How are you bettering it?

I tell people when what they are doing is security theatre. You know what? Since you brought this up, I will literally just write a script, when I have free time, to set up a poor man’s Secure Boot, which will still provide much better protection than whatever Heads does:

  • Protection against /boot tampering
  • Protection against firmware settings tampering (just pinning the encryption key against PCR 1)
  • Protection against firmware downgrade attacks (Boot Guard and PCR 0)
  • Protection against a compromised OS flashing malicious firmware (I don’t even need to do anything. Boot Guard already does the job. What doesn’t do the job is PureBoot/Heads).
  • Custom Secure Boot Key enrollment

The tooling is already provided by systemd. All it takes is a few hooks to put it all together. It is much easier and much more affordable for normal users on commodity hardware, with much stronger protection than the overpriced, insecure laptops.
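
A sketch of what this could look like with stock systemd tooling (assuming systemd >= 253 for ukify, sbsigntools, and self-enrolled Secure Boot keys; all paths and key locations are illustrative, and the commands are only printed unless DO_IT=1):

```shell
#!/bin/sh
# Hedged sketch, not a finished installer. With DO_IT unset the commands
# are printed as a dry run rather than executed.
run() { echo "+ $*"; if [ "${DO_IT:-0}" = 1 ]; then "$@"; fi; }

# 1. Bundle kernel + initrd + cmdline into one Unified Kernel Image, so
#    /boot tampering breaks a signature instead of going unnoticed.
run ukify build --linux=/boot/vmlinuz-linux \
    --initrd=/boot/initramfs-linux.img \
    --cmdline="root=/dev/nvme0n1p2 ro quiet" \
    --output=/efi/EFI/Linux/linux.efi

# 2. Sign the UKI with your own enrolled db key (sbsigntools).
run sbsign --key /etc/keys/db.key --cert /etc/keys/db.crt \
    --output /efi/EFI/Linux/linux.efi /efi/EFI/Linux/linux.efi

# 3. Bind the LUKS key to PCR 0 (firmware code), PCR 1 (firmware settings)
#    and PCR 7 (Secure Boot policy), so tampering blocks auto-unlock.
run systemd-cryptenroll /dev/nvme0n1p2 --tpm2-device=auto --tpm2-pcrs=0+1+7
```

Hooked into the package manager the same way as the examples above, this re-builds and re-signs the UKI on every kernel update.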

The end user is not even the owner of his machine here. An organization renting out hardware: totally justified, while leaked keys remain scary and a matter of blind faith.

What are you even talking about? I own all of my hardware, and I operate a rack of servers just for my own needs. Do you think only orgs have a fleet of hardware?

You go in all directions with questionable claims

What’s questionable? I said that’s what I use Qubes for, because I have another setup that better serves my threat model.

NDA and invading contracts

There is no NDA/contract with the hardware I own.


To recap:

  • Attack 1: Malicious firmware flashed by a physical attacker. The attacker can take a dump of the firmware with a programmer if they want. The bootblock can lie about measurements. Yes, it takes time and effort, but that is about it. When Boot Guard is involved, this type of attack is only possible with an exploit or with leaked keys.

  • Attack 2: Malware inside the OS gaining persistence in the firmware and sticking around across OS reinstalls. Malicious internal firmware flashing can happen alongside another legitimate operation (like a kernel, grub, or initramfs update) to avoid suspicion. Absolutely no “lying about measurements” is required: the user expects the measurements not to match because of the legitimate operation and will dismiss the warning. The user doesn’t need to be “dumb”; they don’t need to go out of their way to set iomem=relaxed or anything like that, because the malware can just do that for them. The attack is trivial, requires minimal complexity, and can be done by something as crude as a package-manager hook or a systemd path unit. This type of attack cannot be done against Boot Guard without an exploit.


Are you saying that Secure Boot is bug-free? See here.


That’s not what’s being said. And if you follow the good practice of enrolling your own PK, you wouldn’t be vulnerable to PK leaks…


That might all be true, and it might all be nothing and perfectly secure; however, some thought and acknowledgement that this could be an issue would be appropriate.

However, at Introduction to Trusted Execution Environment: ARM's TrustZone - Quarkslab's blog, I found the following image:


You can see there are:

  • A) the Embedded OS, and
  • B) the Secure OS.

The question arises: what runs in the TEE by default? Nothing? If so, that’s great. But did anyone check?

Quote:

Vulnerabilities in TrustZone itself

On the other hand, the development of an entire operating system is a daunting task that often involves many bugs. Operating systems running TrustZone are no exception to the rule. A development error leading to memory corruption in the Secure World kernel causes total system corruption in Secure World, making its security obsolete. It also totally compromises the Normal World which is accessible from the Secure World.

Finally, compromising the TEE OS can be done before it is even executed if a vulnerability is found in the secure boot chain, as has been the case several times

It also seems that TrustZone is a really nice place for malware to persist, even after a complete re-installation of the “normal” (non-TrustZone) operating system.

It might still be fine. I am just saying that this certainly warrants acknowledgement, research, and consideration. Completely ignoring the topic of TrustZone seems wrong.


I don’t think NXP provides a secure OS, but I could be wrong; the datasheet only mentions the TrustZone Address Space Controller.

I don’t think you want a phone without a TEE; having it as part of the main CPU just seems like a weaker version of a dedicated chip like the Google Titan M2.


No. I would appreciate it if you tested Heads instead of just reporting beliefs…

This is the qemu-coreboot-whiptail-tpm1 output on boot, with TPMTOTP (no HOTP here; this part is still not emulated).

[    4.376412] DEBUG: Debug output enabled from board CONFIG_DEBUG_OUTPUT=y option (/etc/config)
[    4.394445] TRACE: Under init
[    4.442100] DEBUG: Applying panic_on_oom setting to sysctl
[    4.560708] TRACE: /bin/tpmr(32): main
[    4.657700] TRACE: /bin/cbfs-init(5): main
[    4.776243] DEBUG: Extending TPM PCR 7 with /.gnupg/pubring.kbx
[    4.856752] TRACE: /bin/tpmr(32): main
[    4.911567] DEBUG: Direct translation from tpmr to tpm1 call
[    4.962287] DEBUG: exec tpm extend -ix 7 -if /tmp/cbfs.94
[    5.169580] DEBUG: Extending TPM PCR 7 with /.gnupg/trustdb.gpg
[    5.256487] TRACE: /bin/tpmr(32): main
[    5.326280] DEBUG: Direct translation from tpmr to tpm1 call
[    5.379846] DEBUG: exec tpm extend -ix 7 -if /tmp/cbfs.94
[    5.614361] TRACE: /bin/key-init(5): main
[    6.998357] TRACE: Under /etc/ash_functions:combine_configs
[    7.089798] TRACE: Under /etc/ash_functions:pause_recovery
!!! Hit enter to proceed to recovery shell !!!
[    7.357474] TRACE: /bin/setconsolefont.sh(6): main
[    7.415098] DEBUG: Board does not ship setfont, not checking console font
[    7.652789] TRACE: /bin/gui-init(645): main
[    7.688602] TRACE: /etc/functions(715): detect_boot_device
[    7.744983] TRACE: /etc/functions(682): mount_possible_boot_device
[    7.801285] TRACE: /etc/functions(642): is_gpt_bios_grub
[    7.901675] TRACE: /dev/vda1 is partition 1 of vda
[    8.039730] TRACE: /etc/functions(619): find_lvm_vg_name
[    8.205960] TRACE: Try mounting /dev/vda1 as /boot
[    8.269801] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
[    8.331458] TRACE: /bin/gui-init(319): clean_boot_check
[    8.447656] TRACE: /bin/gui-init(348): check_gpg_key
[    8.559142] TRACE: /bin/gui-init(185): update_totp
[    8.679132] TRACE: /bin/unseal-totp(8): main
[    8.781385] TRACE: /bin/tpmr(32): main
[    8.836697] TRACE: /bin/tpmr(599): tpm1_unseal
[    8.991293] DEBUG: Running at_exit handlers
[    9.029942] TRACE: /bin/tpmr(377): cleanup_shred
[    9.126974] TRACE: /bin/gui-init(254): update_hotp
[    9.164743] TRACE: /bin/gui-init(679): main
[    9.202049] TRACE: /bin/gui-init(395): show_main_menu
[   13.967078] TRACE: /bin/gui-init(614): attempt_default_boot
[   14.003165] TRACE: /bin/gui-init(23): mount_boot
[   14.053492] TRACE: /bin/gui-init(69): verify_global_hashes
[   14.082374] TRACE: /etc/functions(396): check_config
[   14.186498] TRACE: /etc/functions(587): verify_checksums
[   18.799817] TRACE: /etc/functions(500): print_tree
[   19.035544] TRACE: /bin/kexec-select-boot(8): main
[   19.087871] TRACE: /etc/functions(396): check_config
[   19.246365] TRACE: /bin/gpgv(5): main
[   19.439487] TRACE: /etc/functions(758): scan_boot_options
[   19.527789] DEBUG: kexec-parse-boot /boot /boot/grub/grub.cfg
[   19.678871] TRACE: /bin/kexec-parse-boot(5): main
[   19.747174] DEBUG: filedir= /boot/grub
[   19.808564] DEBUG: bootdir= /boot
[   19.868310] DEBUG: bootlen= 5
[   19.931705] DEBUG: appenddir= /grub
[   22.154550] DEBUG:  grub_entry : linux trimcmd prior of kernel/append parsing: linux /vmlinuz-6.1.0-21-amd64 root=UUID=8c44b114-b625-440b-a708-177a6b510152 ro console=ttyS0 console=tty systemd.zram=0 quiet
[   22.462921] DEBUG:  grub_entry: linux initrd= /initrd.img-6.1.0-21-amd64
[   23.246316] DEBUG:  grub_entry : linux trimcmd prior of kernel/append parsing: linux /vmlinuz-6.1.0-21-amd64 root=UUID=8c44b114-b625-440b-a708-177a6b510152 ro console=ttyS0 console=tty systemd.zram=0 quiet
[   23.556517] DEBUG:  grub_entry: linux initrd= /initrd.img-6.1.0-21-amd64
[   24.344211] DEBUG:  grub_entry : linux trimcmd prior of kernel/append parsing: linux /vmlinuz-6.1.0-21-amd64 root=UUID=8c44b114-b625-440b-a708-177a6b510152 ro single console=ttyS0 console=tty systemd.zram=0
[   24.665755] DEBUG:  grub_entry: linux initrd= /initrd.img-6.1.0-21-amd64
[   25.472082] DEBUG:  grub_entry : linux trimcmd prior of kernel/append parsing: linux /vmlinuz-6.1.0-18-amd64 root=UUID=8c44b114-b625-440b-a708-177a6b510152 ro console=ttyS0 console=tty systemd.zram=0 quiet
[   25.786961] DEBUG:  grub_entry: linux initrd= /initrd.img-6.1.0-18-amd64
[   26.584814] DEBUG:  grub_entry : linux trimcmd prior of kernel/append parsing: linux /vmlinuz-6.1.0-18-amd64 root=UUID=8c44b114-b625-440b-a708-177a6b510152 ro single console=ttyS0 console=tty systemd.zram=0
[   26.901034] DEBUG:  grub_entry: linux initrd= /initrd.img-6.1.0-18-amd64
[   27.068555] TRACE: /etc/functions(587): verify_checksums
[   30.969222] TRACE: /etc/functions(500): print_tree
[   31.120977] TRACE: /etc/functions(383): read_tpm_counter
[   31.232967] TRACE: /bin/tpmr(32): main
[   31.313545] DEBUG: Direct translation from tpmr to tpm1 call
[   31.372766] DEBUG: exec tpm counter_read -ix 0
[   33.471645] TRACE: /bin/kexec-boot(7): main
[   33.677742] DEBUG: kexectype= elf
[   33.739605] DEBUG: restval=
[   33.798215] DEBUG: filepath= /boot/vmlinuz-6.1.0-21-amd64
[   33.857492] DEBUG: kexeccmd= kexec -d -l /boot/vmlinuz-6.1.0-21-amd64
[   34.158796] TRACE: /bin/kexec-insert-key(6): main
[   34.263347] TRACE: /bin/qubes-measure-luks(6): main
[   34.309960] DEBUG: Arguments passed to qubes-measure-luks: /dev/vda3 /dev/vda5
[   34.368403] DEBUG: Storing LUKS header for /dev/vda3 into /tmp/lukshdr-_dev_vda3
[   34.685829] DEBUG: Storing LUKS header for /dev/vda5 into /tmp/lukshdr-_dev_vda5
[   34.983574] DEBUG: Hashing LUKS headers into /tmp/luksDump.txt
[   35.658812] DEBUG: Removing /tmp/lukshdr-*
[   35.739006] DEBUG: Extending TPM PCR 6 with hash of LUKS headers from /tmp/luksDump.txt
[   35.812643] TRACE: /bin/tpmr(32): main
[   35.867148] DEBUG: Direct translation from tpmr to tpm1 call
[   35.911928] DEBUG: exec tpm extend -ix 6 -if /tmp/luksDump.txt
[   36.162907] TRACE: /bin/kexec-unseal-key(13): main
[   36.213664] DEBUG: CONFIG_TPM: y
[   36.254813] DEBUG: CONFIG_TPM2_TOOLS:
[   36.296320] DEBUG: Show PCRs
[   36.465515] DEBUG: PCR-00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   36.489281] PCR-01: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   36.514936] PCR-02: 4C 82 95 9C AD EC 02 D7 AF 23 5D C6 F0 81 E2 6B D0 19 B7 A5
[   36.549862] PCR-03: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   36.572143] PCR-04: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   36.593810] PCR-05: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   36.623400] PCR-06: 79 99 36 2F FD 42 2C 10 3F 42 BC EE 04 62 32 73 32 4D 1D 72
[   36.647045] PCR-07: 67 F0 C6 B5 C8 A2 A5 6E FB D0 14 EA 2C 09 1C 6C B4 0B C1 86

At this point, Heads prompts the user to type his TPM Disk Unlock Key passphrase, where:

  • /boot content was not changed since the /boot hash digests were detach-signed: /boot content was verified against the digest and detached signature. The machine is in a state known to the user.
  • TOTP was unsealed early at boot, prior to [ 9.202049] TRACE: /bin/gui-init(395): show_main_menu; the GUI at that point permits booting the system, where no integrity validation has yet been done on /boot content. If the firmware had been tampered with here, the TOTP unsealing would fail: the secret is sealed/unsealed against PCR0-1-2-3-4-7 measurements (read initrd/bin/unseal-totp). This has nothing to do with /boot content, whose integrity and authenticity are validated against the public key fused into the SPI content. You can see traces of the measurements in the Heads output above, where the TPMTOTP unseal operation succeeded at [ 8.836697] TRACE: /bin/tpmr(599): tpm1_unseal and the TOTP code was shown to the user to check against his phone, and where HOTP would launch the default boot entry if the reverse HOTP challenge against the USB security dongle succeeded at that step.
  • Then comes launching the default boot entry, since we are at the TPM DUK prompt now. As said previously, the prompt gets this far because /boot integrity + authenticity were verified and the LUKS header backup matches what was detach-signed: the TPM DUK-related measurements + passphrase will then be checked. Note that if you drop to the recovery shell, or if any kernel module is loaded that was not loaded when the TPM DUK was sealed, the secret would not unseal. Let’s say I attempt a wrong passphrase:
[ 1174.142253] DEBUG: tpmr unseal 3 0,1,2,3,4,5,6,7 312 /tmp/secret/initrd/secret.key <hidden>
[ 1174.296379] TRACE: /bin/tpmr(32): main
[ 1174.344488] TRACE: /bin/tpmr(599): tpm1_unseal
[ 1174.501948] DEBUG: Running at_exit handlers
[ 1174.528497] TRACE: /bin/tpmr(377): cleanup_shred
[ 1174.605995] DEBUG: tpmr: exited with status 29
[ 1174.643426]  *** WARNING: Unable to unseal LUKS Disk Unlock Key from TPM ***

Let’s say I type the correct TPM DUK passphrase:

[ 1280.251212] DEBUG: tpmr unseal 3 0,1,2,3,4,5,6,7 312 /tmp/secret/initrd/secret.key <hidden>
[ 1280.475393] TRACE: /bin/tpmr(32): main
[ 1280.545405] TRACE: /bin/tpmr(599): tpm1_unseal
[ 1280.704137] DEBUG: Running at_exit handlers
[ 1280.747736] TRACE: /bin/tpmr(377): cleanup_shred
[ 1280.843792] DEBUG: Extending TPM PCR 4 to prevent further secret unsealing
[ 1280.961605] TRACE: /bin/tpmr(32): main
[ 1281.052794] DEBUG: Direct translation from tpmr to tpm1 call
[ 1281.125280] DEBUG: exec tpm extend -ix 4 -ic generic
[ 1283.242338] TRACE: /bin/kexec-boot(7): main
[ 1283.459100] DEBUG: kexectype= elf
[ 1283.527463] DEBUG: restval=
[ 1283.598185] DEBUG: filepath= /boot/vmlinuz-6.1.0-21-amd64
[ 1283.661199] DEBUG: kexeccmd= kexec -d -l /boot/vmlinuz-6.1.0-21-amd64
[ 1284.115952] DEBUG: eval kexec -d -l /boot/vmlinuz-6.1.0-21-amd64 --initrd=/tmp/secret/initrd.cpio --append="root=UUID=8c44b114-b625-440b-a708-177a6b510152 ro console=ttyS0 console=tty systemd.zram=0 console=ttyS0 console=tty systemd.zram=0 "
  • And then Heads passes initrd.cpio to the next OS through a kexec call, overriding crypttab to point to secret.key, so that the disk decryption passphrase is typed in the most reasonably trusted environment: one where the end user owns the keys, has sealed the firmware state and has detach-signed the /boot content:
[ 1412.542100] TRACE: /bin/tpmr(32): main
[ 1412.621136] kexec_core: Starting new kernel

Then let’s play your game here and unmix concepts a little…

  • So we take for granted that someone has access to the booted OS here, or access to the drive, to modify /boot content with the goal of flashing firmware with iomem=relaxed (which wouldn’t change a thing on Sandy/Ivy/Haswell platforms, because the platform lock (PR0) only permits Heads itself to flash firmware, which as said previously can be further safeguarded with authenticated Heads). So here, we modified grub.cfg during a previous access to the booted machine, and the flashrom hook succeeded in injecting the config override, trustdb and keyring as also said previously, while the firmware you talk about was perfectly crafted to contain exactly the same kernel modules (PCR5), keyring, trustdb and user config (PCR7). This is stretched thin, really: you are talking about a synchronized compromise of the OS needing two boots, one to modify grub.cfg and another running your flashrom hook, with you coming back and providing a perfectly crafted firmware image that passes the coreboot measured-boot PCR2 measurements sealed as part of TPMTOTP, with PCR7 covering the needed trustdb + keyring + config overrides inside the to-be-flashed firmware. In any case, this is what would happen on the next boot with the firmware perfectly tampered with (which you again oversimplify as being just bootblock-related, and which I keep repeating is not so trivial, but which you straw man to prove a point). This could be simulated just by upgrading my debian-12 qemu drive, where re-ownership has been applied with a CanoKey virtual OpenPGP smartcard and an enrolled vTPM; you could play with that to prove your point technically, and everyone would listen. Note that I agree with you (other than your oversimplification that only the bootblock needs to be tampered with):

That would fatigue TOTP-only users, yes, not HOTP users. Where time is off, there are helpers to sync NTP over the network or manually; again, if the computer is used daily, the OS deals with NTP sync and time being off should not happen often, except if the laptop is unused for a long period of time. Again: HOTP is unaffected and would go to default boot if /boot was not updated with a recent OS update. QubesOS requires the user to update dom0 manually, and Heads users are pretty used to only updating dom0 when they can reboot and sign /boot right after, which also validates that their USB Security dongle is still under their control, because they validate they can sign /boot content. It is not recommended to update firmware and OS at the same time, and FWUPD is not yet pushed forward, but we should expect those firmware files to land under /boot as in my use case talked about previously, which is part of hash digests and also detached signed, so it would be part of the red error screen above. Agreed that changes in those files should be verified, and there are issues opened to discuss possible implementations, but there is no consensus there yet. Once best practices are decided, it will be implemented; if the OS decides to implement signing hooks, then Heads will just apply/adapt that verification scheme as well.
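To make the time-sync point concrete: a TOTP code is just HOTP computed over the current 30-second window of the system clock, so a skewed RTC produces a mismatch even when nothing was tampered with. A minimal sketch using the RFC 6238 test secret (this is generic TOTP, not the Heads/TPMTOTP implementation):

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    # RFC 6238: the "counter" is the number of 30-second steps since the epoch
    return hotp(secret, unix_time // step)

secret = b"12345678901234567890"        # RFC 6238 test secret
print(totp(secret, 59))                  # -> 287082 (RFC 6238 test vector, 6 digits)
print(totp(secret, 59 + 3600))           # clock off by an hour: different code
```

Because only the clock enters the computation, resyncing time is enough to make a legitimately sealed TOTP secret verify again; no sealed state changes.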

I have a fleet myself, but not big enough to justify BootGuard. Does yours?

No NDA/accepted surveillance contract clauses on the hardware you manage? You are still renting your hardware from the vendor until EOL, because it is BootGuarded.

2 Likes

I would appreciate if you tested Heads instead of just reporting beliefs…

I had an actual Librem (I can still ask to borrow it again from the friend I gave it to). I know the behaviors are supposed to be slightly different between kernel updates and firmware updates, but stuff like TOTP or even HOTP still got out of sync quite frequently, and fatigue becomes a thing after a while.

Plenty of these cases for HOTP aside from the TOTP stuff I linked above on the forums too:

L14: Invalid HOTP after update - Firmware / PureBoot/Heads - Purism community

HOTP: error checking code - Hardware / Librem (Other) - Purism community

Where time is off, there are helpers to sync NTP over the network or manually; again, if the computer is used daily, the OS deals with NTP sync and time being off should not happen often, except if the laptop is unused for a long period of time.

No, you can just chain in a systemctl stop chrony or systemctl stop systemd-timesyncd right after the hwclock command to prevent resync until next reboot, to induce fatigue.

Also, what’s stopping an attacker from using the same strategy to cause fatigue with HOTP?

Plenty of ways to cause fatigue, and since the “normal experience” involves so many sync problems, if the user isn’t already getting tired of it, the compromised OS can ramp it up before doing the flashing, so there is very little difference.

Unless the user is super duper vigilant and manually takes external dumps whenever they see a warning, the malware can just mess with HOTP/TOTP at random intervals until they give up doing so, thinking it’s just yet another bug. Then the malicious firmware can be flashed after.

still renting your hardware until EOL from vendor because BootGuarded.

No, it goes EOL when the CPU stops getting microcode updates. It doesn’t matter what firmware I use; if Intel stops giving microcode updates, the device is EOL.

1 Like

As an owner of Librem 14, I don’t see any sync problems with Librem Key, except if I skip the check three times in a row myself (and this is by design). The forums are full of people with problems for any hardware, which says nothing about actual statistics.

This is an argument for FLOSS firmware, not in support of more closed stuff.

1 Like

This is an argument for FLOSS firmware, not in support of more closed stuff.

How? The reality of the matter is, it doesn’t matter if you have FOSS firmware or not, it doesn’t matter if you “control your keys” or not - once the microcode stops getting released, it’s EOL.

Vendors like Dell release firmware updates until the CPU goes EOL. Updates are not a problem if you don’t buy products from horrible OEMs, open source or not.

1 Like

IDK what to tell you man. Mine got out of sync quite frequently. But even if I grant you that your specific unit isn’t having sync problems, what are you gonna do when you see the mismatch warnings? Will you go…

  • Hm, last boot it was working fine, and I haven’t left the table since last boot. This must be one of those bugs described on the forum. Oh well, lemme just go through the reownership process?
  • Hm, lemme take a dump of the firmware and check… then realize that the firmware has not been tampered with. Then after a few times, you just get tired and go through the reownership process without thinking anymore? Remember, the malware can induce fatigue by messing with your TOTP/HOTP before actually doing the flashing.

It basically reaches a point where, unless you do an external flash of the firmware every time you install a new OS, you’ll never know for sure. Meanwhile, if you have an immutable RoT, there would be no need for such things.

1 Like

I think you refer to this, which was fixed (merged) Apr 23 2023?

So if I get this right, these are downstream discussions that never went upstream, for which a fix was already done, and you and the community are barking up the wrong tree, since:

  • They don’t understand what TPM/HOTP counters are
  • Users want the security of remote attestation but, by misunderstanding, wanted not to have to plug the HOTP dongle in at boot prior to USB booting? And then, by not connecting their HOTP dongle to do the remote attestation, complained that the counters were not synced anymore? And you pick on this to justify an unjustified claim of fatigue caused by Heads? Do I understand the train of thought correctly?

I am not sure how to deal with this other than what was fixed already, but again, people should discuss their concerns upstream. There was an open discussion on this here HOTP mismatch when booting computer without HOTP USB Security dongle 5 times in a row · Issue #1648 · linuxboot/heads · GitHub where I think it could be followed up, and concerns raised again at Make TOTP (+ HOTP) verification possible **without OS** installed · Issue #1651 · linuxboot/heads · GitHub to be acted upon. Let me summarize quickly:

  • HOTP now attempts to boot automatically if the HOTP dongle is connected early at boot; this means the whole code debug trace from the previous post happens behind the scenes (debug+trace is off by default, just shown to make it clear that if you wanted to understand how things work, you could, but somehow refuse to), and the user needs to press a key within 5 seconds to boot from USB or do anything other than the default boot. This is how it was decided to increment the HOTP counter (under /boot), which, if the HOTP dongle was not plugged in 5 times (not 3), would have fallen out of sync (not anymore: the counter is not incremented unless the dongle is plugged in, and I want to deprecate that counter altogether, but there is no consensus yet). As I asked everywhere I have seen this criticism, I will ask again: why would you want security but still decide not to plug the HOTP dongle in at boot, on a device that contains states you want to protect, and then complain that states could not be verified, and that the device now complains that states could not be verified?! I don’t follow. But this was fixed: the HOTP counter only increments when plugged in now, therefore the HOTP counter is somewhat useless today and next steps are to be figured out; the TPM counter is also questioned upstream, where these discussions should happen.
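For readers wondering why the counter matters at all: HOTP (RFC 4226) is an HMAC over a monotonically increasing counter that the dongle and the verifier must advance in lockstep, which is exactly why challenging (or not challenging) the dongle at boot changes sync behavior. A minimal sketch with the RFC 4226 test secret (generic HOTP, not the Heads code):

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

secret = b"12345678901234567890"         # RFC 4226 test secret
print(hotp(secret, 0))                    # -> 755224 (RFC 4226 test vector)
# If the verifier expects counter 1 but the dongle is still at 0, codes differ:
print(hotp(secret, 1))                    # -> 287082
```

Unlike TOTP, no clock is involved: only a shared secret and a shared counter, which is why HOTP survives RTC drift but not counter divergence.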

TLDR: plug your HOTP dongle in at boot, every boot, have HOTP remote-attest the whole bootchain, and then boot from USB if you want. What is the issue there?

Then, to address TOTP “fatigue”: I guess here you mean the bad-hardware part, a really old CMOS battery making the system clock drift between really long non-networked boot sessions (NTP is automatic on most OSes…)? Then again, would a simple UX permitting you to sync time resolve the issue (Alexgithublab: change time (superseeds #1730) by tlaurion · Pull Request #1737 · linuxboot/heads · GitHub)? It should. Otherwise, what are you asking for here? What black magic could be done here? As said, TOTP requires time to be in sync. Syncing time doesn’t change states, just the system clock. This is in the realm of hardware issues we take for granted as resolved, until we require the system clock to be in sync. Actually, anyone needing security will want the system clock to be in sync anyway prior to booting their machines, otherwise logs don’t mean anything. Most want security, but don’t know what security actually means. This is where Heads shifted from being used by security experts to the general public. That’s a new challenge. Will you be part of the solution or the problem @TommyTran732 ?

TOTP/HOTP TLDR: HOTP doesn’t require the system clock to be in sync. TOTP does. Heads permits syncing time over the network (ethernet, tether) and a PR is there to do this manually with minimal user interaction. Fatigue? Please explain.

On /boot changes: normal OSes will update everything at once, including kernel/initrd/grub/xen. QubesOS does this upon user request and separates dom0 (the only one changing /boot), so that the user is a little more in control of knowing when something changed /boot. There is also an issue to discuss implementation; actually, a lot of implementation suggestions were proposed over the years, the latest issue discussing this being Provide OS integration to sign /boot and/or root during updates · Issue #1615 · linuxboot/heads · GitHub. Please participate upstream.

TLDR there: to have /boot integrity verification without involving the user signing /boot content (the fatigue you were really talking about, outside of HOTP/TOTP fatigue), OS distributions would need to sign /boot (kernel, initrd, xen), and users would still need to sign the grub config (or booting parameters: the hook you do under the OS could as easily, or even more easily, be done through grub.cfg overrides; this is a security issue touching everyone). I suggested Consider including git for /boot and root fs integrity and changes reporting · Issue #1599 · linuxboot/heads · GitHub but my idea was challenged. I still think it would be more than interesting to answer the “what changed” question, but end users want a binary answer. If that’s the case, we are barking up the wrong tree and OS distributions need to sign everything. This is going to be Unified Kernel Image (UKI), with the good, bad and ugly coming with it @TommyTran732 .

I’m sorry but the fatigue you are talking about is invalid at best.

  • Boot with the HOTP dongle inserted, interact within 5 seconds to say you want to USB boot, have counters in sync.
  • Lazy, and your HOTP USB Security dongle is far away in a backpack? You had 5 boots of tolerance without it needing to be plugged in, by design (the counter is no longer incremented unless the HOTP dongle is plugged in and validated). This is not enough? Revise your opsec. I’m sorry end user, but you’re playing dumb here. You cannot have convenience > security, and 5 boots without the dongle connected is exactly that. Laziness is already taken into consideration here. We could push it farther, but automatic boot with HOTP already resolves that. There is not really a good compromise here such that security-focused users AND lazy end users are both satisfied. Both cannot coexist with sane defaults. Ideally we push the lazy toward better opsec. Not the other way around. Disagree?
  • You lost your HOTP dongle? Rely on TPMTOTP. It doesn’t match when you need it? Fix the system clock as instructed. This is causing fatigue because you have to do it too often? I’m sorry end user, but you’re playing dumb here. CMOS batteries are not eternal. If you play the “I need an air-gapped machine” card but don’t expect to set the system clock manually, and don’t understand that machines are bad at keeping the clock in sync without an external source, Alexgithublab: change time (superseeds #1730) by tlaurion · Pull Request #1737 · linuxboot/heads · GitHub should help you if you refuse to sync time with an ethernet cable or through smartphone tethering. Not sure what else could be done here. Suggestions welcome as usual, but I have no trick left in my hat…

But again, none of this justifies BootGuard, I’m sorry.

It was 5. Now HOTP increments only when connected, since Apr 23 2023: Fix HOTP verification logic (and counter increment) in gui-init and oem-factory-reset scripts by tlaurion · Pull Request #1650 · linuxboot/heads · GitHub

To be frank, I never boot without the HOTP dongle connected on HOTP-enabled devices. Being security focused (and not lazy when it comes to respecting security mechanisms in place made to protect myself against the odds), I never reached this “lazy respecting” code path when it existed (again, the HOTP counter has NOT been incrementing unless the dongle is plugged in, for more than a year now). And I actually never got out of sync since 2018, except on TOTP-only devices (no HOTP dongle), and ONLY with customers receiving delivered machines that took more than 3 weeks of shipping… which ethernet NTP/smartphone tethering now resolves, and there is now an easy UX improvement that will land under Alexgithublab: change time (superseeds #1730) by tlaurion · Pull Request #1737 · linuxboot/heads · GitHub once merged.

What else, constructively speaking, is needed/missing? What “fatigue” are we still talking about (today’s Heads status) on a FOSS project that can be improved upon understanding the problem, aimed toward implementing a solution that would work for its (voiced) users? I don’t follow those “living room” discussions supposed to “resolve” any problems outside that living room. Can somebody explain to me once and for all how things change in the real world through isolated living-room discussions that don’t echo in the outside world? Who creates the changes talked about in that isolated living room? How is change created?

@TommyTran732 Those are just speculations. We’re mixing things again. Last time I checked (and it generated a lot of discussion again here in this forum, where I know you disagree again), even if firmware updates from the manufacturer may happen up until chipset/CPU EOL, the quality of those updates is uneven at best because of the closed-source UEFI supply chain. Not a belief, a fact, not to be argued here: out of scope. Implementations: uneven. Security policies implemented: uneven. Choosing a “good” vs “bad” manufacturer is also faith based until proven otherwise. We have to stop those loops; read again and follow the rabbit: Low Level PC/Server Attack & Defense Timeline — By @XenoKovah of @DarkMentorLLC

Even on EOL CPUs/chipsets, where newer CPUs/chipsets implement newer extensions which are sometimes more vulnerable than the older, now-EOL ones, FOSS firmware continues to support those machines, which were better built and supported the right-to-repair movement before gluing everything became a thing, and none of those platforms are yet exposing vulnerabilities requiring them to be thrown away even today, AFAIK, if combined with proper opsec while using them. This was also covered in many, many other posts, including a contribution at Heads Threat model | Heads - Wiki. The point here, even today, is that QubesOS users, if enforcing proper opsec (read AnarSec | Qubes OS for Anarchists), most specifically OPSEC for memory use: principles, will be unaffected by those EOLs and can still boot their computers today without having to say a prayer/incantation before pressing their power button. Again: threat model, faith, and trusting “security” made by others, with end users trusting that those implementations were done with good practices and in good faith, which proved to not be so great. Choices again. All we have is choices. Without trusting but inspecting, this trust translates to pure faith. This is a fact, whether you like it or not; this is not the place to argue it either, there are plenty of other threads just here for that, and this is unrelated to Purism hardware or Pureboot/Heads, but related at best to microcode, the closed-source UEFI supply chain, BootGuard/OTP and/or the lack of transfer-of-ownership technologies, CPU extensions, SMM/SMI, VT-d/IOMMU/TXT, SRTM/DRTM. But attempting to mix all of those in, and just claiming one is better than the others while refusing to be part of security experts’ discussions, is solely a “living room” discussion and nothing else. @TommyTran732 you seem knowledgeable enough, and interested enough to change things outside of this: please use your time where it could change things and impact the future of how things are manufactured/made/understood/improved.
There is no doubt you have the intelligence, and your gut feelings seem acute: the choices of what you do with your understanding/voice/time are my concern here. Will you choose to leave the living room, voice your concerns to the outside world and be part of the solution here, or will you stay satisfied with BootGuard forever?

2 Likes

Moderator comment

This has become a bit of a heated topic (understandably so) but has also deviated a lot from how it affects Qubes users. Sure, we could consider boot security critical to Qubes and thus that it belongs here, but we have to draw a line somewhere.

Would it be possible to shift focus back to the manner in which Purism as a company has affected the Qubes community, or some other way that is Qubes-relevant?

Thank you!

4 Likes

I think you refer to this, which was fixed (merged) Apr 23 2023?

Could be. I haven’t used it in a while now.

Why would you want security but still decide not to plug the HOTP dongle in at boot, on a device that contains states you want to protect, and then complain that states could not be verified, and that the device now complains that states could not be verified?! I don’t follow.

Because humans are not perfect and sometimes they just forget their key or don’t have time for it, or whatever. Same reason nobody checks for the nail polish on every boot.

The HOTP counter only increments when plugged in now, therefore the HOTP counter is somewhat useless today and next steps are to be figured out

Okay, what’s stopping the malware from just rolling the counter back to what it was the previous boot, or just messing with it in general, causing the red warnings randomly? Then the user checks the firmware, sees that the firmware has not been tampered with, gets tired after a few times, and just ignores it when the malicious firmware gets flashed?

system clock drift between really long non-networked boot sessions (NTP is automatic on most OSes…)?

The malware can cause the clock drift itself by stopping the timesync service and doing hwclock --set… This is trivial.

systemctl stop chronyd
systemctl stop systemd-timesyncd
hwclock --set --date "1970-01-01 00:00:00"

Syncing time doesn’t change states, just system clock.

That is the point. The malware can just mess with the clock, let the user do the whole firmware verification over and over until they get absolutely sick of it and think “Oh it’s just the RTC having issues again” before flashing.

Then again, would a simple UX permitting you to sync time resolve the issue (Alexgithublab: change time (superseeds #1730) by tlaurion · Pull Request #1737 · linuxboot/heads · GitHub)? It should.

So say I reset a user’s hardware clock to epoch 0, say, 50-100 times before actually flashing the malicious firmware. You think they aren’t going to just give up doing the whole exercise by then? I can just hook into shutdown.target or reboot.target and do this.
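For concreteness, the kind of shutdown hook being described could be sketched as a one-shot systemd unit (the unit name, path, and date are made up for illustration; this is an attack sketch to make the surface concrete, not a recipe):

```shell
# Illustrative only: back-date the RTC on every shutdown/reboot to desync TOTP.
cat > /etc/systemd/system/rtc-skew.service <<'EOF'
[Unit]
Description=Back-date the RTC before shutdown (hypothetical attack sketch)
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/hwclock --set --date "2000-01-01 00:00:00"

[Install]
WantedBy=shutdown.target reboot.target
EOF
systemctl enable rtc-skew.service
```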

Actually, anyone needing security will want system clock to be in sync anyway prior of booting their machines otherwise logs don’t mean anything.

Does that sound like a reasonable thing to expect a user to do every boot or every few boots on their daily driver? Or will they get sick of doing it like I said?

Will yo be part of the solution or the problem @TommyTran732 ?

I don’t even see how it could work realistically.

TOTP/HOTP TLDR: HOTP doesn’t require the system clock to be in sync. TOTP does. Heads permits syncing time over the network (ethernet, tether) and a PR is there to do this manually with minimal user interaction. Fatigue? Please explain.

  • Again, what is stopping the malware from messing with the HOTP in /boot?
  • Is it realistic to expect ethernet + tethering every time a laptop boots?
  • What will the user think if they see HOTP/TOTP just randomly fail after they reboot? “Oh, it’s one of those software bugs again”? Or “Oh, I must be compromised, lemme take a dump and check”? And are they gonna do that over and over, or just give up after a few times? There is a limit to how much humans can handle this.

TLDR there: to have /boot integrity verification without involving the user signing /boot content (the fatigue you were really talking about, outside of HOTP/TOTP fatigue), OS distributions would need to sign /boot (kernel, initrd, xen), and users would still need to sign the grub config (or booting parameters: the hook you do under the OS could as easily, or even more easily, be done through grub.cfg overrides; this is a security issue touching everyone). I suggested Consider including git for /boot and root fs integrity and changes reporting · Issue #1599 · linuxboot/heads · GitHub but my idea was challenged. I still think it would be more than interesting to answer the “what changed” question, but end users want a binary answer. If that’s the case, we are barking up the wrong tree and OS distributions need to sign everything. This is going to be Unified Kernel Image (UKI), with the good, bad and ugly coming with it

Okay, so how are you going to stop a downgrade attack with this? What is stopping a physical attacker from doing a git reset --hard in your git repo to a commit from a year ago, then putting in a 1-year-old UKI?
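The rollback worry is easy to demonstrate in a scratch repo: nothing in git itself resists a reset to an old, internally consistent tree (the filenames below are made up for the demo):

```shell
# Demo in a throwaway repo, NOT a real /boot: rewind a git-tracked tree
# to an old state; the year-old content is still internally consistent.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo "vmlinuz-5.10 (old, vulnerable)" > vmlinuz
git add vmlinuz && git commit -qm "year-old kernel"
old=$(git rev-parse HEAD)
echo "vmlinuz-6.6 (current)" > vmlinuz
git add vmlinuz && git commit -qm "current kernel"
git reset --hard -q "$old"    # downgrade: working tree is the old /boot again
cat vmlinuz                    # -> vmlinuz-5.10 (old, vulnerable)
```

Countering this needs something monotonic outside the repo (a TPM counter, a revocation entry in the Secure Boot db, or re-pinned PCRs), which is the point being made.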

There are several ways downgrade protection can be provided with a Boot Guard + UEFI Secure Boot setup (like a db override, a TPM policy, or just pinning/unpinning PCR 4), but idk how the git strategy would work.

Lazy, and your HOTP USB Security dongle is far away in a backpack? You have 5 boots of tolerance without it needing to be plugged in, by design. This is not enough? Revise your opsec. I’m sorry end user, but you’re playing dumb here. You cannot have convenience > security, and 5 boots without the dongle connected is exactly that.

Yeah, but then Boot Guard + UEFI Secure Boot doesn’t have this problem. When it trips, it always means that something has legitimately gone wrong. Anyhow, this is not the main point I was making.

You lost your HOTP dongle? Rely on TPMTOTP. It doesn’t match when you need it? Fix the system clock as instructed. This is causing fatigue because you have to do it too often? I’m sorry end user, but you’re playing dumb here.

Choosing a “good” vs “bad” manufacturer is also faith based until proven otherwise.

Not exactly. Yeah, it’s a lot of work to verify whether stuff like image libraries are outdated and whatnot, and I don’t actually do it in practice. But I do check for stuff like Secure Boot tripping, whether a change in firmware settings is reflected correctly in the measurements or not, whether the “allow BIOS downgrade” toggle works or not, etc. It is a one-time thing, so it’s not that big of an investment.

will be unaffected by those EOLs and can still boot their computers today without having to say a prayer before pressing their power button.

How do you even get full Spectre mitigations on Haswell/Ivy Bridge? And that is just 1 example.

choices of what you do with your understanding/voice is my concern here

Well, I am writing the script for the Secure Boot strategy I described a few posts ago.

Strategy for now is this:

  • PIN the encryption key against PCR 0, 1, 2, 3, 4, 5, 6, 7, 14.
  • On UKI update, wipe the TPM. Reseal the encryption key against PCR 0, 1, 2, 3, 5, 6, 7, 14. (PCR 4 is unpinned temporarily to let the next boot go smoothly).
  • Immediately after reboot, wipe the TPM and reseal against PCR 0, 1, 2, 3, 4, 5, 6, 7, 14.
  • Similar behavior for firmware updates as well, but unpinning PCR 0 and 1 instead of 4.
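Assuming the resealing is done with systemd-cryptenroll (one plausible tooling choice, not necessarily what the script will use; the device path is a placeholder, and the LUKS2 volume is assumed to have a passphrase slot to authorize enrollment), the steps above might look like:

```shell
# Post-reboot step: drop the old TPM2 slot and reseal against the full PCR set.
systemd-cryptenroll --wipe-slot=tpm2 /dev/nvme0n1p2
systemd-cryptenroll --tpm2-device=auto \
    --tpm2-pcrs=0+1+2+3+4+5+6+7+14 /dev/nvme0n1p2

# Before a UKI update: same reseal but with PCR 4 left out
# (--tpm2-pcrs=0+1+2+3+5+6+7+14) so the next boot still unseals.
```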

Custom PK, KEK, and DB enrollment is expected. No Shim, OEM PK, Microsoft PK, OEM KEK, Microsoft 3rd party CA, or anything like that is involved. Only the Boot Guard key and the user’s own PK are trusted.

This is still crude, and it could be better implemented with proper PCR policies or by leveraging Secure Boot db updates instead of PCR 4.

It shouldn’t be that hard to write, as the tooling is already provided. I just need some free time. And the way I see it, it already provides much stronger protection than Heads/PureBoot:

  • No annoyances to the user.
  • No risk of malware gaining persistence in the boot firmware short of an exploit.
  • Stronger protection against physical attacks (cuz Boot Guard).
  • Downgrade protection (not saying that Heads doesn’t have it now, but I do not see how the .git strategy would work).
  • If it trips, it definitively means something is wrong and warrants a full OS reinstall.

I advocate for setups at least with this strength or better. Like I said, I am not attached to Boot Guard, and I am willing to go with something else if it is objectively more secure. My problem is not with Heads in general, but Purism advertising their stuff as “high security” and “better than normal laptops/macbooks”.

1 Like

Ok seriously, I give up. Discussions belong upstream.

The HOTP counter should be in sync, otherwise there is tampering. Time needs to be in sync, otherwise logs don’t mean anything. If HOTP is not in sync, time needs to be in sync to check TOTP.

Helper merged today. UX friction reduced to minimal at Alexgithublab: change time, 3.0 (supersedes #1737) by JonathonHall-Purism · Pull Request #1748 · linuxboot/heads · GitHub, with another superb collaboration with your devil (Purism).

You refuse to test this under Qemu, I refuse to chat here with you. Nothing you say makes sense to me here.

At this point I just feel trolled and won’t participate in your echo chamber.

Be in peace with BootGuard, @TommyTran732.

2 Likes

The HOTP counter should be in sync, otherwise there is tampering. Time needs to be in sync, otherwise logs don’t mean anything. If HOTP is not in sync, time needs to be in sync to check TOTP.

And what happens when the malware just messes with the HOTP without actually tampering with the firmware a few times, to cause fatigue? The malware is free to cause both HOTP and TOTP to randomly fail, at something like a reboot, to further reduce suspicion.

You refuse to test this under Qemu, I refuse to chat here with you. Nothing you say makes sense to me here.

Of course, you are in denial. What can I say?

At this point I just feel trolled and won’t participate in your echo chamber.

Sure, go ahead. It’s not like you will actually be realistic and think about malware causing fatigue to flash the actual malicious firmware later anyway. You are deep in ideological land and not being realistic.

3 Likes