You made a “point” by literally just spamming irrelevant posts that have absolutely nothing to do with the topic at hand. Nothing you said makes any sense.
I don’t see any difference between an unintentional vulnerability in hardware and malicious action by the vendor. How can anyone tell them apart without a way to verify it?
Was this malicious, or did someone just “forget” to disable this “feature”?
The difference is that for an unintentional vulnerability, you can at least assume that the other components are not filled with backdoors, and that patches that come from the vendor are going to address the issue in some form.
Again, if you think Intel is so malicious that they would go out of their way to deliberately build hidden circuitry so that AMT activates when you load some website and stabs you in the back, then you have to assume that everything they ship has a backdoor in some form, and the only mitigation is not to use their CPUs.
The same logic applies to the link you sent just now (which describes a complex exploit chain against the kernel and Safari):
- If this is an unintentional bug, you can at least rely on Apple to patch Safari and the kernel.
- If you think Apple deliberately built this as a backdoor, you have to ask: are they gonna backdoor the entire SoC, firmware, and operating system? How can you possibly trust any of their products? There is no mitigation against this aside from not using their hardware.
With an unintentional vulnerability, it’s not safe to assume that the other components don’t have vulnerabilities as well. It’s just that the probability of a compromise that requires multiple vulnerabilities chained together is lower.
Then again, I don’t want my safety to rest on unfounded trust.
If I’m able to anticipate possible threats from vendor hardware being vulnerable or malicious, and I’m able to take measures against those threats, then why shouldn’t I?
If I’m unable to prevent some threat, or at least lower the probability of its success to an acceptable level, only then does it come down to whether I’m willing to accept that threat and still use the hardware or not.
With an unintentional vulnerability, it’s not safe to assume that the other components don’t have vulnerabilities as well. It’s just that the probability of a compromise that requires multiple vulnerabilities chained together is lower.
No one said that you can assume that other components don’t have vulnerabilities. That’s why you still need to do attack surface reduction like disabling AMT, disabling Hyper-Threading, enabling SMM mitigations, and so on. But you don’t have to assume that they have literal backdoors in them.
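One of those attack-surface checks is easy to verify from software. Here is a minimal sketch, assuming a Linux system that exposes the standard sysfs SMT interface; the `read_sysfs` helper name is mine:

```python
# Sketch: check whether SMT (Hyper-Threading) is enabled via Linux sysfs.
# The path and its possible values are the standard kernel interface;
# the check degrades gracefully if the file is absent (non-Linux, old kernel).

def read_sysfs(path: str, default: str = "unknown") -> str:
    """Return the stripped contents of a sysfs file, or `default` if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default

if __name__ == "__main__":
    smt = read_sysfs("/sys/devices/system/cpu/smt/control")
    # Values the kernel may report include: on, off, forceoff, notsupported.
    print(f"SMT control: {smt}")
    if smt == "on":
        print("Consider booting with nosmt, or writing 'off' to the control file.")
```

Disabling AMT itself has to be done in firmware setup (or via vendor provisioning tools); there is no comparable sysfs knob for it.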
Then again, I don’t want my safety to rest on unfounded trust.
Well, you are basing it on an irrational fear of a specific vPro feature, which can be disabled anyway, while losing out on the actual security features vPro provides, like TME-MK, Key Locker, TXT, SMM mitigations, and so on.
If I’m able to anticipate possible threats from vendor hardware being vulnerable or malicious, and I’m able to take measures against those threats, then why shouldn’t I?
You can do that for PCIe-connected devices by using the IOMMU plus Kernel DMA Protection. You can do that for USB devices by passing the entire USB controller through to a separate VM. You can’t do that for a CPU. There is no system or strategy known to mankind that can protect you from a malicious CPU or its vendor.
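For the PCIe side, you can at least confirm that the IOMMU is active and see how devices are grouped. A small sketch, assuming Linux and its standard `/sys/kernel/iommu_groups` layout; the function name is mine:

```python
import os

# Sketch: list IOMMU groups and the PCI devices in each, using the standard
# Linux sysfs layout. An empty result usually means the IOMMU is off
# (no intel_iommu=on / amd_iommu=on, or no VT-d / AMD-Vi support).

def iommu_groups(root: str = "/sys/kernel/iommu_groups") -> dict:
    """Map each IOMMU group number to the PCI addresses of its devices."""
    groups = {}
    if not os.path.isdir(root):
        return groups
    for group in sorted(os.listdir(root)):
        dev_dir = os.path.join(root, group, "devices")
        if os.path.isdir(dev_dir):
            groups[group] = sorted(os.listdir(dev_dir))
    return groups

if __name__ == "__main__":
    for group, devices in iommu_groups().items():
        print(f"group {group}: {', '.join(devices)}")
```

Devices sharing a group cannot be isolated from each other, which is exactly the kind of thing you want to know before trusting IOMMU-based isolation.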
If I’m unable to prevent some threat, or at least lower the probability of its success to an acceptable level, only then does it come down to whether I’m willing to accept that threat and still use the hardware or not.
You are not actually preventing any threats by making up scenarios like Intel going out of their way to build an AMT backdoor. Reducing threats means attack surface reduction: disabling firmware features you do not need, like AMT, while taking advantage of other features, like TME-MK.
Making assumptions like “Intel is malicious”, then not actually protecting yourself from Intel (because that is impossible) while losing out on actual security features by avoiding vPro, is what we call “security theatre”.
This reminds me of State considered harmful - A proposal for a stateless laptop (new paper) | The Invisible Things Blog
@TommyTran732 I believe you might learn from this link how you can use hardware without trusting the vendor, and that it might be a reasonable approach to improving your security, at least in theory.
Proprietary, non-verifiable (Intel actively fights attempts to understand it), non-reproducible software with unlimited access to your computer tells you it’s disabled. Do you seriously trust that? Even if you do, don’t you see that security through obscurity is a flawed approach and that Intel should do better?
Like I’ve said before:
Where possible, you should reduce the attack surface and use all the security features available. But don’t have blind faith in them; keep in mind that they could be vulnerable (maliciously or not) and implement additional protection where possible and needed.
I disagree that you can’t do anything at all against a malicious CPU.
While you can’t isolate the CPU to mitigate the threat the way you can with other devices, you can protect against it in other ways that suffice in certain cases.
Let’s consider the earlier example with the encryption key:
You have a PC that works with a big database on an encrypted disk.
You have network data-leak protection set up, so nobody can exfiltrate this data over the network unnoticed.
The workplace is protected against unauthorized access, so nobody can tamper with the hardware unnoticed.
But the place is not protected well enough that nobody could ever get in.
Someone could break in and steal the disk.
You want to protect the data on the disk from any kind of threat, including a CPU vendor backdoor.
If you decrypt the disk on the same PC, the encryption key could be leaked, and if the disk is then stolen, the data on it could be accessed.
In this case, you can move the decryption process to a separate machine and access the data on the disk over the network.
This way you can protect the data even from a backdoored CPU, if you treat the working PC as untrusted.
If this is the threat, you want vPro, because otherwise an attacker can just extract the encryption key from the RAM of the running system. I don’t know why you don’t even consider TME-MK for a second, when it is one of the solutions for protecting the disk encryption key while the computer is running.
You want to protect the data on the disk from any kind of threat, including a CPU vendor backdoor.
You can’t protect it against the CPU, duh.
In this case, you can move the decryption process to a separate machine and access the data on the disk over the network.
Yeah, then the attack is just getting access to the network mounts using the credentials on the running system, and that’s about it.
On top of that, you introduce extra attack surface by adding both the server and client software for the network mount. You also have to care about the security of the network mount itself.
You are literally advocating for security theatre right now. This is about as theatrical as it can possibly get. Network mounts have a lot of use cases, but “improving security” isn’t one of them.
This way you can protect the data even from a backdoored CPU, if you treat the working PC as untrusted.
Nope, not how anything works.
@TommyTran732 I believe you might learn from this link how you can use hardware without trusting the vendor, and that it might be a reasonable approach to improving your security, at least in theory.
You should read the link you sent in full. You might learn that it absolutely does not protect against a malicious CPU. Not even in theory. You know why? Because it is impossible. If anything, the link talks about how the CPU itself has fuses and no flashable memory.
Also, in one of the links Joanna referenced, she literally said:
One thing that bothers me with all those divagations about hypothetical backdoors in processors is that I find them pretty useless at the end of the day. After all, by talking about those backdoors, and how they might be created, we do not make it any easier to protect against them, as there simply is no possible defense here.
You should also read what I actually said: there are some protections against certain components (and by extension, their vendors) in the system, but you cannot protect against a malicious CPU.
Proprietary, non-verifiable (Intel actively fights attempts to understand it), non-reproducible software with unlimited access to your computer tells you it’s disabled. Do you seriously trust that? Even if you do, don’t you see that security through obscurity is a flawed approach and that Intel should do better?
This is a ridiculous statement. No one here is advocating for security through obscurity. Where is the “actively trying to fight attempts to understand it” stuff coming from?
I am advocating for REAL, actually useful security features like TME-MK, Intel Key Locker, TXT, etc.
By that logic, you should be railing against CPU microcode too cuz “hurr hurr proprietary bad”.
Yes, that is the part I don’t understand.
AMT is only usable on specific systems with both a vPro-enabled CPU and a vPro chipset; it’s not enough to just have the CPU.
It is typically only found in business hardware, e.g. certain laptop series and Lenovo or Dell workstations; you don’t really find it in consumer hardware.
On the other hand, you have microcode in every single CPU sold, from both AMD and Intel, and it could very likely be used to introduce transient-execution vulnerabilities that potentially could be exploited remotely, similar to how it was possible with Meltdown.
I doubt anyone would say they are not applying microcode updates because it’s too dangerous.
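The microcode point is easy to check for yourself: the Linux kernel reports the mitigation status of each known transient-execution flaw under sysfs. A minimal sketch (the helper name is mine; the path is the standard kernel interface):

```python
import os

# Sketch: dump the kernel's view of CPU vulnerabilities and their mitigations.
# On a system running without up-to-date microcode, several entries typically
# read "Vulnerable" instead of "Mitigation: ...".

def cpu_vulnerabilities(root: str = "/sys/devices/system/cpu/vulnerabilities") -> dict:
    """Map each vulnerability name (e.g. 'meltdown') to its status string."""
    result = {}
    if not os.path.isdir(root):
        return result
    for name in sorted(os.listdir(root)):
        with open(os.path.join(root, name)) as f:
            result[name] = f.read().strip()
    return result

if __name__ == "__main__":
    for name, status in cpu_vulnerabilities().items():
        print(f"{name}: {status}")
```

This is also the warning source that linux-libre suppresses, which is what makes that decision so hard to defend.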
Yes, that is the part I don’t understand.
Of course you don’t understand it because it doesn’t make any sense haha.
It is typically only found in business hardware, e.g. certain laptop series and Lenovo or Dell workstations; you don’t really find it in consumer hardware.
Yup.
On the other hand, you have microcode in every single CPU sold, from both AMD and Intel, and it could very likely be used to introduce transient-execution vulnerabilities that potentially could be exploited remotely, similar to how it was possible with Meltdown.
That’s what a lot of people conveniently ignore.
I doubt anyone would say they are not applying microcode updates because it’s too dangerous.
You would be surprised. Look at the FSF and their linux-libre. Not only do they block microcode updates, they also suppress the kernel warning that the system is vulnerable, because it might “trick” users into trying to load the microcode in. It truly is bonkers, and yes, there are people crazy enough to do this.
I’ve already said multiple times:
I’d use TME-MK, but I wouldn’t place my full trust in it, and I’d prepare some additional protections in case it turns out to be vulnerable.
Like setting up a system to power off the machine upon unauthorized access to the room.
The point is that it reduces the amount of data leaked before the leak is noticed and stopped.
If an attacker gets remote access to this PC, it’s possible to somehow leak the encryption key and stay unnoticed, but it’s impossible to leak the full database, which could be many TB in size, unnoticed.
I don’t see how it adds attack surface. I’m talking about two PCs connected to each other, e.g. over fiber optic, with one acting as a network disk and the other as the working PC that boots from this network disk.
I’m not talking about booting from or using a network disk on the same LAN as multiple other PCs.
I don’t see how it adds attack surface. I’m talking about two PCs connected to each other, e.g. over fiber optic, with one acting as a network disk and the other as the working PC that boots from this network disk.
I really don’t see how this helps at all.
Do you mean the whole setup doesn’t help protect the encryption key, or what?
If you decrypt the disk on the working machine and it gets compromised, the encryption key is leaked right away.
If you decrypt the disk on the network disk machine and the working machine gets compromised, the encryption key isn’t leaked right away, because the working machine doesn’t have it, and the attacker needs to compromise the network disk machine as well to get it.
- You have a magical CPU (which you think is backdoored) on the network storage server that you have to worry about anyway.
- What would you do if the strategy is to slowly export the database over time?
- Where will you be storing the network disk?
I really don’t see how this entire exercise is not theatre.
You can use a CPU with a different architecture or from a different vendor, like ARM, so the attacker would need backdoors in two architectures/vendors to compromise the system, which decreases the probability of compromise.
It’s easier to trigger a backdoor on the working machine, where it could potentially execute remote code via JavaScript, than to trigger a backdoor on the network disk machine through networking or, e.g., the iSCSI protocol.
It depends on how the data-leak detection software is configured. I’m not experienced in this area, but I’d assume it’s possible to profile the expected traffic, analyze all traffic from the working machine unencrypted on a separate hardware firewall (using e.g. a root certificate), and flag encrypted or suspicious traffic that deviates from the expected pattern.
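The “flag traffic that deviates from the expected pattern” idea can be sketched as a crude volume baseline. This is purely illustrative (real data-leak prevention inspects content, destinations, and timing, not just byte counts, and the threshold here is made up):

```python
import statistics

# Sketch: flag traffic intervals whose byte counts exceed the learned
# baseline by more than k standard deviations. Illustrates the "profile
# expected traffic" idea only; features and thresholds are invented.

def flag_anomalies(baseline: list, observed: list, k: float = 3.0) -> list:
    """Return indices of observed intervals exceeding mean + k*stdev of baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid a zero threshold
    return [i for i, nbytes in enumerate(observed) if nbytes > mean + k * stdev]
```

A bulk database export would trip a check like this, which is the thread’s point about a multi-TB leak being hard to hide; a slow trickle export would not, which is the counterpoint raised above.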
In the same protected room with the working machine side by side.
In the same protected room with the working machine side by side.
Then they can just take the drive? I am at a complete loss as to what you are saying. Which server is supposed to decrypt the disk?
You have two PCs in the protected room: the working PC and the network disk PC.
You have a separate display+keyboard+mouse for each PC.
When both PCs are powered down, you first power on the network disk PC and decrypt the disk for the working PC.
Then you power on the working PC, boot from this decrypted disk over the network, and work on it using its own display+keyboard+mouse.
Or a single display+keyboard+mouse via a KVM switch.
If someone enters the room unauthorized, both PCs power off and the disk in the network disk PC is encrypted again.
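The steps above could be realized with standard Linux tooling, e.g. LUKS on the network disk PC and an NBD export to the working PC. A sketch only: `cryptsetup` and `nbd-server`/`nbd-client` are real tools, but every device path, port, and address below is a placeholder, and nothing is executed here:

```python
# Sketch of the two-PC workflow as shell command sequences. LUKS (cryptsetup)
# and NBD (nbd-server / nbd-client) are real tools; the device names, port,
# and server address are placeholders for illustration.

def network_disk_pc_steps(device: str = "/dev/sdb1", mapper: str = "workdisk") -> list:
    """Commands on the network disk PC: unlock, then export the decrypted view."""
    return [
        f"cryptsetup open {device} {mapper}",      # decrypt; the key stays on this PC
        f"nbd-server 10809 /dev/mapper/{mapper}",  # export the plaintext block device
    ]

def working_pc_steps(server: str = "10.0.0.2") -> list:
    """Commands on the working PC: attach the export and boot/mount from it."""
    return [
        f"nbd-client {server} 10809 /dev/nbd0",    # attach the remote block device
        "mount /dev/nbd0 /mnt/db",                 # the key never touches this PC
    ]

def panic_steps() -> list:
    """On unauthorized room entry: power off both; the disk re-locks at rest."""
    return ["poweroff  # working PC", "poweroff  # network disk PC"]
```

Note the trade-off this makes explicit: the block device crosses the link in plaintext, so the link itself and the export become part of the attack surface, which is the objection raised earlier in the thread.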
Let’s get this straight:
- You are worried that an attacker can extract the encryption key via a malicious CPU
- You are worried that the same attacker can get access to your server room and steal the disk
So if the CPU on the “network disk server” is malicious, it can leak the encryption key anyways, and when the attacker takes the drive it will be game over for you.
And I am not sure why you are mentioning JavaScript and stuff. Why would a server be running random remote JavaScript? A database server’s one and only job is to run the database software and answer queries. It doesn’t do anything else.
So how does this entire exercise achieve anything? If you think the “network disk server” has a trustworthy CPU, you might as well just use it as the actual database server. I really feel like you are just doing stuff in hopes of achieving something, without actually considering the threat and whether what you are doing really stops said threat.
Yes.
The “network disk server” is not connected to the internet, so it can’t be attacked remotely without compromising the “working machine” first.
Then it comes down to probability:
A - the probability that the attacker can compromise the “working machine”
B - the probability that the attacker can compromise the “network disk server” from the “working machine”
A*B - the probability that the attacker can compromise the “network disk server”
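Spelled out, the multiplication looks like this (trivial, but it makes the claim concrete; the numbers are made up, and it assumes the two stages are independent, which the argument glosses over):

```python
# Sketch: combined probability of a two-stage compromise, assuming the
# stages are independent events (a strong, unstated assumption).

def p_chain(p_a: float, p_b: float) -> float:
    """P(reach the network disk server) = P(compromise working machine) * P(pivot)."""
    return p_a * p_b

# e.g. p_a = 0.10, p_b = 0.05 gives 0.005, versus 0.10 for a single-PC setup
```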
JavaScript was just an example.
Instead of JavaScript, it could be some other unassuming code brought onto the working machine from outside.
Personnel will work with the database data only on the “working machine”, for example running some analysis software or using the data with other tools.
These tools, and the code implementing their algorithms, could be brought in from outside, over the network or on a USB drive for example.
For example, there could be Matlab installed, with the .m source files first created and tested on another PC and then brought to the “working machine” to be run on the data from the database.
These files could be compromised. An outright backdoor in these files would be noticed, but if it’s hidden inside some calculations like a+b, c*d, e/f, it would pass the checks.
It lowers the probability of a successful compromise: A*B vs. A.