OpenSSH/SplitSSH publickey leak/extraction mitigation

I took a look at split SSH and found an issue that can have OPSEC implications under a specific threat model.

In short: a compromised split-SSH client can leak the list of all available public keys, and even without any compromise the default setup offers a partial list of public keys to the servers it connects to.

Environment and threat model

You have multiple servers. Each server has its own SSH keypair.

You want to hide both the number of keys and the keys themselves (private and public) from your adversary, who is assumed to be able to compromise at least one server and pivot back to your SSH client.

You are working with split ssh.

Why one wants to hide this information

Mostly for anonymity reasons. Your adversary might want to confirm that the admin of server A is also the admin of server B.

Your adversary wants to enumerate all your servers, and therefore wants all of your public keys to check them against each server's SSH daemon.

The problem

  1. When not using host nicknames in your SSH config, your client offers all known public keys to the server upon connecting, until the correct one is sent. Your adversary only needs to compromise one of your servers to get a partial list of your SSH public keys. Additionally, you may even have to allow for more failed authentication attempts once your list of keys grows beyond six or so.
    Even worse, your adversary might be able to look at the sizes and timings of packets to determine how many keys were tried, and therefore which server you are connecting to and when, as the order in which keys are tried does not change often.
  2. ssh-agent does not provide sufficient control over which public keys are offered to the SSH client. Lateral movement is expected to occur as soon as a connection is made, so merely reducing password re-entry time might not cut it.

Using a config with nicknames requires you to store connection details, nicknames and public keys on the SSH client, so this is not a viable solution.
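For reference, the per-host mechanism being discussed looks roughly like this; all names, addresses and paths are illustrative. IdentitiesOnly restricts the offered keys to the listed IdentityFile, but the whole block has to live on the client:

```
# ~/.ssh/config -- illustrative entry
# "machine_a" is the nickname used as "ssh machine_a"
Host machine_a
    HostName 198.51.100.7
    User admin
    # Offer only this key instead of every identity the agent knows
    IdentityFile ~/.ssh/machine_a_ed25519
    IdentitiesOnly yes
```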

I tried for a few days to modify ssh-agent's behavior and talked to the devs about how to restrict the public keys offered to clients, without finding a solution that works for me and does not require a reimplementation of ssh-agent.
What I learned is that ssh-agent cannot be restricted to offering only specific keypairs to your SSH client. In fact, it always offers every public key for every server, even when the SSH client knows exactly which public key it needs.
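This behavior is easy to reproduce outside of Qubes (assuming OpenSSH is installed; all keys here are throwaway ones):

```shell
# Start a disposable agent, load two unrelated keys, and observe that
# any client of the agent socket is handed the complete public key list.
eval "$(ssh-agent -s)" > /dev/null
trap 'ssh-agent -k > /dev/null' EXIT

tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key_a"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key_b"
ssh-add "$tmp/key_a" "$tmp/key_b" 2> /dev/null

ssh-add -L   # prints BOTH public keys -- there is no per-server filtering
```

The agent protocol has a single "list identities" request and answers it identically for every client, which is why no per-destination filtering is possible without changing the agent itself.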

What are the options?


So I came up with one vault qube that holds all SSH keys (in cleartext, at the moment). When I want to SSH into machine_a, I issue sshconnect machine_a. A bash script creates a new disposable VM, copies the public and private key as well as the config to the appropriate places in this machine, opens a terminal and starts the SSH session.
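A sketch of what that script could look like; the template name disp-ssh-dvm, the ~/ssh-store layout and the use of xterm are all assumptions, and the qvm-* calls additionally require matching qrexec/Admin API policy for the qube running this:

```shell
#!/bin/bash
set -eu

# sshconnect <machine>: spin up a disposable client qube, hand it the key
# material for <machine>, and open a terminal with the ssh session.
sshconnect() {
    local machine="$1"
    # Timestamp + PID keeps concurrent invocations from colliding on names.
    local client="ssh-client-$(date +%s)$$"

    # 1. Fresh disposable client qube for this single session.
    qvm-create --class DispVM --template disp-ssh-dvm --label red "$client"

    # 2. Copy the keypair and the per-machine ssh config over.
    qvm-copy-to-vm "$client" \
        ~/ssh-store/"$machine"/id_ed25519 \
        ~/ssh-store/"$machine"/id_ed25519.pub \
        ~/ssh-store/"$machine"/config

    # 3. Move the files into place and start the session in a terminal.
    qvm-run "$client" 'mkdir -p ~/.ssh; mv ~/QubesIncoming/*/* ~/.ssh/; chmod 600 ~/.ssh/id_ed25519'
    qvm-run "$client" 'xterm -e ssh -F ~/.ssh/config target'
}
```

Called as sshconnect machine_a; the keys disappear together with the disposable qube when the session ends.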

This is not real split SSH, as the private key is on the SSH client itself, but it does completely fix the public key disclosure problem.

What do you think about this?

I am not sure if this is a good idea. I would prefer to create a disp-ssh-vault and a disp-ssh-client and load them with the necessary information, but this is tricky, as my initiating qube is not dom0 but an admin VM. Getting the policy tight is hard in this case.

Considering the security:

  1. Not encrypting the SSH key was a deliberate choice without major security implications, as discussed in the split-ssh guide.
  2. Having the public and secret key on the ssh-client qube seems fine, as the only attacking entity could be the server, which already has the public key. Disclosing the secret key to it seems uncritical: if the adversary has compromised the server, knowledge of the secret key gives him no additional capabilities or information.

What are your thoughts?


Why not just use classic split-ssh and make a vault for each ssh key, then confine each ssh-client qube to access one vault with qubes-rpc policy?
Am I missing something?
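For reference, the suggested setup as qrexec policy; the qubes.SshAgent service name is taken from the community split-ssh guide, and all qube names are illustrative:

```
# /etc/qubes/policy.d/30-split-ssh.policy (Qubes 4.1 policy format)
qubes.SshAgent  *  ssh-client-a  ssh-vault-a  ask default_target=ssh-vault-a
qubes.SshAgent  *  ssh-client-b  ssh-vault-b  ask default_target=ssh-vault-b
qubes.SshAgent  *  @anyvm        @anyvm       deny
```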

You are right, that would be the best option.

Unfortunately this would mean that I have to manually create one vault and one template for every server, which is pretty annoying. My setup is a bit complicated and I really need to be able to create new SSH setups programmatically. If I decide that Russia needs 20 more Tor bridges right now, I really don’t want to set them up manually…

One could script this, but only by using dom0 as the place to issue the “make new server stuff” commands. This is impractical, as outlined in the linked post. For this I created an admin VM.

However, this admin VM cannot (and should not be able to) modify the policy of what the qubes it creates are or are not allowed to do.

All I could do is restrict access by policy tags in dom0 (all ssh-clients are allowed to access all ssh-vaults) and dynamically set up the internal split-ssh configuration with randomized qube names. The only thing preventing ssh-client-4235123, which should only have access to ssh-vault-4235123, from accessing ssh-vault-11111111 is that it does not know the correct name. A default target would not work either, for the same reason, which creates a lot of inconvenience for me. So much so that I would opt for “allow all” in the policy. That would mean I no longer have to manually approve every request, and consequently would not be alerted to brute-force attempts while away from the computer.

I thought about this, but was not able to come up with a clean, minimal solution that could handle all sorts of corner cases.

But thanks for pointing out the obvious; I will reevaluate my options with this approach and see if it might fit my use case despite the initial problems I faced when attempting it.

I tinkered around a bit.

When creating disposable SSH vaults and SSH clients and giving them the tags “ssh-vault” and “ssh-client”, I can only set:

  1. Allow all SshAgent connections from all qubes tagged “ssh-client” to all qubes tagged “ssh-vault” without asking
  2. Ask every time, but the user has to type in the correct ssh-vault name
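Expressed as policy rules, the two options would look roughly like this (qubes.SshAgent assumed as the service name, per the split-ssh guide):

```
# Option 1: silent allow between the tags -- convenient, but brute force goes unnoticed
qubes.SshAgent  *  @tag:ssh-client  @tag:ssh-vault  allow

# Option 2: always ask -- safe, but the randomized vault name must be typed each time
qubes.SshAgent  *  @tag:ssh-client  @tag:ssh-vault  ask
```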

Option 1 is good for usability: it just works. It is, however, insecure, as one has to pay permanent attention to the notifications to detect and interrupt any brute-force attempts.

Option 2 is good for security. Every access to the keys requires additional user interaction, which is fine for me. What is not fine is that I have to type in a randomized ssh-vault name as the target.

Is anybody aware of a solution for this? I would like to simply say yes or no to a request without having to fill in the target.

Another solution would be to create allow policies for more tags than the host can reasonably virtualize. That way I could create my client-vault pairs with matching tags, for example:

ssh-client-123456 and ssh-vault-654321 with tag “ssh-pair-1”
ssh-client-000000 and ssh-vault-1111111 with tag “ssh-pair-2”

and allow access between them without confirmation. This would require my admin qube to keep track of used pairs so as not to double-spend them. Not too much of a problem for me, but added complexity nevertheless.
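Generating the pair rules in bulk is cheap; a sketch that writes far more pair tags than the host will ever virtualize at once (service name and file name are assumptions):

```shell
# Emit one allow rule per pair tag into a policy file; in dom0 this would
# land in /etc/qubes/policy.d/. Here it is written to the current directory.
out="30-ssh-pairs.policy"
: > "$out"
i=1
while [ "$i" -le 10000 ]; do
    echo "qubes.SshAgent  *  @tag:ssh-pair-$i  @tag:ssh-pair-$i  allow" >> "$out"
    i=$((i + 1))
done
```

The admin qube then only has to tag each new client-vault pair with the next unused ssh-pair-N.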

I am just wondering if this is worth the hassle: an adversary with the ability to execute code on a qube that holds a private key could only extract the one private key for the server he has already pwned to get into that position in the first place. So effectively nothing of value is lost with my simple “copy the private key onto the ssh client” approach, as far as I can see.

vm1 vm2 ask,default_target=vm2
This will prefill your target VM and you’ll only have to click approve.

– edit:
Never mind, I just noticed these qubes also have randomized names, so I believe it doesn’t apply.

Thank you, but the problem is that I have multiple of those vaults, and they are created dynamically on demand. To do this they get a random name, like Qubes does for other dispvms, e.g. “ssh-vault-123456”.

This name is not known in advance, so I cannot easily set the default target in the policy file.

I could “preload” 100 or so specific rules with explicit vault names and try to keep track of all running vaults to avoid name collisions, but this seems ugly and more complex than necessary to me.
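The bookkeeping for such preloaded rules can stay small, assuming the preloaded set uses a fixed, enumerable series of names rather than fully randomized ones; a sketch with a hypothetical state file:

```shell
# next_vault_id: return the lowest id never handed out before, and record
# it so the same slot is not reused even after that vault is destroyed.
STATE="used-vault-ids.txt"
touch "$STATE"

next_vault_id() {
    i=1
    while grep -qx "$i" "$STATE"; do
        i=$((i + 1))
    done
    echo "$i" >> "$STATE"
    echo "$i"
}
```

The creating script would then use vault="ssh-vault-$(next_vault_id)", which matches one of the preloaded rules by construction.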

I opened another thread to discuss this “qrexec ask with domU-chosen target” idea, as I think it would be very useful for many other scenarios.

Fair enough, that is indeed something that could be useful for other purposes.

For the time being, if the temp VMs are created by a script, wouldn’t you be able to append to the policy as soon as the VM is created?
Is the script aware of what the hostname is?

A not-so-pretty alternative could be qvm-volume list and then grepping by the VM prefix (if there’s a running script).

The script itself is aware of the newly given name, so I would be too, if I were doing this from within dom0.

The problem is that I have chosen to do my scripting in an admin VM. This is done so I can easily copy things like long passwords into it and move information around without doing anything in the holy dom0.

I took another look at creating the rules from within the admin qube via the Admin API.

I learned how to list policies, get them and manipulate them from my admin qube.

But this would effectively give the admin VM absolute control over every domU, as it cannot be restricted to only change policies regarding the qubes it creates.

Maybe I am missing something with the Admin API, but I think for that I would need the Operator API, which is not finished yet.

Did you test this? An example?



Rule in /etc/qubes/policy.d/10-testing.policy:

policy.List * adminqube dom0 allow
policy.Get * adminqube dom0 allow target=dom0
policy.Replace * adminqube dom0 allow target=dom0

Test rule in /etc/qubes/policy.d/40-testing.policy:

# This is a test

Script in adminqube, ~/echo:

#!/bin/bash
# Simply echo what the qrexec call returns to the original stdout
cat >&$SAVED_FD_1

Script in adminqube, ~/escalate:

#!/bin/bash
# policy.Replace expects the sha256 token of the current file content
# (as returned by policy.Get) as the first payload line, followed by
# the replacement content.
echo "sha256:df683f66bdcd3e118350d835c58b90f0ee319e678650058bda9fe5afef411233"
echo "# <Insert ASCII-Porn and any rule you like here>"
exec cat >&$SAVED_FD_1

Both scripts need to be executable in the adminqube, of course: chmod +x echo escalate

Listing the rules
qrexec-client-vm dom0 policy.List ./echo gives:


Getting the rule:
qrexec-client-vm dom0 policy.Get+40-testing ./echo

# This is a test

Modifying the rule:
qrexec-client-vm dom0 policy.Replace+40-testing ./escalate

And indeed this is the file now:

# <Insert ASCII-Porn and any rule you like here>

I have not found anything on how to restrict a qube with these privileges from editing any rule it wants. As far as I understand it, the Operator API should be used for that, as the Admin API does not offer granular access control.

Edit: This took me a while to figure out.

Here are some keywords for the search so ppl can actually find this
qrexec admin policy
admin qube change policy

Did you try to use @tag:created-by-adminqube in your policies?

Yup. IIRC it then does not work at all.
But I cannot see how this could work anyway, as I can only change whole files, not individual lines.

Ah, I didn’t see it anywhere in your policies. Just a noob question, or maybe it isn’t clear: which VMs did you create with the domU admin VM?