Awesome timing @whoami. I just finished the second part that explains the VM interconnection.
Note: I am using my regular vault VM for SSH keys and will therefore call the respective VM “vault” throughout this post. It is very much possible to use a separate VM with another name (as you did with your vault-keepassxc, for example); just adapt accordingly. It should even be possible to use multiple SSH agent VMs, but I have no experience with this and don’t know how the RPC stuff would need to be adapted for that to work.
Let’s follow along with Denis Zanin’s guide (link in your first post) because it worked well for me (and I don’t want to take too much credit for other people’s work, just trying to explain a bit more what we are configuring here).
First, we need to install software in the TemplateVM(s) which both the vault and our SSH client VM are based on.
In a default setup, this would be one VM, but you might have more.
- If it’s Fedora-based, run `sudo dnf install nmap-ncat`.
- If it’s Debian-based, go with `sudo apt install ncat`.
- If it’s anything else, chances are you know how to install software in that Linux distribution ;).
The software differs from what Denis Zanin is suggesting. This is because
- (open)ssh-askpass won’t be needed if we use KeePassXC as our SSH agent.
- I think his package list for Debian is flawed. The relevant thing we want to achieve with these packages is to get the `ncat` program, which is provided by the above-mentioned package. However, I have no practical experience with using a Debian-based VM for this task, so I might be wrong.
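If you want to double-check that the template really provides the binary before moving on, a quick sanity check inside the TemplateVM could look like this:

```sh
# both commands are just sanity checks; exact path/version will differ
command -v ncat   # should print something like /usr/bin/ncat
ncat --version    # should print the Ncat version banner
```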
Using an editor of your choice, create the file `/etc/qubes-rpc/qubes.SshAgent`. Use `sudo` to make sure you create the file as the root user. Put the following content inside:
```sh
#!/bin/sh
# Qubes App Split SSH Script
# originally from deniszanin.com/using-split-ssh-gpg-in-qubes-os/
#
# notify the user whenever a client VM requests key access
notify-send "[`qubesdb-read /name`] SSH agent access from: $QREXEC_REMOTE_DOMAIN"
# relay the client's request to the local SSH agent socket (KeePassXC)
ncat -U "$SSH_AUTH_SOCK"
```
This code is meant for the vault VM and does two things:
- The first command is sort of a safeguard. It will show a message bubble every time an AppVM requests access to an SSH key.
- The second command creates a connection to the SSH socket. This connection will be used to send the client request to the appropriate program (KeePassXC, in our case).
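One step that is easy to miss: the service file must be executable, otherwise qrexec cannot run it. Still inside the template:

```sh
sudo chmod +x /etc/qubes-rpc/qubes.SshAgent
```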
The template should now be ready, so we can shut it down (e.g., by executing `shutdown now` in the terminal that is still open).
Go on by configuring dom0. Create the file `/etc/qubes-rpc/policy/qubes.SshAgent` (again, use sudo/the root user) and insert the following content: `$anyvm $anyvm ask`
If you read about qrexec while waiting for me to write this, you might already know that this is how we define the types of connections we want to allow for the RPC request `qubes.SshAgent` (the one we just created).
The policy we chose allows any VM to be the SSH client and the SSH agent. We want to be informed every time (“ask”) by dom0 when this RPC request is about to happen and can then decide whether or not we want to allow it.
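If you prefer a one-liner over opening an editor in dom0, something like this should do the same (the single quotes keep the shell from expanding `$anyvm`):

```sh
echo '$anyvm $anyvm ask' | sudo tee /etc/qubes-rpc/policy/qubes.SshAgent
```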
There are numerous ways to fine-tune this policy.
For example, if I am sure that my vault VM will be the only SSH agent ever, I could use `$anyvm vault ask,default_target=vault`.
If I don’t want a dom0 pop-up for one specific client VM (because I trust it enough) but still want pop-ups for everything else, I would go with:

```
SUPER_TRUSTED_VM vault allow
$anyvm vault ask
```

(Policy rules are evaluated top to bottom and the first match wins, so the `allow` line has to come before the catch-all `ask`.)
That’s it for dom0. For more details on policy files see https://www.qubes-os.org/doc/qrexec/#policy-files
Denis Zanin goes on to create two new VMs for the SSH vault and SSH client. I skipped this step because I will be using my existing VMs. If you want dedicated ones, I guess you know how to create them. Just make sure to base them on the template we just modified so they have the right software installed, and do not attach any network to the SSH vault VM.
Security side note: if you are considering split-SSH as an additional security layer, it is probably reasonable to also think about which VMs you will be using SSH in. I, for instance, have a dedicated “admin” VM for these purposes. Depending on how many systems you plan to access and where they are located, it could also be nice to have different VMs with different firewall rules for intranet and Internet administration (a sketch follows below).
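To illustrate the idea (the VM name and subnet are made up, adapt to your network), restricting an intranet-admin VM to SSH within the internal address range could look like this in dom0:

```sh
# allow SSH to the internal subnet only, drop everything else
qvm-firewall intranet-admin add accept dsthost=10.0.0.0/8 proto=tcp dstports=22
qvm-firewall intranet-admin add drop
```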
Step 3 of Denis’ guide is about configuring the SSH vault VM. As we already configured KeePassXC (thanks for your screenshots, I think they add much value to the documentation for others!) there is not much left to do. If you like, you could create an autostart entry for KeePassXC (see the last section in https://www.qubes-os.org/doc/software-update-domu/) and make sure that the vault VM starts when you boot your Qubes (configured via the VM’s settings in dom0).
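For the autostart entry, one way is to copy KeePassXC’s desktop file into the vault VM’s autostart directory (the exact .desktop filename is an assumption and may differ between distributions/versions):

```sh
# inside the vault AppVM; ~/.config persists across reboots
mkdir -p ~/.config/autostart
cp /usr/share/applications/org.keepassxc.KeePassXC.desktop ~/.config/autostart/
```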
Shut down the vault VM now.
Last, we (just as Denis does) configure the SSH client.
The first file to edit is `/rw/config/rc.local`, and again, we need root rights for this. Append the following to the end of the file:
```sh
# SPLIT SSH CONFIG
# for ssh-client VM
# file /rw/config/rc.local
# from deniszanin.com/using-split-ssh-gpg-in-qubes-os/
#
# Set next line to the SSH key vault VM you want to use
SSH_VAULT_VM="vault"

if [ "$SSH_VAULT_VM" != "" ]; then
    # socket the local SSH client will talk to
    export SSH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
    rm -f "$SSH_SOCK"
    # listen on the local socket and forward every connection
    # to the vault VM via the qubes.SshAgent RPC service
    sudo -u user /bin/sh -c "umask 177 && ncat -k -l -U '$SSH_SOCK' -c 'qrexec-client-vm $SSH_VAULT_VM qubes.SshAgent' &"
fi
```
By setting SSH_VAULT_VM to “vault” we set the regular vault VM as SSH agent for this particular client. All requests will be made against this VM, so adapt here if required for your own setup.
In case this variable is set (if it is not, we would probably want to use local SSH) we set up and prepare a local SSH socket in the client VM and then initiate the communication between client and server by connecting the client SSH socket with the remote one in our vault VM using the RPC request that we created earlier. In other words: This is where the magic happens.
Make sure that this script is executable (it probably already is, but it doesn’t hurt) by running `sudo chmod +x /rw/config/rc.local` in a terminal of your client VM.
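Once the client VM has been restarted (which we will do in a moment), you can verify that rc.local created the listener; assuming the vault name from above, something like this should show the socket:

```sh
ls -l ~/.SSH_AGENT_vault     # the socket file created by rc.local
ss -xl | grep SSH_AGENT      # the ncat listener bound to it
```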
Then, also edit the file `/home/user/.bashrc` (no sudo required this time) and add the following at the end:
```sh
# SPLIT SSH CONFIG
# for ssh-client VM
# from deniszanin.com/using-split-ssh-gpg-in-qubes-os/
#
# Append this to ~/.bashrc for ssh-vault functionality
# Set next line to the ssh key vault you want to use
SSH_VAULT_VM="vault"

if [ "$SSH_VAULT_VM" != "" ]; then
    # point the SSH client at the socket created by rc.local
    export SSH_AUTH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
fi
```
By specifying the environment variable “SSH_AUTH_SOCK” we let our SSH client know that it should use the socket file which we prepared for remote communication with the SSH agent in the vault VM.
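To confirm that the variable is actually picked up, open a new terminal in the client VM (so .bashrc is re-read) and check:

```sh
echo "$SSH_AUTH_SOCK"
# expected (with the vault name used here): /home/user/.SSH_AGENT_vault
```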
Shut down your client VM now. The setup of split-SSH should now be complete.
As I said in my previous post, if we configure our SSH client correctly, we can load more than five keys into our SSH agent. So let’s do this.
Start your vault VM and unlock your KeePassXC database.
Make sure that you already inserted an SSH key in there and that it is correctly loaded (check with `ssh-add -L`; see my previous post for more info).
Start your client VM and issue an `ssh-add -L` on its command line as well.
A dom0 window should now pop up, informing you that your client VM would like to initiate the RPC request “qubes.SshAgent” and asking for the target VM that this request should be executed against. If you specified a `default_target` in the policy, this VM should be preselected now. Otherwise, type in/choose the vault VM from the drop-down list and accept the request. The output of the command should then appear in the client VM’s terminal and show all of the SSH keys that have been loaded through KeePassXC.
If you made it this far, your split-SSH is basically working!
When I set this up myself, I first had issues using all of my different SSH identities through this approach, because I had no way of specifying them individually, and my SSH calls would fail with authentication errors. In a regular SSH setup, you would provide the path to the public key by adding the `-i` switch to your SSH command or by specifying the “IdentityFile” setting in your SSH config. But since we are using a split approach, our client VM does not have this public key.
There might be a more elegant solution to this problem, but the obvious way is to copy over the SSH public key to our client VM and store it there permanently. As the public key is meant to be, well, public, there is no security risk attached to it.
So `qvm-copy` your public key over and store it in a location of your choice. I go with the same setup that I described for my vault VM: one folder for every host, so everything stays cleanly separated.
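As a concrete (made-up) example with a host foo.example.org, the copy could look like this:

```sh
# in the vault VM: send the public key to the client VM
qvm-copy ~/.ssh/keys/foo.example.org/id_ed25519.pub

# in the client VM: qvm-copy drops files into ~/QubesIncoming/<source VM>
mkdir -p ~/.ssh/keys/foo.example.org
mv ~/QubesIncoming/vault/id_ed25519.pub ~/.ssh/keys/foo.example.org/
```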
My SSH config (`/home/user/.ssh/config`) has a block looking like this for every host I configured:
```
Host FOO
    Hostname foo.example.org
    User phl
    IdentitiesOnly yes
    IdentityFile /home/user/.ssh/keys/foo.example.org/id_ed25519.pub
```
The option “IdentitiesOnly yes” is crucial and was required to make it work for me. It limits the keys that my SSH client presents to the server to the one specified in the “IdentityFile” option. Otherwise, SSH might decide to offer other keys as well. Apart from obvious usability problems (I might get locked out if I try too many wrong keys), this also has a privacy implication: a rogue SSH server that learns several of my public keys might be able to link my different online identities.
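If you want to see which identity your client actually offers, a verbose connection attempt helps (FOO being the host alias from the config block above):

```sh
ssh -v FOO 2>&1 | grep -i offering
# should list only the key from the IdentityFile line
```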
So, after you worked through this wall of text, I hope you not only have a working split-SSH setup, but also understand the changes you made to your Qubes system.
If anything remains unclear, just ask.