This thread has resulted in the following: split-gpg community guide
I am looking for a split ssh installation guide.
Which works (still works with the latest version) ?
Which is the best / “simplest” setup ?
Any remarks / hints ?
I think in the end most of these guides are working and describe the same thing, with smaller differences that aren’t important in your everyday use.
For example, the guide from Kushal Das describes how to set up a connection between the SSH client and vault VM using the file /rw/config/rc.local, while the one from Peter Gerber uses a systemd service to achieve essentially the same thing.
When I researched split-ssh for my own setup I got the impression that most guides require that you know a little bit more about Qubes internals (VM interconnection, RPC policies, …) to really understand them and the implications of configuring certain files the way it is described. I guess setting up split-ssh is considered an advanced topic and thus readers are assumed to have this knowledge.
Before you go on with setting anything up, I would recommend to start by understanding the basic concepts of this:
I personally used both the guide from Denis Zanin which you already linked and the following documentation on Github:
In my setup I chose my regular vault VM as the VM where the ssh-agent lives. Imho, this is an extremely powerful setup because you can store your SSH keys in your regular KeePassXC database. For more details on how to use KeePassXC as ssh-agent, see, e.g.:
Thanks for your comments!
I like the additional KeePassXC security layer. This would also allow me to simply backup all-in-one with just one kdbx-file.
What are the pros and cons of rc.local vs. a systemd service?
I would appreciate if you drop your setup / comments as an express guide so that I can do a test to confirm that it is working fine. Ultimately, it could be a valuable chapter in the official docs (next to the GPG setup).
I’ll try and write something down tonight when I’m at home with access to my Qubes laptop.
Debating over decisions like rc.local vs. systemd goes far beyond Qubes-specific stuff and ultimately has no impact on the security of your setup.
Both are a way to assure that a specific command is launched once the AppVM is started so that a connection between the SSH client and the SSH agent is established.
One person might prefer the rc.local approach because there are a lot of (almost religious) debates about whether or not systemd should be considered evil and avoided at all costs.
Someone else might prefer systemd because running a command as a service allows monitoring and restarting it in case something crashes (while starting it once via rc.local is sort of a “fire and forget” approach).
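For comparison, a user-level systemd unit in the spirit of the service-based approach might look roughly like this. This is a sketch only: the unit name, the ncat command and the VM name “vault” are my assumptions, not the actual unit from Peter Gerber’s guide.

```
# split-ssh-forwarder.service (hypothetical name) - runs the same
# socket-forwarding command as the rc.local snippet, but supervised.
[Unit]
Description=Forward a local SSH agent socket to the vault VM (sketch)

[Service]
# Remove a stale socket, then listen and forward each connection
# through the qubes.SshAgent RPC call (%h = the user's home directory).
ExecStartPre=/bin/rm -f %h/.SSH_AGENT_vault
ExecStart=/bin/sh -c 'umask 177 && exec ncat -k -l -U %h/.SSH_AGENT_vault -c "qrexec-client-vm vault qubes.SshAgent"'
# This is the practical advantage over rc.local: automatic restart.
Restart=on-failure

[Install]
WantedBy=default.target
```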
I’d say you can go with any approach that the how-to guide of your choice uses.
Thanks phl, I’m looking forward to your description.
Besides this, I would like to raise a more general question:
Does it make sense to have this split GPG + SSH + password manager “vault AppVM” as a standard installation option (like Whonix)? I guess it is not a big deal (I hope I do not underestimate the effort), and I guess 80% of all Qubes users have manually set up a vault AppVM for GPG, SSH and/or passwords.
What do you think ?
Sorry, my offline life got in the way yesterday evening. I managed to write the first part about configuring your KeePass correctly, so here it comes.
Regarding your question about standard installation options I suggest you create a new topic in the “General Discussion” section. It could be interesting to think about this, but it would be important to not break typical workflows by setting up a remote ssh-agent as default setting.
Anyway: Setting up KeePassXC
Make sure KeePassXC is installed in the templateVM that you base your vaultVM on. By default, this would be fedora-30 and you should have upgraded to fedora-31 or even -32 by now. KeePassXC should be installed in there by default but on older Qubes installations you might still have set KeePassX2 as your default (this one is not actively maintained and I don’t know if it even supports storing SSH keys).
Start KeePassXC in your vaultVM, go to Settings -> SSH Agent and tick the checkbox which says “Enable SSH Agent integration”. Close and reopen KeePassXC now (make sure to not only minimize it to tray, if you have activated that option, but to perform a complete restart of the application).
Open a Terminal in your vaultVM and create a first test key. I like to place them in different subfolders so they are cleanly separated:
mkdir -p /home/user/.ssh/keys/testkey
Run ssh-keygen -t ed25519 and follow the interactive prompt. Use /home/user/.ssh/keys/testkey/id_ed25519 as the file to save the key and specify a password for it (let’s use TestPassword123 for now).
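For reference, here is a non-interactive equivalent of these two steps (a sketch; the path and TestPassword123 are just the example values from above, and in the vault VM $HOME is /home/user):

```shell
# Create the per-key subfolder and generate a password-protected
# ed25519 test key without the interactive prompt.
keydir="$HOME/.ssh/keys/testkey"   # /home/user/.ssh/keys/testkey in the vault VM
mkdir -p "$keydir"
ssh-keygen -t ed25519 \
    -f "$keydir/id_ed25519" \
    -N 'TestPassword123' \
    -C 'split-ssh test key'
ls "$keydir"   # id_ed25519  id_ed25519.pub
```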
Create a new entry in KeePassXC for your key:
4.1. Click the “Add a new entry” icon and fill out the usual fields
4.2 “Title:” Test key
4.3 The “Username:” field is not important for the method to work. Use something that helps you to identify/distinguish your SSH keys
4.4 “Password:” TestPassword123 (KeePassXC will use this to unlock your key)
Go to the SSH Agent tab and specify some more options:
5.1. Activate the first two options to load the key when you unlock the database and remove it when you close it
5.2. User confirmation is not required (Qubes will do that for us)
5.3. “Remove key from agent after” -> I personally don’t use this. As long as my kdbx is unlocked the key can be used. As every use will trigger a dom0 pop up I think this is secure enough.
5.4. Specify private key: Here you have two options. I currently use “External file” and specify the path to the key file we just created (/home/user/.ssh/keys/testkey/id_ed25519). You could also attach the key file to your KeePass database. I suppose this would allow you to just back up the kdbx and have everything important in it, as you described earlier. But as I said, I don’t have experience with this.
Security notice: Your vaultVM is considered trusted but nevertheless we store passwords in KeePass and not in plain text files. If you create a private key without specifying a password, this will write sensitive data to your vaultVM in plain text. You probably wouldn’t want to do this (but using this setup, there is not really a need to create private keys without password protection anyway).
Run ssh-add -L; your public key should now show up in the output.
Congrats, you successfully set up the first part: using KeePassXC as your ssh-agent. Once I have more time (I hope this will mean this weekend) I’ll document the rest of the steps to get the inter-VM communication working.
Also note that the fedora magazine article I posted earlier says that there is a limitation of how many keys you can add to your ssh-agent due to automatic lockout after 5 failed attempts. If we configure our SSH client in the correct way, this won’t be a problem and you can create as many keys as you like. I personally like to keep digital identities apart, and therefore use one key per user and remote system I connect to.
thanks again for your support.
I followed your description and made a screenshot story:
First, I upgraded the Fedora template VM, following this https://www.qubes-os.org/doc/templates/#switching (no screenshots).
Create a new vault. I named it “vault-keepassxc”; no network access.
In the “vault-keepassxc” Qubes settings select KeePassXC:
Skip this step if you have already your keys. If not:
Open the “vault-keepassxc” Terminal and generate an SSH key pair with a command like:
ssh-keygen -t ed25519 -a 500 -f ~/.ssh/myssh_key
Launch KeePassXC and generate a database for the ssh keys:
denied auto-update (since no network active)
Follow the wizard and make your security settings.
Here, the only important field for the ssh-agent is “Password”; this has to be the SSH key passphrase.
Go to “Advanced” and add your keys.
Remarks: You only need the private key (here myssh_key), but if you want to use the database for a simple all-in-one backup you can also add the public key (myssh_key.pub). If everything is still at the default AppVM settings you will not see your key in the browse window, since it is hidden (.ssh folder). To make it visible you have to check this option:
Next, enable the SSH Agent within the KeePassXC Application Settings
Quit KeePassXC and open your database again. Now, you can check the SSH Agent integration status.
Final setting for the ssh key, select the private key (myssh_key)
Apply all settings, close KeePassXC (your database) again, and verify your setup with the command ssh-add -L in the “vault-keepassxc” Terminal.
Lastly, just as a time / version reference for this screenshot guide:
Qubes 4; Fedora 32; KeePassXC 2.6.1
Awesome timing @whoami. I just finished the second part that explains the VM interconnection.
Note: I am using my regular vault VM for SSH keys and therefore will call the respective VM “vault” throughout this post. It is very much possible to use a separate VM with another name (as you did with your vault-keepassxc for example), just adapt accordingly (it should even be possible to use multiple SSH Agent VMs but I have no experience with this and don’t know how the RPC stuff would need to be adapted for this to work).
Let’s follow along with Denis Zanin’s guide (link in your first post) because it worked well for me (and I don’t want to take too much credit for other people’s work, just trying to explain a bit more what we are configuring here).
First, we need to install software in the TemplateVM(s) which both the vault and our SSH client VM are based on.
In a default setup, this would be one VM, but you might have more.
If it’s Fedora-based, do sudo dnf install nmap-ncat; if it’s Debian-based, go with sudo apt install ncat; if it’s anything else, chances are you know how to install software in this Linux distribution ;).
The software differs from what Denis Zanin suggests. This is because the scripts below use the ncat program, which is provided by the above-mentioned packages. However, I have no practical experience with using a Debian-based VM for this task, so I might be wrong.
Using an editor of your choice, create the file /etc/qubes-rpc/qubes.SshAgent (use sudo to make sure you create the file as the root user). Put the following content inside:
#!/bin/sh
# Qubes App Split SSH Script
# originally from deniszanin.com/using-split-ssh-gpg-in-qubes-os/
#
notify-send "[`qubesdb-read /name`] SSH agent access from: $QREXEC_REMOTE_DOMAIN"
ncat -U $SSH_AUTH_SOCK
This code is meant for the vault VM and does two things: it pops up a notification whenever a VM requests access to the SSH agent, and it connects the incoming request to the local agent socket.
The template should now be ready so we can shut it down (e.g., by executing
shutdown now in the terminal that is still open).
Go on by configuring dom0. Create the file
/etc/qubes-rpc/policy/qubes.SshAgent (again use sudo/the root user) and insert the following content:
$anyvm $anyvm ask
If you read about qrexec while waiting for me to write this you might already know that this is how we define the types of connections that we want to allow for the RPC request
qubes.SshAgent (the one we just created).
The policy we chose allows any VM to be the SSH client and the SSH agent. We want to be informed every time (“ask”) by dom0 when this RPC request is about to happen and can then decide whether or not we want to allow it.
There are numerous ways to fine-tune this policy.
For example, if I am sure that my vault VM will be the only SSH agent ever I could use
$anyvm vault ask,default_target=vault.
If I don’t want a dom0 pop up for one specific client VM (because I trust it enough) but still show them for everything else I would go with:
SUPER_TRUSTED_VM vault allow
$anyvm vault ask
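Putting those ideas together, a hypothetical /etc/qubes-rpc/policy/qubes.SshAgent could read as follows. The VM name “admin” is a placeholder, and note that order matters: the first matching line wins.

```
# dom0:/etc/qubes-rpc/policy/qubes.SshAgent ("$"-style syntax, Qubes 4.0)
# A trusted admin VM connects silently; everything else asks,
# pre-selecting "vault" as the target in the dom0 prompt.
admin   vault   allow
$anyvm  vault   ask,default_target=vault
```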
That’s it for dom0. For more details on policy files see https://www.qubes-os.org/doc/qrexec/#policy-files
Denis Zanin goes on with creating two new VMs for the SSH vault and SSH client. I skipped this step because I will be using my existing VMs. If you want dedicated VMs I guess you know how to create more VMs. Just make sure to base them on the template that we just modified so they have the right software installed and do not attach any network to the SSH vault VM.
Security side note: If you are considering split-SSH as an additional security layer it is probably reasonable to also think about which VMs you will be using SSH in. I for instance have a dedicated “admin” VM for these purposes. Depending on how many systems you plan to access and where they are located, it could also be nice to have different VMs with different firewall rules for Intranet and Internet administration.
Step 3 of Denis’ guide is about configuring the SSH vault VM. As we already configured KeePassXC (thanks for your screenshots, I think they add much value to the documentation for others!) there is not much left to do. If you like, you could create an autostart entry for KeePassXC (see the last section in https://www.qubes-os.org/doc/software-update-domu/) and make sure that the vault VM starts when you boot your Qubes (configured via the VM’s settings in dom0).
Shut down the vault VM now.
Last, we (just as Denis does) configure the SSH client.
The first file to edit is
/rw/config/rc.local and again, we need root rights for this. Append the following to the end of the file.
# SPLIT SSH CONFIG
# for ssh-client VM
# file /rw/config/rc.local
# from deniszanin.com/using-split-ssh-gpg-in-qubes-os/
#
# Uncomment next line to enable ssh agent forwarding to the named VM
SSH_VAULT_VM="vault"
if [ "$SSH_VAULT_VM" != "" ]; then
  export SSH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
  rm -f "$SSH_SOCK"
  sudo -u user /bin/sh -c "umask 177 && ncat -k -l -U '$SSH_SOCK' -c 'qrexec-client-vm $SSH_VAULT_VM qubes.SshAgent' &"
fi
By setting SSH_VAULT_VM to “vault” we set the regular vault VM as SSH agent for this particular client. All requests will be made against this VM, so adapt here if required for your own setup.
In case this variable is set (if it is not, we would probably want to use local SSH) we set up and prepare a local SSH socket in the client VM and then initiate the communication between client and server by connecting the client SSH socket with the remote one in our vault VM using the RPC request that we created earlier. In other words: This is where the magic happens.
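One detail worth highlighting: the umask 177 in that one-liner makes ncat create the socket with owner-only permissions, so other accounts in the client VM cannot talk to your agent. A tiny sketch of the effect (using a regular file as a stand-in for the socket):

```shell
# Files (and sockets) created under umask 177 get mode 600:
# read/write for the owner, nothing for group and others.
tmpdir=$(mktemp -d)
( umask 177 && touch "$tmpdir/demo" )
stat -c '%a' "$tmpdir/demo"   # prints 600
```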
Make sure that this script is executable (it probably already is but it doesn’t hurt) by running
sudo chmod +x /rw/config/rc.local in a terminal of your client VM.
Then, also edit the file
/home/user/.bashrc (no sudo required this time) and add the following at the end:
# SPLIT SSH CONFIG
# for ssh-client VM
# from deniszanin.com/using-split-ssh-gpg-in-qubes-os/
#
# Append this to ~/.bashrc for ssh-vault functionality
# Set next line to the ssh key vault you want to use
SSH_VAULT_VM="vault"
if [ "$SSH_VAULT_VM" != "" ]; then
  export SSH_AUTH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
fi
By specifying the environment variable “SSH_AUTH_SOCK” we let our SSH client know that it should use the socket file which we prepared for remote communication with the SSH agent in the vault VM.
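To see what this actually does, you can run the same lines in a throwaway shell (a sketch; “vault” is the example VM name). ssh and ssh-add consult SSH_AUTH_SOCK to find the agent socket, so after this it points at the file that the ncat listener from rc.local created:

```shell
# Reproduce the ~/.bashrc snippet and inspect the resulting value
# ("vault" is the example agent-VM name; adapt to your setup).
SSH_VAULT_VM="vault"
if [ "$SSH_VAULT_VM" != "" ]; then
    export SSH_AUTH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
fi
echo "$SSH_AUTH_SOCK"   # e.g. /home/user/.SSH_AGENT_vault
```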
Shut down your client VM now. The setup of split-SSH should now be complete.
As I said in my previous post, if we configure our SSH client correctly we can load more than five keys into our SSH agent. So let’s do this.
Start your vault VM and unlock your KeePassXC database.
Make sure that you already inserted an SSH key in there and that it is correctly loaded (check with
ssh-add -L, see my previous post for more info).
Start your client VM and issue an
ssh-add -L on its command line as well.
A dom0 window should now pop up informing you that your client VM would like to initiate the RPC request “qubes.SshAgent” and asks for the target VM that this request should be executed against. If you specified a
default_target in the policy, this VM should be selected now. Otherwise, just type it/choose the vault VM from the drop down list and accept the request, then the output of the command in the client VM’s terminal should appear and show all of the SSH keys that have been loaded through KeePassXC.
If you came this far, this means your split-SSH is basically working!
When I set this up myself, I first had issues to use all of my different SSH identities through this approach, because I had no way of specifying them individually and my SSH calls would fail with authentication errors. In a regular SSH setup, you would provide the path to the public key by adding the
-i switch to your SSH command or by specifying the “IdentityFile” setting in your SSH config. But since we are using a split approach, our client VM does not have this public key.
There might be a more elegant solution to this problem, but the obvious way is to copy over the SSH public key to our client VM and store it there permanently. As the public key is meant to be, well, public, there is no security risk attached to it.
qvm-copy your public key over and store it in a location of your choice. I go with the same setup that I described for my vault VM. One folder for every host to have everything cleanly separated.
My SSH config (/home/user/.ssh/config) has a block looking like this for every host I configured:
Host FOO
    Hostname foo.example.org
    User phl
    IdentitiesOnly yes
    IdentityFile /home/user/.ssh/keys/foo.example.org/id_ed25519.pub
The option “IdentitiesOnly yes” is crucial and was required to make it work for me. It limits the keys that my SSH client presents to the server to the one that I specified in the “IdentityFile” option. Otherwise, SSH might decide to provide other keys as well. Apart from obvious usability problems (I might get locked out if I try too many wrong keys) this also has a privacy implication. A rogue SSH server which learns different public keys might be able to link different online identities of mine.
So, after you worked through this wall of text, I hope you not only have a working split-SSH setup, but also understand the changes you made to your Qubes system.
If anything remains unclear, just ask.
Alrighty, this was fun with Qubes OS
Here is my proof of your description, again as a screenshot story (I used some coloring to clearly highlight the different domains):
Set up the SSH client VM; here I named it “server-connector”. This is the one which will finally be used to make the SSH connection to your remote machine (e.g. a VPS).
Install the required packages in the template VM (here it is Fedora 32)
sudo dnf install nmap-ncat
(Still in the template VM) create the file qubes.SshAgent and add the following content, using the command sudo nano /etc/qubes-rpc/qubes.SshAgent (looks a bit odd, but keep the file name as qubes.SshAgent). If you do not have nano installed you can use any other editor, or quickly install nano with sudo dnf install nano.
#!/bin/sh
# Qubes App Split SSH Script
# safeguard - Qubes notification bubble for each ssh request
notify-send "[`qubesdb-read /name`] SSH agent access from: $QREXEC_REMOTE_DOMAIN"
# SSH connection
ncat -U $SSH_AUTH_SOCK
We are done with the template VM (Fedora 32). For safety reasons it is recommended to stop this VM with a simple shutdown now in the terminal.
Moving on in dom0:
sudo nano /etc/qubes-rpc/policy/qubes.SshAgent and add the following content:
This is it for dom0. Close the dom0 terminal (do not shut down; just close the terminal window).
Next station: open the terminal of the server-connector VM, open the rc.local file with the command sudo nano /rw/config/rc.local, and add the following lines.
# SPLIT SSH CONFIGURATION >>>
# replace "vault-keepassxc" with your AppVM name which stores the ssh private key(s)
SSH_VAULT_VM="vault-keepassxc"
if [ "$SSH_VAULT_VM" != "" ]; then
  export SSH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
  rm -f "$SSH_SOCK"
  sudo -u user /bin/sh -c "umask 177 && ncat -k -l -U '$SSH_SOCK' -c 'qrexec-client-vm $SSH_VAULT_VM qubes.SshAgent' &"
fi
# <<< SPLIT SSH CONFIGURATION
(Still in the server-connector VM) open the next file, .bashrc, with the command nano ~/.bashrc and paste the following lines.
# SPLIT SSH CONFIGURATION >>>
# replace "vault-keepassxc" with your AppVM name which stores the ssh private key(s)
SSH_VAULT_VM="vault-keepassxc"
if [ "$SSH_VAULT_VM" != "" ]; then
  export SSH_AUTH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
fi
# <<< SPLIT SSH CONFIGURATION
Almost done. Shut down both the “vault-keepassxc” VM and the “server-connector” VM, then test that your configuration works as expected.
For both tests the dom0 pop-up window will appear, but the connection will only be made when vault-keepassxc has the unlocked database accessible.
Thanks phl for all your explanation.
I will use this description for a VPS hardening guide (based on Qubes split SSH key + KeePassXC + YubiKey).
@deeplow feel free to purge and merge the content and make it available for everyone - in the official docs.
Such a nice thread when people are committed to helping others!
Thanks a lot @phl for the expertise and the time to look through those guides and write all the details of the best practices. And thanks @whoami for bringing up the question and following through with those nice guiding pictures.
I think this effort deserves to be consolidated into a guide! (Perhaps on the unofficial docs). I’ll try to find the time to compile this over the next weeks. But I’ll have to do some reading on ssh-agents first
@phl Is it ok if I grab some explanations from your posts too onto that?
Happy to see that I could help you @whoami.
It would be great to help others as well by bringing this into proper guide form. I already thought about this but usually have only quite limited spare time. Your help with this would be more than welcome.
But as I said, most of my description is taken from other people’s guides.
Ultimately, I would love to see an integration into Qubes on a level like split-GPG (which, to be honest, I haven’t used so far, but the docs look interesting), but I don’t really have the time to work out what would be required for this…
Now, after this test I wanted to do my final setup. I already did a few starts and cross-checked all settings and files (especially AppVM names) etc.
I do not get it working, and my guess is that the issue is the SSH_AUTH_SOCK value.
I know I can override it in KeePassXC but I am not sure what to put in.
Currently, it shows:
in the vault AppVM (KeePassXC SSH Agent Settings): tmp/ssh-m11FEqe8omNf/agent.564
It also shows the green check, and the terminal check confirms that all is fine in the vault AppVM:
ssh-add -L returns the correct public key.
in the client AppVM:
ssh-add -L returns:
Error connecting to agent: No such file or directory
SSH_AUTH_SOCK=/tmp/ssh-ymOM8YSX23Sm/agent.1059; export SSH_AUTH_SOCK;
SSH_AGENT_PID=1060; export SSH_AGENT_PID;
echo Agent pid 1060;
I tried already some ssh-agent kills and restart, reboots …
Do you have any hints for me?
(first reply by email to a thread. This still feels a bit weird but I hope everything is formatted correctly.)
I’d say the most likely cause is a typo in one of your config files. I
personally have never seen such a message and have not touched
SSH_AUTH_SOCK in KeePassXC at all.
Re-check your configuration files in the client appVM (I currently can’t
look at the screenshots you provided, maybe there is something obvious
to see there).
It seems that the SSH_AUTH_SOCK value is not correctly exported as you
want. It should say something like “/home/user/.SSH_AGENT_vault-keepassxc”.
Edit: I just realized you already had a working setup for your screenshots, so they obviously can’t show any error. What did you change in the meantime, @whoami?
My guess is you wanted to change from your testing vault-keepassx VM to your regular vault and forgot to change the VM name somewhere.
Look at the output of
printenv in your client VM. How does it look like? Do you see the variables SSH_SOCK, SSH_VAULT_VM, SSH_AUTH_SOCK? Do their values look okay to you?
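For orientation, here is a sketch of what “okay” looks like, simulating the values the two snippets should produce (VM name “vault-keepassxc” as in @whoami’s setup). A value like /tmp/ssh-XXXX/agent.NNN instead means a locally started ssh-agent has overridden the split-SSH socket (e.g. .bashrc was not sourced, or eval $(ssh-agent) ran afterwards):

```shell
# Expected shape of the client-VM environment after rc.local
# and ~/.bashrc have run (simulated here for illustration).
SSH_VAULT_VM="vault-keepassxc"
SSH_AUTH_SOCK="$HOME/.SSH_AGENT_$SSH_VAULT_VM"
export SSH_VAULT_VM SSH_AUTH_SOCK
printenv SSH_AUTH_SOCK   # e.g. /home/user/.SSH_AGENT_vault-keepassxc
```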
Yes, I didn’t know that this is even possible. This whole qubes-os.discourse.group is really good. Nice user experiences.
Meanwhile I did the gpg-split. Also super easy and the setup is quickly done (as long as you keep to the basic setup).
Concerning my split-SSH setup, I didn’t get it running again. … Since I’m rethinking my “qubes-setup”, I will quickly reinstall it and make one vault for passwords (KeePassXC), SSH and GPG. I will keep you posted.
@phl @deeplow just FYI, I found the issue:
In my previous screenshot story I also copied the code, but it had a copy & paste issue. When you copy & paste from a small terminal window, nano displays “>” for long command lines, therefore:
... notify-send "[`qubesdb-read /name`] SSH agent access from: $QREXEC_REMOTE_DOMAI> ...
instead of this
... notify-send "[`qubesdb-read /name`] SSH agent access from: $QREXEC_REMOTE_DOMAIN" ...
It took a while to find this … but now all works fine. I have already edited the previous post to contain the right text.
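A quick way to catch this class of paste accident before it bites: lines clipped by nano’s wrap marker end in “>”, which a short grep can flag (a sketch; the sample file just reproduces a broken line like the one above):

```shell
# Write a sample script containing one clipped line, then flag
# any line that ends with the '>' truncation marker.
f=$(mktemp)
cat > "$f" <<'EOF'
ncat -U $SSH_AUTH_SOCK
notify-send "[`qubesdb-read /name`] SSH agent access from: $QREXEC_REMOTE_DOMAI>
EOF
grep -n '>$' "$f"   # reports the clipped line with its line number
```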
Haha. That’s quite an evil typo.
Thanks for noticing it!
Unfortunately, I haven’t found the time. So if anyone else has the time, please pick this up.
I wish I had. I barely have time to read this forum at times. I’d love to contribute to the docs one day, but it probably won’t be anytime soon.
@phl, for me it is the issue - time - but I’m keen to support Qubes.
I will start reading:
I guess, it will take some time but I will make it …with your support
I had a few other things in mind so I will first start with something more simple
Hey there @whoami and @phl (was going to mention you too deeplow but new users can only mention two people ). So I wanted to help in some way and I’ve created sort of a template guide for this using your work available here. The language and possibly everything needs some work but I thought it would be easier for you to fix rather than to write. I will keep working on this and get it as finished as possible in the meantime but if you want to work on it here are some obviously broken stuff:
For the nmap-ncat install prompt I left out the output because I didn’t run it myself and had nowhere to copy from.
I will be working on these issues and fixing the language to fit in more with the qubes-doc in general over the next couple of days. Feel free to jump in: fork, submit PRs, or just ignore it, as it literally is your work. And also, thank you!