Automation of remote administration

I have a little project to automate some stuff. I would love to get your opinion on that.

Automating remote administration (in Qubes OS)

So remote administration. It’s nice. ssh keys, authenticated onion services, split-ssh, dispvms.

This is what it looks like:

Basically a split-ssh system using dispvms and authenticated onion services, plus another qube that is used for clearnet administration (ignore it, it is not important right now).

Them process

So but how does one add a server into all that to make it chooch?

5 qubes are involved in my optimized process (my real life one is even more complicated and annoying lol):

  • [vanity] creates the vanity address
  • [vault] has the password manager
  • [sys-connector] Whonix-gw with the auth keys
  • [tor-ssh-dvm] the ssh dvm
  • [tor-ssh-dvm-template] the ssh dvm template
  • [server] not a qube, but i will call it one for simplicity

But adding a new server is annoying. One has to do:

  1. [vault] Create ssh keys
  2. [vanity] Create vanity address for server
  3. [vault] Create authentication key for hidden service
  4. [vault] Create entry in password manager
  5. [sys-connector] Add authentication key to sys-connector and reload
  6. [vault] Add ssh key to password manager
  7. [tor-ssh-dvm-template] Connect via tor-ssh-dvm to server (IP)
  8. [server] Install tor on server
  9. [server] Setup tor onion service on server and start/enable it
  10. [server] Set up sshd to also listen on the onion service
  11. [tor-ssh-dvm-template] Create nickname on dvm-template for new server
  12. [tor-ssh-dvm-template] Connect via dvm-template (to add the ssh fingerprint and see if it works)
  13. [server] Configure server to only listen on onion service
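
For the server-side steps (8–10 and 13), the configuration boils down to something like this (a sketch from memory; the paths assume a Debian-style Tor package, so double-check against your distro):

```
# /etc/tor/torrc on the server (steps 8-9)
HiddenServiceDir /var/lib/tor/ssh_onion
HiddenServiceVersion 3
HiddenServicePort 22 127.0.0.1:22

# /etc/ssh/sshd_config (step 13: bind sshd to localhost only,
# so it is reachable solely through the onion service)
ListenAddress 127.0.0.1
```

For step 10 you would initially keep the public ListenAddress as well, and only restrict it after the onion connection is verified in step 12.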

99% of this process is annoying shit i don’t want to deal with.

I want to grab 50 VPS boxes, click “go” somewhere and have stuff happen until ssh is set up for all of them, not waste a week of manual work on this.

So let’s do it! This is what my plan looks like.
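
As a sketch of what “click go” could look like from the vault: a driver script where run() is just a stub that prints the plan (in a real setup each call would be something like qvm-run --pass-io; all names and commands here are illustrative, not a working implementation):

```shell
#!/bin/bash
# Hypothetical vault-side driver for the 13 steps above.
# run() only prints what would happen; swap it for qvm-run in real use.
run() { printf '[%s] %s\n' "$1" "$2"; }

new_server() {
  local nick="$1" ip="$2"
  run vault "ssh-keygen -t ed25519 -f /tmp/$nick -N ''"           # step 1
  run vanity "create vanity onion for $nick"                       # step 2
  run vault "create client auth key + password manager entry"      # steps 3-4, 6
  run sys-connector "install auth key and reload tor"              # step 5
  run tor-ssh-dvm-template "bootstrap $nick over clearnet ($ip)"   # steps 7-10
  run tor-ssh-dvm-template "add nickname + host key for $nick"     # steps 11-12
  run tor-ssh-dvm-template "switch $nick to onion-only sshd"       # step 13
}

new_server web1 203.0.113.10
```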

Why did you…

Why vanity addresses?

Because i am dope. I need to create the onion address on the administrating machine anyways as i need the address for the key. I could read it from the server after installing Tor, but this makes stuff more complicated.

Why an additional vanity qube?

Because i have one with the max number of CPU cores anyways, because fast as fuck boiii. Qubes are cheap! I could do it in the vault qube, but then this qube would have n-1 CPU cores idling 99.9% of the time, or would be painfully slow 0.1% of the time. Also there would be more code in the vault that can explode. The other option would be to use a dispvm that is created, mkp224o installed, and then starts the brute forcing. I don’t think this is possible if the initiating qube is not dom0. If there is a way: Please let me know!

Why an additional whonix-gw?

To mitigate the possibility of auth keys leaking in case there is an exploit for Whonix-gw. This gw is only used to connect to my own services with authentication.
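
Concretely, v3 onion client authorization is two small files (formats from memory, so verify against the Tor manual; all key material and names here are placeholders):

```
# on the server: <HiddenServiceDir>/authorized_clients/admin.auth
descriptor:x25519:<base32 public key>

# in sys-connector's torrc
ClientOnionAuthDir /var/lib/tor/onion_auth

# /var/lib/tor/onion_auth/myserver.auth_private
<56-char onion address without .onion>:descriptor:x25519:<base32 private key>
```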

Why the tor-ssh-dvm?

If my ssh machine gets pwned, my servers are pwned. To minimize potential lateral
movement dispvms are the way to go for this.

Why is the vault calling everything?

The chain of commands should go from trusted → untrusted to minimize
lateral movement. While i do not prefer vault doing this from a usability
perspective, i think it is the safest option to do it that way. I also do not want to use dom0; more on that later.

Your returns go from untrusted → trusted, no good
Yeah. Not ideal. To be honest: I don’t think it is a big problem. Should the tor-ssh-dvm-template be malicious i am fucked anyways. The rest is not network facing and the stuff that i get in return would have been put into the KeePassXC anyways. If you have another opinion or some improvements, please let me know.

Why is the arrow going out of the actors abdomen?
Im funny. hihihi

How?

Dunno. I’m thinking of some python and/or bash scripts on the qubes that are called with qvm-run. The return values would be transferred via qvm-copy back to the vault. Vanity addresses with mkp224o. KeePassXC is a bit more complicated, as there is no functionality to add ssh keys programmatically (yet). This will change when this gets merged. It should be in the testing build in a few weeks. Until then one has to compile KeePassXC. The optional feature to save the fingerprint in the description is also not implemented. Alternatively one can use the split-ssh solution without KeePassXC in the Qubes GitHub.
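
Until then, keepassxc-cli can at least script the entry creation part (a sketch; the database path and entry name are made up, and attachment-import needs a reasonably recent KeePassXC):

```
# create the entry, then attach the private key file to it
keepassxc-cli add -u root --url "ssh://myserver" ~/Passwords.kdbx "servers/myserver"
keepassxc-cli attachment-import ~/Passwords.kdbx "servers/myserver" id_ed25519 /tmp/id_ed25519
```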

Known risks

ssh will offer every server all keys (in deterministic order!) of all servers until one matches. This way, if one server gets pwned, the adversary learns how many servers i administrate. Not really that critical if you ask me. You also have to increase the allowed auth retries. One can fix this properly by fixing ssh-agent, or hotfix it by running multiple ssh-agents. Maybe sometime i will fix this, but for now i’m ok with it.

A motivated network-side adversary could thereby:

  • enumerate number of servers (based on traffic amount and pattern)
  • detect when new servers are added (traffic amount and pattern)
  • detect what server i am connecting to

As mitigations i would add decoy keys.

I have not found a way to shuffle the order of offering, but i have not bothered much at this stage.

Alternatively one could create an ssh-agent service for every server, but when i think of administrating many servers, this might not scale too well.
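
One mitigation i know from the file-based world: IdentitiesOnly makes ssh offer only the identity configured for that host instead of everything the agent holds (it needs the matching public key file in the client qube, and i have not verified how well it plays with split-ssh). It also doubles as the nickname mechanism:

```
# ~/.ssh/config in the ssh dvm template (nickname + restricted key)
Host myserver
    HostName <56-char address>.onion
    IdentityFile ~/.ssh/myserver.pub
    IdentitiesOnly yes
```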

Question

Is this a good idea to begin with? I see people usually do such stuff from dom0, as it could for example create a dispvm and set it up for the mkp224o vanity onion creation, or could more easily transfer files between qubes (for example with this “hack”). But: I like zsh and don’t want to fuck up my dom0. For example, i would give my script the IPs and passwords of the servers, and i really **do not** want to type a 50ish character long password into dom0; i want to copy it to where it is needed.

I had no luck with trying to transfer files from qubeA to qubeB without a dialog box. What i tried so far:

In /etc/qubes-rpc/policy/qubes.Filecopy

qubeA qubeB allow
qubeA @anyvm allow
qubeA qubeB ask
qubeA qubeB allow, default_target=qubeB

And a few others.
Also, the dispvms that later ssh into the servers would need permission to use split-ssh. Is there a way to allow split-ssh for all dispvms of a specific template? I tried some configurations but did not have much luck.
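
For the dispvm question, what i would try first is matching disposables by their template in the split-ssh policy (qubes.SshAgent is the service name from the community split-ssh guide; untested):

```
# [dom0] /etc/qubes-rpc/policy/qubes.SshAgent
@dispvm:tor-ssh-dvm vault allow
```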


Well where ELSE would arrows protrude from actors?!?!?! :rofl:
OF COURSE they protrude from their abdomen! :stuck_out_tongue:


In seriousness, this is genius. I have been looking for a way to deploy Qubes OS in a corporate environment without it causing more problems than solving, and this might be just the way to do it.

The idea was to have a work device that employees could use for personal use, without you having to worry about them getting pwned and compromising your corporate stuff.

Qubes OS definitely fulfills this purpose, but you can break it SO easily if you don’t know what you’re doing… (just ask anyone who’s ever worked at an IT helpdesk what the dumbest question they’ve ever been asked was :laughing:)

The problem with work machines is that you can’t guarantee that the people you give them to know what they’re doing.

There’s only so many times you can say “Don’t do anything stupid in the work Qube, OK?” before you start to go insane…

…and then they do it anyway and get pwned…

This would allow remote administration of a machine, without compromising the location of the machine (mostly), and it would allow Qubes OS to be deployed in corporate environments with ease.

I’d be very interested in assisting with this. If you like, I do have quite a few Qubes OS machines lying around. Maybe I can set one up for you so you can test this? :smile:

I mean, it would also have other applications, particularly for anyone currently under surveillance, “on the run”, or anyone who just wants to keep their network setup not public knowledge…

I have a feeling your setup was geared a little more towards these people… :sweat_smile:

You’d be getting a public IP address, and there would be no NAT. I would also not put it publicly here. I am not a moron. :joy:

Thank you very much!

I’m gonna be honest: Qubes OS is the OS i have not broken for the longest time. (Maybe because i don’t use dom0 very often…)

The split-ssh part is out there! Take a look at this.
I did change a few things, like using dispvms for the ssh machines, as i think this is much more secure without impacting usability at all. Also i added authenticated onion circuits + a whonix-gw with authentication for onioness and security (i really love Tor). Also some server nicknames for usability. Other than that it is like this guide.

Thank you very much!

I never used salt, but i want to automate the setup of this whole thing too once the automatic “adding more servers” part is working. So maybe i could use some help there in some time. At the moment my setup is done the way i do it (fedora for vault, debian for ssh, keepassxc for the ssh keys, auth onions for ssh), but i would love to write something like an “installer” script that asks the user how exactly he wants to do stuff, like “I don’t need the onion stuff, but a corporate VPN” or something. Don’t have any idea how to do this in salt tho.

Maybe, i am some kind of privacy/anonymity activist myself and like helping people get back their basic rights.

But anyways: First things first. Before automating the creation of this construction, i need to solve the problem of automatic communication between qubes (without a dialog box from dom0). In the worst case i think i would need to call everything from dom0… But if there is any way around it, i would really prefer it that way.

That’s why we’re all here :slight_smile:

I wonder how long it would take before one of the devs came in and blasted us about this. “The dialog box is there BY DESIGN!!!” :rofl:


I’ll have a look and get back to you about cross-qube communication without dom0 approval. Probably not a good idea, honestly (opens up a whole world of potential issues), but I’m sure there’s another way. Maybe through the second whonix gateway and qubes-updater? Similar to the way a template with no network access can still get updates?


Another cool thing this could be used for would be REMOTE WIPE. I’ve had laptops seized before, and it would be marvellous if I could remotely wipe a VM.

Actually, there is an official mechanism for that: Qrexec: secure communication across domains | Qubes OS :slight_smile:

Maybe this could help? build and upload qubes-remote-support package · Issue #6364 · QubesOS/qubes-issues · GitHub

Yes… Yes it is. But: The capability is there to set exceptions for this, and i think this is a very reasonable use case. I started the action, so i want it executed. If i choose to use an 8 character vanity onion, this could take a few minutes on my machine. Should i leave it, the whole process would pause on a dialog box; and more importantly, if i don’t leave it, it would pop up randomly and there is the chance that i am writing something, hit enter (into the dialog box) and ruin it all :(. If i want to spin up like 50 new Tor bridges, i don’t want to confirm something 150 times in a dialog box…

That’s why i have chosen to initiate all calls from within the vault. It is the most trusted qube in the construct. Should it be pwned it is GameOver.
You could do it from dom0 but i stated my reasons why i am not a big fan of this.
Regarding security: This can be set up so that only vanity and tor-ssh-dvm-template can copy files back to vault. Worst case: They get pwned and flood vault until there is no disk space left, resulting in DoS. I am willing to accept this risk.

Thanks! This is what i intend to use. I have written a little script to create the vanity onions and copy the created keys back to the calling qube. But the requests don’t get accepted, or there is a dialog box :frowning:

Here is what i have done:

Script in [vanity] at /etc/qubes-rpc/sus.vonion

#!/bin/bash
#
# Vanity creator script v0.0.1
#
# Creates an onion address and sends it to the calling qube
# Dependencies:
# * compiled mkp224o in /opt/mkp224o
# * An RPC policy to allow qvm-move-to-vm <calling-qube>
# 
# Reads 1 argument from stdin: the nickname

read -r nickname
/opt/mkp224o/mkp224o -d "$HOME/onion" -n 1 "$nickname"
qvm-move-to-vm "$QREXEC_REMOTE_DOMAIN" "$HOME/onion"
rm -rf "$HOME/onion"

The policy in [dom0] at /etc/qubes-rpc/policy/sus.vonion

dev vanity allow

And to authorize the qvm-move-to-vm the policy in [dom0] at /etc/qubes-rpc/policy/qubes.Filecopy

vanity dev allow

I have tried many configurations for the qubes.Filecopy policy but nothing is working as expected.

Expected:
Move the files from [vanity] to [dev] without opening a dialog box.
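
For completeness, the service is invoked from [dev] like this (“sus” is just an example nickname):

```
echo sus | qrexec-client-vm vanity sus.vonion
```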

Yep. There it is. 10 minutes :laughing:

Actually, I’m not a developer, just a long-time user who read a lot of docs and helped users enough to be promoted on this forum :slight_smile: See also: Qubes team - Qubes OS Forum.

Unfortunately I cannot help here. Maybe @unman can?


I found the error.

The lines are (obviously) parsed in order. Having

$anyvm $anyvm ask

before

vanity dev allow

opens the dialog.
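
So the working qubes.Filecopy policy just needs the specific rule before the catch-all:

```
# [dom0] /etc/qubes-rpc/policy/qubes.Filecopy
vanity dev allow
$anyvm $anyvm ask
```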

Success!

@Suspicious_Actions Have you looked into Qubes-remote-support - New features and support for Whonix 16 - #3 by Insurgo ? (It’s testing users members only for posting though)

A lot simpler than your design, but maybe some of the ideas there could be reused?
What I would be looking for, personally, would be reusing all the glue there to have hidden-onion setup and ssh redirection to dom0/admin-vm, with console session sharing. Having that would permit a user to start a remote admin session, share the shared secret by their own means with the remote admin, and land on a shared terminal. Everything typed there would be seen by both parties, and at any moment the user could end the remote session. Of course, limiting access to that shared tty would be needed, but to me that would be the missing feature of qubes-remote-support, since shared GUI access through Tor has a lot of latency. Where it is needed for some use case, most of the work still needs to be done from dom0/admin-vm.

Thoughts?
