I’m currently writing a script to back up files (encrypted) from one qube, send them to another qube that has access to my Google Drive, and then use rclone to sync them with Drive.
(My motivation is simply to have a copy of my data backed up in the cloud, not wrapped up in the Qubes Backup format, which is hard to access outside Qubes and generally too large.)
The physical steps are these:
launch script in my “dataVM” (terminal)
enter “GoogleVM” (dialog box for qvm-copy)
(open terminal in GoogleVM)
launch rclone script in GoogleVM (terminal)
So that’s quite a bit of interaction with the mouse and keyboard, partly because I am running two scripts on two qubes.
Is it possible to have a script in dom0 do this?
qvm-copy suggests it can do a lot: it can “reach in” to a qube and copy out the files. It can pipe data across to another folder in another qube.
What if the destination qube didn’t have to be nominated, if it was pre-set to your GoogleVM? No keyboard input.
Could this dom0 script then run bash commands in the receiving qube (i.e. the rclone bit)?
Is this too complex or inadvisable? Or does it fundamentally break the isolation model, therefore forbidden?
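Concretely, what I’m imagining is something like this (just a sketch with my own qube names, untested; `qvm-run --pass-io` streams a command’s stdin/stdout between dom0 and a qube, and the `gdrive:` rclone remote is my assumption):

```shell
#!/bin/bash
# Hypothetical dom0 driver script. Qube names (dataVM, GoogleVM), paths, and
# the "gdrive:" rclone remote are placeholders, not defaults.

QVM_RUN="${QVM_RUN:-qvm-run}"  # overridable so the plumbing can be exercised outside dom0

sync_to_cloud() {
    local src_qube="$1" dst_qube="$2" src_dir="$3"
    # Stream a tarball out of the data qube into the receiving qube
    # (a gpg -c stage could be added to encrypt before it leaves the qube),
    "$QVM_RUN" --pass-io "$src_qube" "tar cz -C '$src_dir' ." \
        | "$QVM_RUN" --pass-io "$dst_qube" "cat > /home/user/backup.tgz" \
        && "$QVM_RUN" --no-gui "$dst_qube" "rclone copy /home/user/backup.tgz gdrive:qubes-backups"
}

# From a dom0 terminal:
# sync_to_cloud dataVM GoogleVM /home/user/Documents
```

That would collapse the whole thing to one command in dom0, with no terminals opened in either qube.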
Being a bit more imaginative, it would be nice to have something like this integrated.
set up a nominated online account in a dedicated receiving qube (so you can fool around with firewalls, etc, as well if you want)
A folder like “QubesIncoming” - maybe “Sync Folder” - in every qube. Dump your files in Sync Folder, right-click a menu item, and it ends up synced to the appropriate folder on your nominated online account in the Google qube. Or whichever qube/cloud you set up.
What if the destination qube didn’t have to be nominated, if it was pre-set to your GoogleVM? No keyboard input.
You can do this by adding a rule above the default rule for qubes.Filecopy in /etc/qubes/policy.d/90-default.policy that looks something like this: qubes.Filecopy * @anyvm GoogleVM allow. This means that any qube can copy files to your GoogleVM qube without a prompt, which would make the GoogleVM qube a single point of weakness, breaking the security model.
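A side note on mechanics, if you do decide to accept that tradeoff: rather than editing 90-default.policy directly (it can be replaced on updates), the usual approach is to put your rule in a lower-numbered file of your own, something like:

```
# /etc/qubes/policy.d/30-user.policy
# (lower-numbered files are evaluated before 90-default.policy,
#  and the first matching rule wins)
qubes.Filecopy  *  @anyvm  GoogleVM  allow
```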
A better idea would be to have a script in dom0 that runs qvm-backup with your choice of VMs on a timer. Maybe every day or so, or whatever works best for you. Then, you could write a script in GoogleVM that uses inotify or something like that to monitor for new backup files. Then, GoogleVM could push them to the cloud with whatever backup software you like.
Once that is done, delete the old backup files on GoogleVM.
Hope you find this useful.
unfortunately, that’s part of the underlying situation I am trying to get away from.
Backing up with Qubes Backup means being locked into Qubes in some way - you restore to a Qubes environment.
I don’t know how many Qubes-compatible computers the average person has, but I have only one. If that gets stolen (in this neighborhood, it’s distinctly possible), I’m stuck.
I know there is a way to access your data per qube via a ‘normal’ Linux box. I’ve done it once. But it’s a long and complicated way of getting at your stuff. It needs special software. It’s not even documented properly - the instructions in the Documentation are wrong; you need to follow an unfinished draft on GitHub (that most people won’t find).
I just want a ready, small backup that can be accessed from more than just a Qubes platform.
Then, if you want to run rclone after files are copied, you can add the command to the qvm-copy script (if you’ll be copying files from a terminal) and to the qvm-copy-to-vm.gnome script (if you’ll be copying files using the file manager).
In dataVM add these lines in /rw/config/rc.local:
cat << 'EOF' | tee -a /usr/bin/qvm-copy /usr/lib/qubes/qvm-copy-to-vm.gnome
/usr/bin/qvm-run-vm --no-gui @default rclone ...
EOF
FWIW, the Qubes backup format is specifically designed so that you can always access your data without Qubes OS in a general Linux environment.
Also, you can easily make your backups smaller by simply choosing which qubes you want to back up. You don’t have to back up everything. The Qubes backup system adds very little extra size to the backup file beyond your own data.
The advantage of using the Qubes native backup tool over a “roll your own” DIY method is that it’s defensively designed from a security perspective. The data is authenticated in addition to being encrypted, and the Qubes backup restore code is careful to parse an untrusted backup file as little as possible before authenticating it, the assumption being that the backup file might be compromised or replaced with an intentionally malformed malicious file. This is what makes it safe to restore from Qubes backups regardless of where they’re stored (e.g., in untrusted cloud environments). By contrast, attempting to restore from the method you describe could result in your system being compromised in some way.
Not true at all; see above. Not only is there no “vendor lock-in,” but avoiding lock-in is an explicit design goal.
Oh, so you already know.
Okay, but, to be fair, it’s explicitly billed as an “emergency” restore method to make sure you always have access to your data in the end. The easy method is using the Qubes GUI restore tool.
Eh, “special software” makes it sound like some kind of proprietary system, but we’re just talking about scrypt, which is also open-source and in many Linux distro repos. It’s just not as pervasively preinstalled as something like openssl, so we drew special attention to it to make sure users who want to be ultra prepared know what they need in advance. (openssl was replaced with scrypt for security reasons.)
Hm, I don’t think that’s actually true, though, is it? I thought the order of events there was something like:
I tested @rustybird’s draft instructions myself (simulated an emergency restore) and asked if there was an easier way to handle one particular step to make it easier for less technical users.
@rustybird suggested using xmlstarlet, but I wanted to avoid yet another uncommon scrypt-like dependency, so we asked @marmarek if he had any alternative suggestions.
But I think @marmarek has been too busy to reply, so I guess both PRs kind of stalled out.
AFAIK, the live instructions still work fine. I tested them a long time ago. I always do a simulated emergency restore whenever there’s a new backup format version change or a change to the published emergency restore instructions, then I store a copy of the latest emergency restore instructions with my backups. I wouldn’t do that if I thought the instructions were wrong, and we wouldn’t just leave them on the live site or in the main repo if they were wrong (at least not without a warning of some kind).
Just because our improvement PRs haven’t been sorted out and merged doesn’t mean the current instructions are wrong. I think they’re unclear and confusing in one particular place, which is why I opened #1279 in the first place, but that’s not the same as being wrong or not working. They just haven’t been further improved yet, but that’s always true of everything that’s capable of being further improved (which is most things).
All I can tell you, @adw, is that when I tried to use the live instructions, I hit a step where I couldn’t make sense of it. It was “endgame” for me until @rustybird threw me a link to the GitHub version. The thread is here, along with a couple of suggestions from me that I guess I should have contributed on GitHub.
It would be really great to get that updated version live. I can only imagine it would be a pretty stressful time if you had to use this emergency access for real - bad instructions would be the last thing you need.
@rustybird: For what it’s worth, if pushing the new instructions through even with the added xmlstarlet dependency is the lesser of two evils, I won’t stand in the way. I prefer to avoid extra dependencies where feasible (to avoid exactly the “special software required”/“vendor lock-in” perception discussed above), but when you need 'em, you need 'em (like with scrypt).
(Also CCing @unman as new documentation maintainer.)
Also, while I appreciate the need for integrity and security - Qubes does that well - the emergency procedure is, frankly, less than quick and easy. I mean, I’d hate to have to do that across multiple qubes.
For those with less stringent needs for security, a more accessible (i.e. multiplatform) approach to backing up a sub-set of files is attractive. See this thread for someone else in a similar situation.
It’s possible to script it so that you don’t have to manually repeat the steps for each qube, but granted, that’s probably not easy for most users either.
Yeah, but you have to understand that security is the raison d’être of Qubes OS. Security always comes first. If it weren’t so – if the security were more on par with other OSes with “okay” security – there would be little reason for Qubes OS to exist, since the more mainstream OSes are much more convenient. So, if you have “less stringent needs for security,” then native Qubes tools are probably the wrong place to look for your personal use case. There’s nothing at all wrong with that, but it wouldn’t really make sense for the Qubes devs to try to create less secure solutions, IMHO, since they create more value by focusing on their own specialty (engineering high-security systems).
Yes, I take your point. It does seem like you are saying “take it or leave it”, though - which is a bit harsh.
Qubes has mitigations against calamitous loss (by theft, damage, etc.), yes. But they have drawbacks. They aren’t effective for small-batch backups on a high-frequency basis (e.g. backing up a thesis document every few hours, offsite).
The loss of a document (or whatever data) by some calamity like theft or physical damage must also be considered a threat if one is designing for the total suite of threats. In Qubes parlance, it seems a reasonable consideration. It’s a universal threat, in fact.
How much greater security risk is this idea, anyway?
As cleverer people than me have commented, above, we’re talking about operations that are functionally equivalent to what qvm-copy and qvm-run do.
there is an over-the-net connection to Google Drive, Dropbox client, (or whatever) in a particular Qube - is that the risky element here, opening the door to penetration?
the operation (call it qvm-sync) would require user input to initiate it, so it’s not as if it were silent and automatic.
I guess the tradeoff here is between two risks: [penetration through Google Drive] vs. [data loss by calamity].
I acknowledge that how the devs allocate their scarce time is ‘beyond my paygrade.’
Supplementary question - can Qubes be used within an institution, using company servers? Does that inherently compromise security?
It depends on how their network is configured. But if you just want to access the university/company network from your qube, then I don’t see how it would compromise Qubes OS security, as this is a standard Qubes OS activity and it doesn’t require any communication between dom0 and the university/company server.
Didn’t intend to be harsh. That’s why my very next sentence was, “There’s nothing at all wrong with that […].”
That’s why Qubes makes it easy to design your own system to meet your needs. For example, you can use Dropbox in one qube (or a few) that need frequent backups, while your other qubes (which don’t have that need) can have a smaller attack surface. I think the mistake is assuming that Qubes OS has to have its own solution to every problem, rather than recognizing that it gives you the ability to have as many virtual machines as you want, and you can then use preexisting solutions within those machines as you see fit. It’s like the difference between buying a house with every appliance and piece of furniture you could ever want built-in versus building empty houses and letting buyers shop for their own appliances and furniture, since everyone wants different stuff.
Yes, that can be done. No, it doesn’t have to compromise security; it just depends on the organization’s security needs. ITL specializes in helping enterprise clients do exactly that sort of thing. They can customize Qubes OS and help deploy it at scale to a client’s specifications. (Such inquiries should be directed to firstname.lastname@example.org.) This is where you’d expect to find Qubes OS to have its own built-in solutions, except they wouldn’t be solutions catering to a general open-source audience. Rather, they would be solutions for every problem that the specific client needs solved, and the resultant code wouldn’t necessarily be open-source. (That’s a discussion between the client and ITL.)
I admit I’m at the limit of my understanding here, but that said - would the rule you suggest here have to operate two-way? In other words, could it be set to allow copying/traffic as AppVM → GoogleVM, but not AppVM ← GoogleVM. That would mitigate the point of weakness somewhat.
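Spelling out what I mean (I’m only guessing at the policy syntax here):

```
# hypothetical: copies into GoogleVM allowed, copies out of it denied
qubes.Filecopy  *  @anyvm    GoogleVM  allow
qubes.Filecopy  *  GoogleVM  @anyvm    deny
```

Though from my (limited) reading, each rule already names a specific source and destination, so allowing @anyvm → GoogleVM wouldn’t by itself allow anything in the other direction anyway.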
I think, in my imaginings, I was considering a single VM as a portal to the cloud drive - a single VM as a single, minimized attack surface (e.g. custom firewall rules for traffic only from Google Drive).
I was also thinking of isolating different parts of ‘my life’, as per Qubes model, and not having a unifying and identifying connection between several domains (e.g. workVM, personalVM, etc., which each use different VPNs, too) and e.g. Dropbox.
On a side note, I tried to make a template with the Dropbox repositories installed in order to install the client, and failed completely. Even with the documentation, not all of us find it “easy” to “…meet your needs”, unfortunately.
Which leaves a user like me to sit here dreaming of baked-in Qubes solutions to my problems.
Ha, thanks. When I get my enterprise set up, I’ll come knocking. /s Actually, if I did, I possibly would, so it’s not a bad idea.