A new kind of backup tool

Ah! I didn’t realize there was the possibility of independent profiles! Excellent. (For those who must use the GUI, I should look more closely; perhaps profiles are supported there as well.)

(My own usage tends to be a mix of GUI and command line, so here I tend to suggest the GUI for GUI-only users. I’ll use the GUI where the command-line syntax is complex or hard to remember (the latter for things I don’t do often).)

As time goes on, I’m shifting more and more towards making my “user level” AppVMs actually be named disposables…so presumably any compromise goes away when I close it. It’s also an excellent way to ensure caches are cleared (and not just browser caches, also thumbnail caches and the like). Of course a named disposable would never need backing up, and its DVM template (which is actually an AppVM) wouldn’t need to be backed up often; only when you change it (mostly to change a user configuration setting).

EDIT: “so presumably any compromise goes away when I close it.” should read “so presumably any compromise to the DVM goes away when I close it.” (If the template (either the TemplateVM or the DVM template-which-is-actually-an-AppVM) [geez, we need a new naming convention] gets compromised, then of course this isn’t true.)


I used to do the same; now I automatically back up all my AppVMs daily. I sometimes manually run a full backup if I know I might need disaster recovery, but it’s so rare that I don’t mind checking the boxes manually.


I’ll probably be going into the manager and checking the DX machines off first thing when I get onto the system tonight. And back up templates only right after a big spasm of updates.

The one thing it isn’t, is automatic. I haven’t really figured out a good way to do that, one that works for my circumstances. (I know how to do it, I haven’t decided WHAT to do.) The current location of my backups doesn’t work well for “automatic.” I need to give this some thought, and soon.

One thing I’d like to see is an option to not back up a VM–even automatically–if it hasn’t been started or updated (which entails a start anyway) or had settings changed, since the last backup. Presumably, under those circumstances, it hasn’t changed. I don’t know if there’s a way to check those times, even in a script.
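
(For what it’s worth, on an LVM-thin install one rough way to approximate this from dom0 might be to look at when each qube’s private volume was last recreated, since Qubes replaces it with a fresh snapshot when the qube shuts down. The volume group name below is the stock one, and this is only a sketch, not a tested check:)

$ sudo lvs --noheadings -o lv_name,lv_time qubes_dom0 | awk '$1 ~ /-private$/'    # one line per qube, with the time its private volume was last (re)created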

On my desktop PC I have 2 TB of RAID 1 storage directly attached to dom0, which makes backups very easy.

On my laptop, I have the same issue with storage. What I think I’ll do is make an NFS qube and use that to copy files from dom0 to a network drive.
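
(A minimal sketch of the dom0 side of that, assuming a hypothetical qube named sys-nfs with the share already mounted at /mnt/nas; qvm-copy-to-vm drops files into ~/QubesIncoming/dom0, so they still have to be moved onto the mount:)

$ qvm-copy-to-vm sys-nfs /path/to/backup-file
$ qvm-run -p sys-nfs 'mv ~/QubesIncoming/dom0/* /mnt/nas/qubes-backups/'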

Right now I’m copying to an SSD in the same box. Which means the SSD must be mounted, and I end up mounting it to a VM…one of the ones I want to back up! Uh, not good. Getting this done better hasn’t been much of a priority; I should by rights be putting the backups onto the NAS, where the backups themselves will be backed up and duplicated. Something to do this weekend!

Like I said, there’s certainly an easy solution; I just haven’t put much thought into it…yet.

5 posts were split to a new topic: Who is using Wyng for Qubes Backups? What’s your experience like?

The Qubes backup system has a security feature that most other backup options lack: authenticated encryption. When you need to restore from a backup, you may think the file you’re restoring from is your backup file, but what if it’s not? What if an attacker has replaced it or maliciously modified it in some way to attack your system? The Qubes backup system is designed to protect against this, allowing you to safely restore from backups in dom0 even if your backup file has been “in enemy hands.” I’ve described this a few other times on the forum, e.g.:


Yeah, basically you should back up everything that would be impossible, too difficult, too costly, too cumbersome, too time-consuming, etc. to recover or replace, and not worry as much about backing things up that are easy and cheap to recover or replace. So, if your templates can be completely recreated from some notes that you have backed up, then perhaps the only reason to back up the templates themselves would be convenience and time savings (i.e., being able to restore from the backup instead of having to go through the recreation steps), or for the sake of redundancy (e.g., in case your notes turn out to be wrong or missing something). For most users, it probably isn’t a big deal if they forget to write down that they installed a certain app, since the next time they needed it, that need would remind them of it, and they would install it at that time. In such cases, the stakes are relatively low, so there is not a great need to be fastidious about it. The higher the stakes are for you, the more effort you should devote to this stuff. If you can’t afford more than N hours of downtime, for example, then by all means, back it all up.


The Qubes backup system has a security feature that most other backup options lack: authenticated encryption.

Bacula has that too.

unman, post:11
It should also be said that taking a full disk image is not in any sense a useful backup in Qubes. Yes, it will allow you to restore the system at a specific point in time, but that doesn’t make for a good backup regime. Regardless of the time involved, and the multiplicity of cloned drives needed, it is really not suited to the problem.
How often will you be making that disk image? What will you do with old images? Will you take a new clone every time you update dom0 or a template? Every time you do some work? Really?? What happens if you discover you deleted a crucial entry in KeePassXC 4 months ago?
Clonezilla is a great tool for disk imaging, no doubt. But I don’t see full disk imaging as particularly useful in Qubes, and certainly not to replace qube and data backups.

Isn’t he just pointing out that system-level backups can’t replace file-level backups, and that you are most likely going to need both?

You want system-level backups for disaster recovery, but they are resource-heavy to produce and waste storage, and you also don’t want to reset your entire system every time you need to recover a single file.

Your idea of not using any backup tools seems like a terrible idea to me; most people don’t just accept losing all their data if their system gets corrupted. Yes, you can write a shell script that reinstalls all the software, but you make backups to save the data you can’t just download from the internet.

Of course. Only, I keep my file-level data tidy in 2-3 vault-alikes, and I copy that manually to external storage. On the system level, I started to take notes, and I intend to restore the system manually (actually, to reinstall, mostly) in order to better learn and understand it. I just don’t put too much trust in scripting.
So I do back up, but in a different way, without any backup tool, pretty much like @deeplow does. I do have one or two old full-system backups on external storage, though.

They say that when you want to buy a car, you should ask the car salesman which car he drives.
So, what “car do the devs drive” when it comes to backups?

I forgot: an additional reason for me to take this approach was that I’ve seen so many issues with backup/restore here at the forum. So, I think I learned two things, one learned through others’ mistakes, and the second learned on my own:

  1. never use any backup tool, but back up manually, and recently,
  2. never upgrade anything in place, especially in Qubes.

Well, I mentioned before that I hadn’t really given a lot of thought to backups; I had basically been going into the provided tool sporadically and doing them.

I finally gave it a closer look last night, and I’m about halfway through with doing the following:

I made a list of all of my VMs and divided them into several classes: templates I can’t readily recreate, templates I can readily recreate, system stuff (i.e., all the networking qubes, sys-usb, and also the stuff I wrote for split veracrypt), and finally my AppVMs. (Incidentally, there’s overlap in these classes as described…so if it’s one of the system-stuff qubes, I exclude it from the other classes.) Oh, and dom0, which must be handled with a bit of care.

I basically turned my list into a script to run a separate backup for each, saving a profile. (Then I found out the profiles live in /etc/qubes/backup and they are readily editable.)
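
For the curious, a profile is just a small YAML file. Here’s a sketch of what one might look like (qube names, path, and the file name are only examples; it’s worth diffing against a profile the GUI saved to confirm the key names on your version):

sudo tee /etc/qubes/backup/daily-critical.conf <<'EOF'
include:
  - work-email
  - vault
destination_vm: backup
destination_path: /mnt/backups
compression: true
passphrase_text: "correct horse battery staple"
EOF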

The documentation for backup (i.e., man qvm-backup) implies that it’s possible to run backup from some other VM, but if so you must use a profile. That’s true…but only after you muck with the policies in /etc/qubes/policy.d. But I want to be able to run from another VM, so I did just that. [Note: Before someone admonishes me: Yes, I did this in a lower-numbered copy of the official file.]
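
In case it helps anyone, the sort of policy I mean looks roughly like this (the service names come from the Admin API docs; the qube and profile names are mine, and additional admin.* calls may need allowing too, so double-check against the comments in the stock policy files before copying):

# /etc/qubes/policy.d/30-backup.policy
admin.backup.Execute  +daily-critical  backup  @adminvm  allow
admin.backup.Info     +daily-critical  backup  @adminvm  allow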

So now, with profiles created, I can put scripts on a dedicated Backup VM that will invoke the backup with the profile, then rename the backup file to have the profile name as a suffix. That way, by eye, I can tell exactly what kind of backup it is. [I just realized as I was typing this, that one could tell by putting backups into different folders by type, obviating the need to run the backup from the destination VM. So I might want to do that instead and unmuck my policies!]
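
Something like this is what I have in mind for those scripts (a sketch only, assuming the qubes-core-admin-client tools are installed in the backup qube and that the destination directory only ever receives backup files; names and paths are my own):

#!/bin/bash
# ~/bin/run-backup.sh <profile-name>  -- run a profile, then tag the resulting file
set -e
profile="$1"
dest=/mnt/backups                      # must match destination_path in the profile
qvm-backup --profile "$profile" --yes
newest=$(ls -t "$dest" | head -n 1)
mv "$dest/$newest" "$dest/${newest}-${profile}"   # suffix the file with the profile name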

So at this point, I can create a cron job which starts the backup VM, mounts the storage device to it, then does qvm-run calls to fire off a backup with the appropriate profile. That will require scripts resident on the backup VM.
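
Roughly what I have in mind for the dom0 side (qube name, device, mount point, and script path are all placeholders of mine, not anything standard):

#!/bin/bash
# dom0 script called from cron: hand the backup qube its disk, then run the profile
qvm-start --skip-if-running backup
qvm-block attach backup dom0:sda1                 # the backup SSD partition as dom0 sees it
qvm-run -p backup 'sudo mkdir -p /mnt/backups && sudo mount /dev/xvdi /mnt/backups'
qvm-run -p backup '/home/user/bin/run-backup.sh daily-critical'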

I can also create a profile that backs up, literally, the ONE AppVM which really needs to be backed up every dang day, as opposed to all of them, which I want to back up less frequently, and put that in the cron job.

So: Now I have a schema that will work automatically, and back up the bare minimum every night; I can supplementally back up other things either regularly with less frequent cron jobs, or manually, as I realize “Oh, I just did a lot in such-and-such-app-vm, I should back it up.” I can either add it to the incremental profile and let it get swept up that night, or just do it right now. Or decide to run the profile to back up ALL AppVMs.

The point is you can set up profiles in whatever way makes sense to you, and have them back up at different times, at different intervals.


Oh, and I said “dom0 which must be handled with a bit of care.”

That’s because the only thing on dom0 that gets backed up is essentially the home directory (this is mentioned in the documentation (otherwise I’d not know it) but needs to be stressed even so). But I’ve made changes in /etc/qubes; I’ve defined a couple of new qrexec services, and so on. So what should be done before backing up dom0 is to copy these areas, and any others you may have altered, into someplace in your home directory. So a script to back up dom0 should wipe out the old copies in the home directory, make new copies, and only THEN call qvm-backup.
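
A sketch of what I mean (the copied paths are just the areas one is likely to have touched, and the profile name is hypothetical; adjust to whatever you have actually changed):

#!/bin/bash
# dom0: refresh copies of changed system areas into the home dir, then back it up
set -e
stash="$HOME/dom0-system-copies"
rm -rf "$stash"
mkdir -p "$stash"
cp -a /etc/qubes "$stash/etc-qubes"
cp -a /etc/qubes-rpc "$stash/etc-qubes-rpc"     # custom qrexec services live here
qvm-backup --profile dom0-home --yes            # a profile that includes only dom0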


A note about Wyng in this regard:

The current release (v0.3) can facilitate authentication if the contents of its metadata directory are signed with another tool, such as GPG. The next Wyng release v0.4 uses an AEAD cipher – either Chacha20-Poly1305 or AES-SIV – for both encryption and automatic authentication.
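
(As a rough idea of what that looks like in practice, with the metadata location left as a placeholder since it depends on how Wyng was set up:)

$ sudo tar -cf wyng-metadata.tar /path/to/wyng-metadata      # substitute your actual metadata directory
$ gpg --detach-sign --armor wyng-metadata.tar
$ gpg --verify wyng-metadata.tar.asc wyng-metadata.tar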

For those unfamiliar, Wyng backs up VM volumes in their disk image form (like Qubes Backup) but incrementally. This is for systems that were installed with LVM formatting (the Qubes default), and BTRFS support is planned for the future. The idea behind Wyng is to retain the low-risk profile of Qubes Backup while greatly increasing speed & efficiency.

On full dom0 backup/restore:

This can be very tricky even for advanced users. The presumed goal is to be able to restore all dom0 settings. Probably the least harrowing option is to keep a text file with an outline of all the dom0 changes you’ve made. This file could reside in a backed-up VM.

And while I do back up the Qubes root volume directly with Wyng, I still prefer to restore the system with a clean Qubes install in most cases, then restore individual VMs.

There is also a simple method to capture most of the boot volume information before a backup:

$ sudo dd if=/dev/nvme0n1 of=/boot-block count=2    # the namespace block device (nvme0n1, not nvme0); copies the first two 512-byte sectors

$ sudo rsync -av --delete /boot /boot-bak

Then backing up the ‘qubes_dom0/root’ volume should store what you need for a full image-based restore of dom0. However, it might be wise to use sfdisk -d in addition to dd.
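
For example (same example device as the dd line above; the dump can be fed back to sfdisk at restore time):

$ sudo sfdisk -d /dev/nvme0n1 > /partition-table.dump    # re-apply later with: sfdisk /dev/nvme0n1 < /partition-table.dump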

PS - The dom0 image restore will leave Qubes in a state where it can’t start any prior-existing VMs because their data volumes don’t exist. Although I succeeded once, there’s no guarantee you can get the Qubes daemon to recover properly from this state.

OTOH, having a dom0 image backup lets you later restore those volumes (under different vol names) to view the settings they contain when re-building with a fresh install.

PPS :slight_smile: - The dom0 image restore’s chances of succeeding are greatly improved if you keep a minimal set of VMs (say, a template plus sys-net) stored in the varlibqubes pool, which is located on the dom0 root partition.
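
(One way to get qubes into that pool is to clone them there, and then, if you want them to be the real ones, remove the originals and rename; a rough dom0 sketch, where “varlibqubes” is the stock file-based pool name and the template name is just whatever minimal template you use:)

$ qvm-clone -P varlibqubes fedora-XX-minimal rescue-template
$ qvm-clone -P varlibqubes sys-net sys-net-rescue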

Have you tried doing this?

What happens if you write to the same disk you are reading with dd?

Without the ability to stop/snapshot the drive, don’t all changes made to the drive during the time it takes to make the backup result in random inconsistencies?

@renehoj the important part for dd above is “count=2”, which basically copies the boot blocks. This is the important part for the device being bootable, with the proper flags set in those 2 blocks (512 bytes each, dd’s default block size) in that regard.

On my side, I preferred to back up /dev/sda1 (/boot) completely with dd, but I agree with @tasket that rsyncing the /boot contents plus the boot blocks is preferable for file-level integrity of /boot (if blocks were compared for integrity measurements, a blake/sha256sum of the block device would differ between the rsync approach and dd’ing the whole /dev/sda1 partition).

On your question, the bootblock is set up at install time and should barely ever change. This should be tested and validated, of course. But that would serve the goal of having /dev/sda1 (or whatever) block-level integrity attestation, which from my perspective (Heads) is not important/possible for the moment, since HOTP writes its counter under /boot/kexec_hotp_counter at each HOTP firmware verification/attestation (at each boot, basically). So /boot (sda1) changing at each boot prevents doing whole-partition integrity validation anyway.

Again on your question, /boot should be touched, from the OS perspective this time, only at times of dom0 upgrades. Each time grub/xen/kernel/initrd is deployed, the content of /boot changes. Heads, having a digest of the /boot file contents, will warn of changes at the file level (file integrity) but won’t warn about filesystem properties there.

Short version: the 2 default-size blocks copied over by dd here should barely ever change. It would be interesting to compare such blocks over the lifetime of an OS install, but it is not thought that backing up those blocks, or a race condition from dom0 writing to them while they are being backed up, could cause corruption at either backup or restore time.
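
(If anyone wants to check that empirically, hashing those two blocks from time to time is enough to see whether they ever change; the device name is just an example:)

$ sudo dd if=/dev/nvme0n1 bs=512 count=2 status=none | sha256sum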