My automated backup flow

Finally have this working. Many thanks to those who have helped along the way.

I have a 500 GB SATA SSD that I use for the daily backups. I created a new logical volume on it, cloned the Debian template to this new SSD, and created a “backup” app VM whose sole purpose is to be a storage device.

I have cron set up to launch my script one minute after midnight:

1 0 * * * bash /home/user/
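For reference, a complete crontab entry might look like the sketch below; the script name `nightly-backup` is an assumption for illustration (the poster's actual filename is truncated above), and it presumes the script has a shebang line and execute permission so it can be invoked by path.

```shell
# Hypothetical complete entry (edit with "crontab -e" as the dom0 user).
# Runs at 00:01 every day; "nightly-backup" is an assumed script name.
1 0 * * * /home/user/nightly-backup
```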


qvm-shutdown --wait work
qvm-shutdown --wait finance
qvm-shutdown --wait proton-vpn

qvm-backup -y --profile ssdback

qvm-start --skip-if-running work proton-vpn finance

qvm-run --service work qubes.StartApp+chromium
qvm-run --service work qubes.StartApp+thunderbird
qvm-run --service work qubes.StartApp+org.gnome.Nautilus
qvm-run --service finance qubes.StartApp+org.keepassxc.KeePassXC
qvm-run --service finance qubes.StartApp+chromium

I haven’t implemented this part yet: executing this script on my backup AppVM from dom0. I’ll do that after I have 21 days of backups.

find /home/user/QubesBackup -iname q* -mtime +21 -delete
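The 21-day retention rule can be tried out safely on a throwaway directory before pointing it at real backups. This sketch uses GNU `touch -d` to create a file with an old mtime so `find`'s `-mtime +21` test matches it; quoting `'q*'` keeps the shell from expanding the glob before `find` sees it (the file names here are made up for the demo):

```shell
# Demo of the retention rule in a temp directory (GNU coreutils/findutils).
tmp=$(mktemp -d)
touch -d "30 days ago" "$tmp/qubes-backup-old"   # old enough to match -mtime +21
touch "$tmp/qubes-backup-new"                    # fresh file, should survive
find "$tmp" -iname 'q*' -mtime +21 -delete       # quoted glob, handled by find
remaining=$(ls "$tmp")
echo "$remaining"   # prints "qubes-backup-new"
rm -rf "$tmp"
```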


Great workflow, but keep in mind: as long as your backup drive is permanently connected to your computer (potentially even built in), I wouldn’t consider this a reliable backup.

There is lots of bad stuff that could happen to your computer:

  • Someone could steal it
  • Some electrical malfunction in your local power grid could physically destroy your device, with a risk for all attached drives as well
  • Some other catastrophic event could happen (flood, fire, earthquake, …); depending on where you live, there is a lot that could go wrong.
  • As secure as Qubes is designed to be, there is still a small risk that an attacker could compromise dom0 and render all your backups useless.

Take a look at your data and think about its value. Would you miss it if one of the above-mentioned events were to happen? Then consider saving a backup on a physically detachable medium (likely a USB hard drive, but swapping out your SATA disk might also work) once in a while and storing it in a safe place, preferably far away from your computer.

Agree 100%, phl.

I have a USB SSD that I back up to manually as well, and I plan on automating that flow too. So I’ll always have two copies of everything.

I would advise checking the return code from the backup script(s) in case of an I/O error. My backup process then falls back to running an archive-verify command on the last archive and spools the backup process logs to the local dom0 user to alert me of the problem.

This error processing has triggered several times this year already, so I’d say it’s time well spent to check this status. You really don’t want to collect a bunch of broken archives thinking that everything is fine when it’s not.
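The return-code pattern described above can be sketched as follows. `do_backup` and `verify_last_archive` are stand-in stub functions so the pattern itself runs anywhere; the comments show the dom0 commands they would wrap (the verify command is an assumption, not slcoleman's actual tooling):

```shell
#!/bin/bash
# Sketch of return-code checking with a verify fallback.
do_backup() {
    # dom0 equivalent (from the first post): qvm-backup -y --profile ssdback
    return 1   # stub: simulate an I/O failure for this demo
}
verify_last_archive() {
    # dom0 equivalent might be: qvm-backup-restore --verify-only <last-archive>
    return 1   # stub: the fallback verify fails too
}

status=ok
if ! do_backup; then
    status=failed
    verify_last_archive || status=broken
    # Here is where you would spool the backup logs to the local dom0 user
    # (e.g. via local mail) so the failure is noticed the next morning.
fi
echo "$status"   # prints "broken"
```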

My normal processing, with a specific option flag, will shut down my machine when the backup is complete, unless there was an error. This leaves any error messages displayed so I can debug them the next morning.

With no option flags it will simply back up anything not currently running that has been used or modified since its last successful backup. This way I can back up some VMs automatically while others are in use; the rest get backed up during a nightly batch process, as you are doing.
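Selecting only halted VMs could be sketched with a single listing call. Here `qvm_ls_stub` stands in for dom0's `qvm-ls --raw-data --fields NAME,STATE` (the pipe-separated `NAME|STATE` output format is an assumption about that command; the VM names are made up, and the "modified since last backup" check is not shown):

```shell
# Stub standing in for: qvm-ls --raw-data --fields NAME,STATE (run in dom0).
qvm_ls_stub() {
    printf 'work|Running\nfinance|Halted\nbackup|Halted\n'
}

# Keep only VMs whose state is Halted; these are safe to back up now.
halted=$(qvm_ls_stub | awk -F'|' '$2 == "Halted" { print $1 }')
echo "$halted"
```

In dom0 you would then loop over `$halted` and feed each name to your backup command.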

In my system each VM goes to its own archive in its own directory, which makes it easy to inspect changes in file size. Broken archives are much easier to spot and remove that way.

Interesting ideas, slcoleman. I’m going to look into those.

Make sure to verify your backups (qvm-backup-restore --verify-only). A backup is useless if you can’t restore from it.


Stylistically, scripts usually have no extension (no .sh), and they are usually invoked by running /path/to/&lt;scriptname&gt; instead of bash &lt;scriptname&gt; (the interpreter is already specified by the #!/bin/bash shebang).
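The convention above can be demonstrated with a throwaway script: a shebang line plus execute permission lets it run by path, with no `bash` prefix and no `.sh` extension (the `backup-started` message is just a placeholder for the demo):

```shell
# Demo: shebang + execute bit means the script runs by path.
script=$(mktemp)                                   # temp file, no extension
printf '#!/bin/bash\necho backup-started\n' > "$script"
chmod +x "$script"                                 # make it executable
out=$("$script")                                   # invoke by path, not "bash $script"
echo "$out"   # prints "backup-started"
rm -f "$script"
```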

You may want to do -iname 'q*' to ensure the glob is expanded by the find utility, and not the shell (from wherever it is run).

Thanks, airemental. I’m still learning.
