[qubes-users] "private.img", attaching and backups

Hello everyone,

After some years, my Qubes backup script has become too outdated and no longer works. Before writing a new one, I have a few questions:

- I only plan to back up the user data, the "home" folder within VMs. For recovery, I am fine with reinstalling Qubes OS itself and all template VMs. I wish to take btrfs snapshots of /var/lib/qubes/appvms while some VMs are still running. Which block files do I need to restore the VMs' "home" files? I see:
  private.img
  private-dirty.img
  private.img.123@<timestamp>
Some years ago, it was not possible to mount the "'changes since VM start' heap file". Will I lose all file changes since the VM started if I don't shut it down before taking the snapshot and only back up one of those files? Will I be able to restore most of my files from the running VM when copying over all three of these "private" files?

- My idea is to attach all VM block files to a dedicated backup VM, mount the private.img (or the like) there locally, and rsync its filesystem contents to a remote network location.
What would be a good way to efficiently present all dom0 VM block files to that one backup VM?
Attaching all private.img (or the like) block files one after another to the backup VM (seems easy to break)?
Or copying them all into one big block file in dom0 and attaching that block file to the backup VM (much overhead)?
Or is there any way to attach the appvms folder from dom0 to the backup VM, instead of attaching block devices (a folder != a block device)?
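For reference, the snapshot-plus-loop-device route I have in mind could be sketched roughly like this (a sketch only; it assumes /var/lib/qubes lives on btrfs, and the VM names, paths, and loop device number are placeholders):

```shell
# Take a read-only btrfs snapshot so the images cannot change mid-backup
# (assumes /var/lib/qubes/appvms is on a btrfs subvolume; adjust paths)
sudo btrfs subvolume snapshot -r /var/lib/qubes/appvms "/snapshots/$(date +%F)"

# Expose one VM's private image as a read-only loop device in dom0
sudo losetup -f --show --read-only "/snapshots/$(date +%F)/work/private.img"

# Attach the resulting loop device (e.g. loop0) read-only to the backup VM
qvm-block attach --ro backup-vm dom0:loop0
```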

Thank you for any hints,

Stickstoff

I want to keep dom0 secure, so I like block-attach as a tool.
Also, I would only attach the data from a read-only btrfs snapshot, to secure the VMs a tiny bit more.
The backup VM has no task other than sending all data away to a remote backup destination, to keep its attack surface small~ish.
Sensitive data, like password safes, is locally encrypted in the respective VMs before backup.
In my scenario, I am more afraid of losing data than of being attacked because I lowered Qubes' security guards too much, so the top priority is an automated remote backup.

Hi

I don't use btrfs, but I do back up some qubes while they are running, by
mounting the private.img and private-snap volumes. I think this is what
you are proposing, and it will work fine. Mount the volume ro.
I use a disposable to take the backup of the qube, and spawn one for each
new volume. It works well, and I don't have issues.

private.img is the file that contains /rw for your qube.
The timestamped entries are backups of your private.img. You can revert
to these as you will.

I don't recognise private-dirty. Sorry.

The generic problem with backing up a "live" image is consistency:
with a journaled filesystem, write ordering is important for recovery, but when mounting a live image read-only, the image may change while you read it (unless you made a snapshot first). In general I'd advise backing up only properly unmounted images. Those WILL work, while others MIGHT work.
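For a crash-consistent middle ground, one can snapshot first and then mount the snapshot image without journal replay. A sketch, assuming ext4 inside private.img (the Qubes default) and that the attached snapshot shows up as /dev/xvdi in the backup VM (both are assumptions; adjust to your setup):

```shell
# 'noload' skips ext4 journal replay, so the kernel never writes to the
# device; a crash-consistent snapshot may still contain half-written files.
sudo mkdir -p /mnt/backup-src
sudo mount -o ro,noload /dev/xvdi /mnt/backup-src

# Copy the contents off to a remote host (placeholder target)
rsync -aHAX /mnt/backup-src/ backuphost:/backups/work/

sudo umount /mnt/backup-src
```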

Ulrich

But for frequent, rapid backups (like Time Machine), shutting down the
qube is not an option. You should use a backup tool that will work around
such read issues, e.g. zpaqfranz will add the file to the archive but
write a warning.
I used to use snapshots, but in the time I've been doing this I haven't
encountered any issues. I mean that I have been able to access and
restore files at any point in time, and *that* is the point of the whole
process.
If you want to create a snapshot and attach that, do so.

Hi unman et al,

Hi

I don't use btrfs, but I do back up some qubes while they are running, by
mounting the private.img and private-snap volumes. I think this is what
you are proposing, and it will work fine. Mount the volume ro.
I use a disposable to take the backup of the qube, and spawn one for each
new volume. It works well, and I don't have issues.

Good idea with the disposable for each container!

private.img is the file that contains /rw for your qube.
The timestamped entries are backups of your private.img. You can revert
to these as you will.

I don't recognise private-dirty. Sorry.

private-dirty.img appears to be my currently running VM image, rather than the
private-snap.img you mention, which is missing on my system. My private-dirty.img
is a mountable block-device file too.

I am trying to attach the private.img container files to a VM.
I already learned [1] that directly attaching a container file from within the
Qubes VM path (/var/lib/qubes/appvms/) is blocked.
So I take a snapshot, rename "appvms" to "appvms_working", and try to
attach this private.img to my backup VM. The first step, losetup, works and the
device is created. The second step, the actual attaching, then fails:

$ sudo losetup -f --show "/snapshots/2025-10-21/appvms_working/sys-usb/private.img"
/dev/loop10

$ sudo losetup --all | grep loop10
/dev/loop10: [0061]:18762 (/snapshots/2025-10-21/appvms_working/sys-usb/private.img)

$ qvm-block attach --ro -o frontend-dev=xvdap backup-vm dom0:/dev/loop10
qvm-block: error: backend vm 'dom0' doesn't expose device 'dev/loop10'
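A guess, not a confirmed fix: the error text ("doesn't expose device 'dev/loop10'") suggests qvm-block expects a bare device identifier rather than a /dev path, so it may help to check what dom0 actually exposes and attach by that name:

```shell
# See which device identifiers dom0 exposes (identifiers carry no /dev/ prefix)
qvm-block list

# Attach by identifier rather than by full path
qvm-block attach --ro backup-vm dom0:loop10
```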

Is this the correct way to attach a vm private.img to a different vm, to take a backup?

Thank you for your help!

[1] 4.0rc1 qvm-block doesn't work with dom0 loop devices · Issue #3084 · QubesOS/qubes-issues · GitHub

You can find some example code here: blib/lib/os/qubes4/dom0 at 094b28346bb2acb5566ec2cfa099356166d96c6e · 3hhh/blib · GitHub

Is this the correct way to attach a vm private.img to a different vm, to take a backup?

You could do worse than check the docs - How to mount LVM images — Qubes OS Documentation

You can find some example code here: GitHub - 3hhh/blib: bash library, blob/094b28346bb2acb5566ec2cfa099356166d96c6e/lib/os/qubes4/dom0#L1116

Thank you for the hint, I didn't expect so much detail within the
source code.

Is this the correct way to attach a vm private.img to a different vm, to take a backup?

You could do worse than check the docs - How to mount LVM images — Qubes OS Documentation

Oh, good to know, thank you! If I ever reinstall with LVM I'll keep that in mind!

In the end I gave up on my own backup script using
$ qvm-block attach
and will try to adapt wyng as a backup tool [1].
I'm still unsure how to feel about 300k+ files per backup (for 50GB), but it works for having *any* backup at least.

Thank you for your help!

-stickstoff

[1] tasket/wyng-util-qubes: Qubes integration for Wyng enables complete VM backup and restore - Codeberg.org

In the end I gave up on my own backup script using
$ qvm-block attach
and will try to adapt wyng as a backup tool [1].

As I regularly say, I don't see that Wyng provides the features that most
users expect from a backup/restore tool.

Want to find a file that you deleted some time in the last month or
year? Restore EVERY backup that you took, and look through the volumes
until you find it.
Want to find a specific version of that file you were working on? Again,
restore EVERY Wyng backup and check until you find it.
Note that if you have also added a large quantity of other data in the
meantime, you have to restore ALL that data until you find the one
file you are looking for.

No doubt Qubes Backup and Wyng serve a specific use case. It's just
that in my experience it isn't what most users want from their backups.
dd'ing a disk is undoubtedly a good way of preserving data, but it
doesn't serve most people's expectations of a backup/restore process.
Qubes Backup and Wyng are the same.

You should be aware of these limitations and be happy to work with them:
otherwise, put another backup regime in place that WILL serve your
needs.

I'm still unsure how to feel about 300k+ files per backup (for 50GB), but it works for having *any* backup at least.

I'm appalled at the prospect.

In the end I gave up on my own backup script using
$ qvm-block attach
and will try to adapt wyng as a backup tool [1].

As I regularly say, I don't see that Wyng provides the features that most
users expect from a backup/restore tool.

Want to find a file that you deleted some time in the last month or
year? Restore EVERY backup that you took, and look through the volumes
until you find it.
Want to find a specific version of that file you were working on? Again,
restore EVERY Wyng backup and check until you find it.
Note that if you have also added a large quantity of other data in the
meantime, you have to restore ALL that data until you find the one
file you are looking for.
No doubt Qubes Backup and Wyng serve a specific use case. It's just
that in my experience it isn't what most users want from their backups.
dd'ing a disk is undoubtedly a good way of preserving data, but it
doesn't serve most people's expectations of a backup/restore process.
Qubes Backup and Wyng are the same.

Ouch. I had not realized this behavior (yet).
Indeed, this sounds more like a last-line-of-defense disaster-recovery tool than a regular backup-and-versioning tool.

You should be aware of these limitations and be happy to work with them:
otherwise, put another backup regime in place that WILL serve your
needs.

I still prefer my idea of a backup script which uses
$ qvm-block attach
to attach each VM's private.img to a backup VM for rsync to a network destination. One file per VM: private.img. Versioning is done on the target, by its regular backup rotation.
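That per-VM loop could look roughly like this (a sketch only; the VM names, the device naming scheme, and the remote target are hypothetical placeholders, since the actual frontend device names depend on how each image is attached):

```shell
# Assumes each VM's snapshot image has already been attached to this
# backup VM and shows up under a predictable label (hypothetical naming).
for vm in work personal email; do
    sudo mount -o ro "/dev/disk/by-label/${vm}-private" /mnt/src
    rsync -aHAX --delete /mnt/src/ "backuphost:/backups/${vm}/"
    sudo umount /mnt/src
done
```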

I have always had unreliable behavior from
$ qvm-block attach
since I started using it several years ago. I would be happy to put up a four-digit bounty to make it reliable and better documented. But of course I can't even be sure whether it's a general problem or one solely on my side.
If anyone wants to invest time into this and/or into troubleshooting why it misbehaves on my side, please let me know, or tell me how to set up a public bounty for this in general.

With wyng, at least I have *any* backup now, which gives me time to figure things out.

I'm still unsure how to feel about 300k+ files per backup (for 50GB),
but it works for having *any* backup at least.

I'm appalled at the prospect.

Thank you. It does feel wrong to me, but I am not knowledgeable enough to prove why ^^

-stickstoff