First, thanks to the Qubes team and @tasket for such a phenomenal ecosystem. <3
I was previously using Qubes backup to perform full backups to two external disks, following the 3-2-1 backup standard: 3 copies, on 2 different media, with 1 copy off site (in my case: the live dataset on site, backup NVMe 1 at offsite location 1, and backup NVMe 2 at offsite location 2).
Recently I’ve been tinkering with wyng, and it’s brought up something interesting. The latest backup attempt resulted in:
/mnt/backup/wyng/bin/wyng-util-qubes backup --includes --dest=file:/mnt/backup/wyng/office.backup
wyng-util-qubes v0.9 beta rel 20241022
Enter passphrase:
Error code 1:
Wyng 0.8 beta release 20240827
Traceback (most recent call last):
  File "/mnt/backup/wyng/bin/wyng", line 5054, in <module>
    aset = get_configs(options) ; dest = aset.dest
           ^^^^^^^^^^^^^^^^^^^^
  File "/mnt/backup/wyng/bin/wyng", line 2230, in get_configs
    aset = get_configs_remote(dest, cachedir, opts) ; os.utime(aset.path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/backup/wyng/bin/wyng", line 2277, in get_configs_remote
    raise ValueError(f"Cached metadata is newer, from {cache_aset.path}\n"
ValueError: Cached metadata is newer, from /var/lib/wyng/a_927ca50c81587b1fa34dd51332cba666e87f89fa
1736873577.6630716 vs. 1736159876.9661942
As I understand wyng, this is because the latest metadata + snapshots are associated with the blocks on the other backup NVMe, not the old ones on this one.
Would --meta-dir help in this case, or would it still fail since the snapshots are also associated with the other backup NVMe?
Is there any way to have two sets of backup snapshots + metadata? Like --meta-dir backup-nvme1 and --meta-dir backup-nvme2?
Anyone else encounter this? My google-fu hasn’t turned up anything fruitful.
Hi! Normally you would only encounter this error if the archive was “rolled back” to an earlier state, and the error is designed to protect against rollback attacks.
However, it could be that you copied the archive to a different disk and are mounting both disks (in turn) at the same path. Wyng stores local metadata under a dir like “/var/lib/wyng/a_927ca50c81587b1fa34dd51332cba666e87f89fa” which is a hash of the --dest URL (including remote path) and the archive’s UUID. Copying an archive causes the copy to have the same UUID, and mounting the backup disk at the same path would make everything align so that Wyng looks in the same metadata dir for both archives. Then when you go through your backup disk rotation Wyng sees an archive that looks rolled-back because the timestamp is older.
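To illustrate why both disks end up sharing one metadata directory, here is a small sketch of the idea. The hash input, algorithm, and UUID values below are assumptions for illustration only, not Wyng's actual scheme:

```shell
# Illustrative only: derive a local metadata dir name from the --dest URL
# plus the archive UUID, the way Wyng does conceptually.
dir_for() {
  printf '%s%s' "$1" "$2" | sha1sum | awk '{ print "/var/lib/wyng/a_" $1 }'
}

# Same mount path + same UUID (a copied archive) -> same metadata dir:
d1="$(dir_for 'file:/mnt/backup/wyng/office.backup' '1111-2222-3333')"
d2="$(dir_for 'file:/mnt/backup/wyng/office.backup' '1111-2222-3333')"
[ "$d1" = "$d2" ] && echo "collision: both disks map to one metadata dir"

# Giving one copy a new UUID (or mounting it at a different path) separates them:
d3="$(dir_for 'file:/mnt/backup/wyng/office.backup' '4444-5555-6666')"
[ "$d1" != "$d3" ] && echo "distinct metadata dirs"
```

So whenever the dest path and UUID both match, Wyng has no way to tell the two physical disks apart, and the rotation looks like a rollback.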
You could try giving one of the archive copies a unique UUID with the command sudo wyng arch-check --dest=&lt;URL&gt; --change-uuid; Wyng will then use a separate metadata dir automatically. The only catch is when your local system uses Thin LVM: in that case Wyng can't pair snapshots with multiple archives and will ask you to use --remap. This slows backups down somewhat, but it shouldn't be an issue if you don't rotate between archives on every backup.
Alternatively, you could back up with Wyng to only one disk and sync copies of that archive to the other disks using rsync (see the suggested use in the Wyng Readme doc). I am developing a more efficient file sync method, but for now the basic rsync method works.
BTW, feel free to open an issue on github if my above guess about the cause was wrong or you otherwise need help resolving the problem.
@awren.andir I’ve added an option --force-allow-rollback to the Wyng 08beta branch which can be used to recover from the ‘Cached metadata is newer’ state.
They are indeed mounted at the same path. Both disks get inserted into a hot-swap PCIe NVMe bay, decrypted, and mounted at /mnt/backup, and they have the same archive path.
So I could continue with the two disks, but each backup would require a full scan on every run?
This might actually be the most efficient. I drop an NVMe into the machine and only run backups to it, then sync those over to the rotation NVMe disks weekly.
Thanks for your insight!
Excellent, I wasn’t sure if that was appropriate and saw some others posting wyng questions here.
I think I like the single-disk backup / rsync option. It will also let me schedule the backups: automatically shut down qubes, run the backup, then apply updates, so Monday mornings can be unpredictable.
Actually, I’ve never had a problem with a Qubes update in the 7+ years I’ve been using it, so I guess I’ll have to work instead.
Yes. To summarize: If Qubes is on LVM then rsync is probably the most efficient.
OTOH, if Qubes is using Btrfs or XFS then Wyng will create additional snapshots to track multiple archives; all you would have to do is change one archive’s UUID to be able to back up to each independently.