Backup Freezes at 20%

When trying to create a backup from the Qubes Backup GUI, or the CLI, it freezes at 20.94%. I have let it sit for hours and it doesn’t progress any further. I am using an ext4-formatted external drive attached to sys-usb.

Did you add the qube to which the backup is saved to the backup list?

Does this always happen, or was this a one-time occurrence?

I did at first, then I tried without that qube but the issue persists.

Thank you. It happens every time; I haven’t been able to successfully make a backup. I tried removing dom0, sys-usb, and the qubes with size 0B from the backup, but it still freezes at 20% every time, at the exact same backup file size.

Backing up to dom0 (instead of to a VM like sys-usb) should reveal the underlying error. If you don’t have enough free space for that in the dom0 filesystem, there’s a workaround:
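
One way to set up that workaround (a sketch, not necessarily the exact script from the linked thread) is a "dummy" compression filter that discards everything it reads, so the backup runs to completion and surfaces any real error without filling the disk:

```shell
# Minimal "dummy" compression filter: swallow the backup stream and emit
# nothing, so the backup file never grows beyond its small headers.
cat > dummy-compress <<'EOF'
#!/bin/sh
# Discard everything on stdin; write nothing to stdout.
exec cat >/dev/null
EOF
chmod +x dummy-compress
# In dom0 you would then put it on the PATH, e.g.:
#   sudo mv dummy-compress /usr/bin/dummy-compress
```

You would then run the backup with something like qvm-backup --compress-filter=dummy-compress <destination> (check qvm-backup --help for the exact flag syntax on your release); the backup data is thrown away, but any underlying error still shows up.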

Just an idea:
Is the file size approximately 4 GB? That can happen with FAT32-formatted storage devices: 4 GB is the maximum file size on FAT32.
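
If in doubt, the filesystem type of the destination is easy to check from a terminal in the qube where the drive is mounted (shown here with the current directory as a stand-in for the real mount point):

```shell
# Show the filesystem type of a path; FAT32 reports as "vfat" and caps
# individual files at 4 GiB. Replace "." with the backup drive's mount point.
df -T .
```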

(Same problem seen here.)

I created the dummy-compress file and set it to executable, but as the backup progresses the file size increases. Is it supposed to be an empty file? I’m afraid that if I let it go, it will take up all of my disk space.

It’s ext4-formatted, and it cuts off at exactly 35.5 GB each time.

Also, if I back up an individual qube it completes.

Not totally empty, but it shouldn’t grow beyond a few KB. Did you remember to pass the --compress-filter=dummy-compress argument to the qvm-backup CLI? (The qubes-backup GUI can’t be used for this.)

Yes - I’m sorry - after reading again, I see you said ext4 already.

I wrote this message earlier but did not send it. I am sending it now, in case you have not found a solution.

I would expect the problem to be in the qube to which the disk for saving is attached.

(The test from @rustybird sends the backup into a black hole, to eliminate the saving of the file: it sends the backup to a program instead of a file, and /usr/bin/dummy-compress runs as a program that sends its input to the black hole. The dummy-compress file itself must not get filled with data.)

A common error is to select a destination that is not the external disk with lots of space, so the qube gets filled instead of the disk.
sys-usb normally has only a small amount of storage; it would be unusual for it to have 35 GB of free space.

If you make two or more smaller backups, can you successfully back up a total of more than 35.5 GB?

You could try to open some terminals in the destination qube, and watch for problems…

  • In one, I would run df or watch df; we should see space decrease on the destination device. Maybe add the path of the backup destination to the command if there are too many lines.
  • In another terminal, run sudo journalctl -f to look for any out-of-memory or hardware problems.
  • top shows active processes and memory use.
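
For reference, non-interactive one-shot equivalents of those checks (the journalctl line needs root, so it is shown commented out; replace "." with the actual backup destination path):

```shell
df -h .                        # free space where the backup is being written
# sudo journalctl -f           # follow the journal for OOM/hardware errors
ps aux --sort=-%mem | head     # heaviest processes, a one-shot "top"
```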

Hi, thank you for your replies. I sat down again today to try to sort it out, so I tried backing up to dom0 with a known-safe USB drive attached. I was able to back up a couple of qubes, but one time it froze and hung after creating what looked like a viable backup; when I tried to verify it with restore, it hung and refused to abort (I might have been pushing it, with other qubes open taking up resources).

I am going to go through and back up in batches, but one of my AppVMs gave an error and crashed the backup altogether.

Backup Error: Failed to archive <bound method ReflinkVolume.export of 'varlibqubes:appvms/<appvm-name>/private'> file

I think that might have been what was preventing it, but I would still like to back it up. Does the error tell you anything helpful?

Do you consistently see the error if you try to back up just this one AppVM to dom0? If you look at the dom0 system journal in another terminal while this is happening (sudo journalctl -f), it might have more context.

Something seems to be wrong with the file at /var/lib/qubes/appvms/<appvm-name>/private.img but the error doesn’t say exactly what.

This time it got up to 20%, but it gave the same error.

Great that you can reproduce it. Hopefully there’s something interesting in the journal? The output of ls -l /var/lib/qubes/appvms/<appvm-name> might show something relevant too.

There is the private.img file, as well as a file called private-precache.img and one with a timestamp: private.img.194@2025-06T21:32:25Z

Assuming there’s nothing interesting in the journal (?), try

$ sudo tar -Pc --sparse /var/lib/qubes/appvms/<appvm-name>/private.img | cat >/dev/null

to see if that produces an error. (The seemingly useless pipe into cat is required to avoid a tar optimization for /dev/null.)

Thank you, it did throw an error:

tar: /var/lib/qubes/appvms/my-app-vm/private.img: Read error at byte 3714543616, while reading 512 bytes: Input/output error
tar: exiting with failure status due to previous errors

Probably due to bit rot, and Btrfs noticing the checksum mismatch.

That should definitely result in error messages in the system journal as well, specifically from the kernel. sudo dmesg or journalctl -k can be used in dom0 to read only the kernel messages.

Seems to be the case:

I/O error, dev sda, sector 105584488 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
BTRFS error (device dm-0): bdev /dev/mapper/luks-luksuuid errs: wr 13, rd 69, flush 0, corrupt 0, gen 0

Will btrfsck fix it?