Backing up my Qubes VMs is so slow

Why am I here?

Backing up my Qubes VMs is so slow, as gzip uses only one of my 12 real CPU cores.
So it is boring without my beloved workstation being operational.

BTW, I would need to fire up my retired Qubes R3.2 workstation, which is based on a 45 nm Xeon and was rendered unusable by Qubes 4.0.

So does anyone know a nice console switch for 4 monitors?
The new workstation has 4 DisplayPort outputs on one graphics board, a FirePro W5100.
The old one has 2 old DVI-based graphics boards, an HD 2400 or the like.

What quad-headed console switch (4× Full HD, PS/2) could you recommend?

But fixing the slow gzip would be sweet.

Qubes is a nice desktop OS …
But one should be able to clean up the smoking debris if the disk becomes 100% full and no VM starts because of corruption in config files, which makes one of the many Python scripts crash :frowning:
So it is not usable for beginners in such cases.

BTW there is

It should be possible to accelerate the packing of VMs with it, so that every available core compresses a different VM image to be included in the tar.

I would suggest first gzipping all the VM images individually and then tarring the gzipped images into a tarball.
This way you can easily parallelize the compression step, as you just start one gzip per VM image.

Needs to be investigated.

But this is much better than letting one core sit at 100% gzipping a huge tarball of uncompressed VM images.
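The per-image idea above can be sketched in a few lines of shell. This is only an illustration, not how the Qubes backup tool works: the function name `parallel_gzip_tar` and the `.img` layout are assumptions of mine.

```shell
# Sketch: compress each VM image in parallel, then tar the compressed results.
# parallel_gzip_tar and the *.img naming are made-up for this example.
parallel_gzip_tar() {
    # $1: directory containing the .img files; $2: output tarball path
    for img in "$1"/*.img; do
        gzip -kf "$img" &   # -k keeps the original; one gzip job per image
    done
    wait                     # every core stays busy until the last job ends
    tar -cf "$2" -C "$1" $(cd "$1" && ls *.img.gz)
}
```

With N images you get up to N concurrent gzip processes, so the tar step afterwards only has to copy already-compressed data.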




There is a nice multi-processor-capable compressor library which can also do gz but, beyond that, offers more modern and better-scaling compression too:


There is pigz, a parallel implementation of gzip.
That helps a lot!
You can keep the original tar.gz approach:
tar first, then pipe through pigz.
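A minimal sketch of that pipeline, assuming pigz is installed (`backup_dir` is a name I made up; it falls back to plain gzip where pigz is missing, the pipeline shape stays the same):

```shell
# Sketch: tar the directory, pipe the stream through pigz on all cores.
backup_dir() {
    # $1: directory to back up; $2: output .tar.gz path
    zip_cmd=gzip                                       # fallback: single-core gzip
    command -v pigz >/dev/null && zip_cmd="pigz -p $(nproc)"
    tar -C "$1" -cf - . | $zip_cmd > "$2"
}
```

Because pigz output is ordinary gzip, the resulting archive stays compatible with existing tar.gz tooling.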

But please use all CPU cores, no matter how you implement it.
I would suggest zstd over pigz, though, as it has more scalable compression algorithms.
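For comparison, the same pipeline with zstd's built-in multithreading; `backup_zstd` is my own name for this sketch, and `-T0` tells zstd to autodetect the core count:

```shell
# Sketch: tar the directory, compress the stream with zstd on all cores.
backup_zstd() {
    # $1: directory to back up; $2: output .tar.zst path
    tar -C "$1" -cf - . | zstd -q -T0 -o "$2"   # -T0: use every core
}
```

The trade-off: .tar.zst is not gzip-compatible, but zstd compresses and decompresses much faster at comparable ratios.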

I moved your posts into a separate thread; they didn't really fit the topic of the thread you posted them in.
