What’s the best and easiest way to stop the -cow files?
I don’t want there to be the private-cow.img or even the .old files… How can I get rid of this without deleting them all the time after I shut down my guests?
First of all, I would strongly suggest not using the old “file” storage pool: it has several limitations and is slow and storage-inefficient. Use LVM (the default), file-reflink (on XFS or Btrfs), or even ZFS instead.
But to answer your question: currently there is no option for that, but there will be once “Add an option to use private volume directly, not via a snapshot” (Issue #8767, qubes-issues on GitHub) gets implemented.
I don’t use LVM because it is too unstable and has too many restrictions and issues for my use; it is also way too slow and scatters data all over the drive.
Good to know something will be implemented in the near future to address the built-in snapshot behaviour. A checkbox to simply not run that code would be a good thing.
BTRFS and XFS and ZFS…
I don’t want a COW system on my drive, so BTRFS is out. (High RAM usage too)
I don’t use journaling as it is slow and inefficient, so XFS is out on that alone; it would also be unreasonable due to its dynamic and delayed allocation behaviour.
I don’t use filesystem raid, I use hardware raid, so ZFS is pointless. (Uses too much RAM as well)
So those filesystems are a little pointless. (In my situation)
But I wouldn’t touch LVM with an 11 ft pole.
The one time Qubes installed itself with LVM for the machines, the data was all over the place and couldn’t be handled or sorted easily, and the partitioning was a mess.
Having files allows for all data to be stored in a row for the guests, instead of all over the drive.
Just having files on the drive is way more efficient for reading and writing to.
If I had M.2 NVMe drives, there wouldn’t be any speed issue with the files at all.
I have no issues with the speeds; I max out my bus with the amount of things I have running. If I used ZFS I wouldn’t have enough RAM to run more than one guest at a time.
#8767 is just adding a checkbox to not execute some code; that’s about 10 minutes of work…
I would be happy to hear about what limitations there are on using the file based system that is better for backups, recovery, Qubes Upgrades, and more.
Does anyone have any resolution for this issue yet, please?
Templates in Qubes require some kind of copy-on-write: either in the filesystem itself (Btrfs, XFS), via LVM, or via device-mapper manually (the -cow files). The old file pool driver is deprecated and won’t get any new features, and generally it’s the worst option you could choose, on both the performance and the disk-usage side.
I proposed better options, which apparently you didn’t like. I don’t have any others.
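To make the “via device mapper manually” part concrete, here is a minimal sketch of the kind of snapshot the file pool sets up: a read-only base image plus a separate cow file, glued together with the dm-snapshot target. The file names are hypothetical, and it needs root on a Linux system with device-mapper:

```shell
# Create a base image and an (initially empty) cow overlay.
dd if=/dev/zero of=base.img bs=1M count=64 status=none
dd if=/dev/zero of=cow.img  bs=1M count=64 status=none
base_dev=$(losetup -f --show base.img)
cow_dev=$(losetup -f --show cow.img)
size=$(blockdev --getsz "$base_dev")   # device size in 512-byte sectors
# dm-snapshot table: "<start> <len> snapshot <origin> <cow> P <chunksize>"
# Reads fall through to the origin; writes are diverted into the cow device.
dmsetup create demo-snap --table "0 $size snapshot $base_dev $cow_dev P 8"
# ... the guest would now use /dev/mapper/demo-snap ...
dmsetup remove demo-snap
losetup -d "$base_dev" "$cow_dev"
rm -f base.img cow.img
```

Discarding `cow.img` afterwards throws away the changes, which is exactly how a template’s root volume stays pristine across AppVM runs.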
Why does it require COW?
Why not just direct usage?
That explains it for the root filesystem of an AppVM based on a template, and I’d expect that… But why does it use CoW for the private volume of a template-based AppVM, as well as for the root and private volumes of a standalone AppVM that has no template as its base?
App qube may also be a template for a disposable. Plus, this allows reverting to an earlier version of the volume (see qvm-volume tool). And take a backup while a qube is running without risking inconsistent state. And clone a qube while it’s running. The list goes on…
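The revert feature mentioned above is exposed through the `qvm-volume` tool in dom0. A short sketch (the qube name `work` is just an example):

```shell
# Dom0 only. Inspect a volume; the output includes the volume's
# configuration and any revisions available for revert.
qvm-volume info work:private

# Roll the volume back to a kept revision (the qube must be shut down).
# With no revision argument the most recent one is used:
qvm-volume revert work:private
```

Those revisions only exist because writes go through the snapshot layer instead of directly into the image.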
Anyway, as said above with the link to the GitHub ticket, 4.3 will have an option to disable it (and thus also not be able to use anything there that would require CoW).
For a template-based AppVM I would have expected there to be a root-cow.img that held the current temporary changes from the main template for that one guest.
But when it comes to a standalone AppVM, when doing any form of downloading, all of a sudden the private.img enlarges by, say, 30 GB and the private-cow.img enlarges by 30 GB as well, so usage increases by 60 GB…
And when you have things logging, you have 90 guests that are constantly increasing their root size by a little per minute, plus the other machines increasing all their sizes at times too. So it just keeps losing drive space for no reason, when simply writing to the filesystem in the original image would be fine; having it as CoW isn’t needed.
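The double-accounting complaint can be illustrated with ordinary files (this is only a toy model, not the real Qubes images): a base image and a separate overlay each consume their own blocks, so data written while a snapshot layer exists can be charged to disk twice.

```shell
# Toy illustration: 4 MiB written to a "base" image and the same
# amount landing in a hypothetical "-cow" overlay costs ~8 MiB on disk.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/private.img"     bs=1M count=4 status=none
dd if=/dev/zero of="$dir/private-cow.img" bs=1M count=4 status=none
du -sm "$dir"    # roughly 8 MiB for 4 MiB of logical data
rm -rf "$dir"
```

Whether the real layers both grow for a given workload depends on how the pool allocates blocks, but this is the shape of the concern.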
Even on a template-based AppVM, you don’t need the CoW system on the private image, as that is individual to each AppVM. It’s only the root volume that actually needs it if it’s linked to a template.
So the documentation provided just keeps telling me about the templates and the guests based on templates.
So the documentation still doesn’t clear anything up no matter how many times I read it, whether it be 5 weeks ago, a year ago, or even now that you link what I have read…
4.3… okay… So how can I disable it in earlier versions for all the standalone guests as well as on the private images?
Even if I have to program it in, just let me know what I have to change and where and I can do that.
I mean, in the volume config you have revisions_to_keep (often set to 0 or 1), rw=True (which is good), and save_on_stop as well… Can’t I just alter it to not use CoW, i.e. something like snapshots=0?
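For reference, the properties mentioned can be viewed and changed with `qvm-volume` in dom0 (the qube name `work` is just an example). Note that, at least before 4.3, setting revisions_to_keep to 0 only stops old revisions from being retained; it does not remove the runtime -cow snapshot itself:

```shell
# Dom0 only: stop keeping old revisions of the private volume,
# then confirm the setting took effect.
qvm-volume config work:private revisions_to_keep 0
qvm-volume info   work:private
```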
There is no option for that before 4.3.
It has been turned on somewhere, so where is it turned on? Where are the options set that tell it to do it that way?
It has to be somewhere… Thus, it can be turned off and there is a way.
But where?
Will 4.3 require the extra IOMMU/VT-d/AMD-Vi and Interrupt Remapping?
I get that error with 4.2 saying things won’t function properly, even though the CPU and everything else has that functionality…