I got a bunch of RAM and it seems most of it is not being used… I’d like to fix that. I’m searching the forums for as many keywords as I can think of. Any guidance would be greatly appreciated.
I’ve seen a few posts about using zram and zswap, but there is little information on the long-term outcome, much less on any change in SSD behavior.
I’m not looking to use all 64 GB all the time. Instead I’m looking to improve the responsiveness of the entire system and reduce SSD wear.
On Qubes OS, SSD wear comes mostly from two things (in my experience):
the huge amount of logs going to dom0 which is written to disk
swap usage in Qubes
There is not much you can do about the first, except configuring journald to store logs in memory, which is a perfectly valid setting in its options, but the logs will not persist across reboots. If you never need these logs (most users should not need them, IMO), you can keep them in memory. See the Storage= entry in the journald.conf man page for details. You should also disable journald (or make it volatile) in all qubes: they all log to disk, and those logs are discarded anyway because they live in /var, which is not persistent in AppVMs, so the writes only add wear.
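For example, a minimal sketch of doing this with a drop-in instead of editing journald.conf directly (the drop-in filename here is just an example):
#Keep the journal in memory via a drop-in
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nStorage=volatile\n' | sudo tee /etc/systemd/journald.conf.d/50-volatile.conf
sudo systemctl restart systemd-journald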
For swap usage in qubes, the easiest option is to disable swap in the template, so you are guaranteed it won’t swap. But if you don’t give a qube enough memory, it will hang badly under memory pressure.
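As a rough sketch (the exact swap device, and whether it is listed in /etc/fstab, varies by template):
#Turn swap off now and keep it off after reboot
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab    #comment out any swap entry, if present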
zram and zswap can both help reduce swap writes to disk, but they work differently:
zswap tries to compress data on its way to swap: if it compresses well, it stays in RAM compressed; otherwise it is sent to disk. It will not reduce your SSD wear by much, but it does a good job of improving swap speed when you often hit swap pressure.
zram creates a compressed swap device entirely in RAM, with a higher priority than the disk swap. You could remove the disk swap entirely when using zram. Compression adds a bit of CPU load but can reach a ratio of about 4:1 in real-world usage, which means that if you let zram use 512 MB of memory, you can store roughly 2 GB in it (at the expense of CPU).
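To illustrate, a manual zram setup could look like the sketch below (the helper package mentioned later in this thread is the more convenient route; lz4 and the 2G size are just example choices):
#Manual zram swap, set the algorithm before the size
sudo modprobe zram
echo lz4 | sudo tee /sys/block/zram0/comp_algorithm
echo 2G | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0    #priority above the disk swap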
As you have plenty of RAM, depending on what you do with Qubes OS, in your case I’d just disable the swap disks in your qubes and give them plenty of RAM.
Thanks for the explainer on zswap/zram. I’m gonna try implementing zram shortly (backups take a while). It’s my understanding you can set zswap at a higher priority and leave swap in place as well. Responsiveness is my main goal, so freezes/hiccups are not desirable.
I should RTFM… Thank you.
SSD wear is not super important, but given how often I spin VMs up and down, it seems wise to at least review the lessons learned by the community. Here’s the tweak I made to my templates:
#Debian-12 Templates:
sudo sed -ri 's/#Storage=.*/Storage=volatile/g' /etc/systemd/journald.conf
#Fedora-40 templates:
sudo sed -ri 's/#Storage=.*/Storage=volatile/g' /usr/lib/systemd/journald.conf
To clear out existing crap, I followed up those commands with:
sudo journalctl --vacuum-size=1
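If you want to confirm nothing is left on disk:
#Report how much journal data remains on disk
sudo journalctl --disk-usage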
I’ve read a few conflicting guides on disabling systemd-journald.service entirely. I tried disabling it on a Fedora template and it borked it badly. Is it better to just tinker with the Storage variable in journald.conf instead?
I switched a few templates to Storage=none without any issue.
You meant zram? If so, by default zram will have a priority a LOT higher than your swap disk, so you don’t have to worry about it. For best responsiveness, use the lz4 compression algorithm: it’s the fastest, and although it does not compress as well, it can still give a ratio of 3 or 4!
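You can verify both from inside the qube, for example:
#Check priorities and the active compression algorithm
swapon --show    #zram should show a higher priority than the disk swap
cat /sys/block/zram0/comp_algorithm    #the algorithm in [brackets] is the active one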
I’ve attempted this in a Fedora-40-xfce template and it seems I’m not able to change the compression algorithm. It appears the formatting/arguments have changed since this was written. Too tired to investigate tonight. To be continued…
I tried that as well without problems. I figured if I’m gonna do that, I might as well kill the service…
There is a helper package for zram which provides a simple configuration file and a systemd service to start/restart/stop it; it’s way more convenient. Just search for a package using the pattern “zram”.
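For instance (package names may differ between distributions; zram-generator and zram-tools are the usual candidates):
#Fedora template:
sudo dnf search zram
#Debian template:
sudo apt search zram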
What’s the appropriate location to store the zram config in the Templates/Standalones?
In the Fedora GitHub repo for zram here, it mentions locations for zram-generator.conf. What’s the appropriate location in a Qubes template? /etc/systemd/?
Also, in the example code it creates multiple zram devices. I’ve seen a few places say to create one zram device per (v)CPU. Would these zram devices be used sequentially or in parallel? Should I split the total “swap” size I want evenly between the zram devices?
Where can I find details/steps on loading compression algorithms?
compression-algorithm=
Specifies the algorithm used to compress the zram device.
This takes a whitespace-separated list string, representing the algorithms to use, and parameters in parentheses.
Consult /sys/block/zram0/comp_algorithm (and …/recomp_algorithm) for a list of currently loaded compression algorithms, but note that additional ones may be loaded on demand.
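Putting that together, a minimal sketch of a zram-generator.conf (assuming the generator reads /etc/systemd/zram-generator.conf inside the template, as its documentation describes; the values are only examples):
#/etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = lz4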
I’ve read a few conflicting guides on disabling systemd-journald.service entirely. I tried disabling it on a Fedora template and it borked it badly. Is it better to just tinker with the Storage variable in journald.conf instead?
If you disable the journal completely, you won’t be able to check what is going on with the system in case of trouble. Having it volatile is better, IMO.