I’ve tried to set up zram swap on Fedora 42 under Qubes 4.2, and it seems to be as easy as just running sudo dnf install zram-generator-defaults. I’m not sure the default configuration is optimal, especially wrt. memory ballooning. An LLM suggested adjusting vm.swappiness, but the default value (60) LGTM.
Do you think that the default config needs some further adjustments?
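For reference, the packaged defaults can be overridden with a small config file. A minimal sketch, assuming the usual override path /etc/systemd/zram-generator.conf (values are illustrative, not a recommendation):

```ini
# /etc/systemd/zram-generator.conf -- overrides the packaged defaults
[zram0]
# Fedora's packaged default, written out explicitly for illustration
zram-size = min(ram, 8192)
compression-algorithm = zstd
```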
EDIT: It seems that the zram size isn’t adjusted with memory ballooning:
1. Start a fresh DVM with just a terminal (=> low memory usage).
2. Run htop and/or watch cat /proc/swaps and see the initial values of swap size. In my case, htop showed 1.33GiB of swap total, which implies about 0.33GiB of zram swap, as there is a default ~1GiB swap.
3. Start something memory intensive. Maybe open Firefox and open many tabs. Maybe just dd from /dev/random to /tmp, but it will probably hit some cap.
4. Observe how the total swap size and total memory change. In my case, total swap size stayed at 1.33GiB, even though total memory usage was multiplied several times and even when the system was quite low on memory.
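That observation can be scripted instead of eyeballing htop. A minimal sketch, parsing the /proc/swaps format (shown here on a captured sample rather than the live file):

```python
def total_swap_kib(proc_swaps: str) -> int:
    """Sum the Size column (KiB) over all swap devices in /proc/swaps."""
    lines = proc_swaps.strip().splitlines()[1:]  # skip the header line
    return sum(int(line.split()[2]) for line in lines)

# Captured sample: a ~1 GiB disk swap plus a ~0.33 GiB zram device
sample = """\
Filename    Type       Size     Used  Priority
/dev/xvdc1  partition  1048572  0     -2
/dev/zram0  partition  349180   0     100
"""
print(total_swap_kib(sample) / 1024 / 1024)  # total swap in GiB
```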
This usually doesn’t make zram swap harmful (unless, perhaps, your initial RAM assignment is higher than the actual memory assignment, which isn’t typical for me); it just makes it less useful in some cases.
So, maybe we need a daemon which observes total available memory and adjusts the zram swap size accordingly?
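Such a daemon could, for example, periodically recompute a target from MemTotal, which does track ballooning. A hypothetical sketch of just the sizing logic (not the resize mechanics, and the 1/2 fraction is only an example):

```python
def target_zram_kib(meminfo: str, fraction: float = 0.5) -> int:
    """Derive a zram size target as a fraction of the current MemTotal.

    MemTotal in /proc/meminfo follows ballooning, so re-reading it
    periodically would let a daemon track memory hotplug events.
    """
    for line in meminfo.splitlines():
        if line.startswith("MemTotal:"):
            return int(int(line.split()[1]) * fraction)
    raise ValueError("MemTotal not found")

sample = "MemTotal:    4194304 kB\nMemFree:     1024000 kB\n"
print(target_zram_kib(sample))  # half of 4 GiB -> 2097152 KiB
```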
Thinking further, swap content is IIRC included in qube’s memory demands. This might be suboptimal:
For traditional swap, it is somewhat justified, because the qube could probably benefit from more RAM than currently assigned. Well, this can still be questioned and maybe there is room for some optimization, e.g., we could multiply it by some factor.
For zram swap, it IIUC implies that the occupied memory is counted twice – once as actual RAM usage (as compressed data size) and once as swap usage (probably as uncompressed data size). Well, I haven’t confirmed this behavior experimentally, and maybe there is something that prevents it, but I suspect that qubes that use zram swap will demand more actual RAM from memory ballooning. This isn’t an issue when you would hit max memory anyway, but in some scenarios it can prevent other qubes from getting enough RAM, or from being started.
So, maybe we could consider an adjustment to qubes-meminfo-writer to add configurable weight support for swaps. It would, however, require reading swaps from /proc/swaps instead of /proc/meminfo, because meminfo shows the sum of all swaps.
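A hypothetical shape for that weighting (device names and weight values are made up; the point is that per-device weights need /proc/swaps, since /proc/meminfo only exposes the sum):

```python
def weighted_swap_used_kib(proc_swaps: str, weights: dict) -> float:
    """Sum the Used column per device, scaled by a per-device weight.

    weights maps a device-name prefix (e.g. "/dev/zram") to a factor;
    unmatched devices get weight 1.0.
    """
    total = 0.0
    for line in proc_swaps.strip().splitlines()[1:]:  # skip the header
        fields = line.split()
        name, used = fields[0], int(fields[3])
        weight = next((w for prefix, w in weights.items()
                       if name.startswith(prefix)), 1.0)
        total += used * weight
    return total

sample = """\
Filename    Type       Size     Used    Priority
/dev/xvdc1  partition  1048572  204800  -2
/dev/zram0  partition  349180   102400  100
"""
# Count zram swap usage at half weight, disk swap fully (illustrative)
print(weighted_swap_used_kib(sample, {"/dev/zram": 0.5}))
```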
EDIT 2: It seems that I cannot resize zram swap while it is in use. So maybe we could have multiple zram swaps and add/remove them on the fly. Or maybe we could just accept the zram size being based on the initial RAM assignment and adjust the config based on that.
I’ve come across this issue before. My initial workaround, which works well enough, is to just set the zram size to ram * 5 or ram * 10, as this takes the initial value and results in a decently-sized zram device.
But to match defaults, shouldn’t it be zram-size = maxhotplug / 2048 (the default for zram-size is min(ram / 2, 4096))? But OTOH, I see zram-generator-defaults in Fedora uses zram-size = min(ram, 8192), so maybe this is okay?
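The two defaults, written out as functions of RAM in MiB (this is my reading of the upstream vs. Fedora formulas):

```python
def zram_size_upstream(ram_mib: int) -> int:
    # zram-generator's built-in default: min(ram / 2, 4096)
    return min(ram_mib // 2, 4096)

def zram_size_fedora(ram_mib: int) -> int:
    # Fedora's zram-generator-defaults: min(ram, 8192)
    return min(ram_mib, 8192)

# The formulas diverge on both small and large machines:
for ram in (2048, 8192, 32768):
    print(ram, zram_size_upstream(ram), zram_size_fedora(ram))
```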
What version of zram-generator is needed for this set! syntax? I guess it will fail if it’s too old? In that case this may need to be excluded on Debian, or even on specific Debian versions (if it’s too old there).
Oh, and also, xenstore-read memory/hotplug-max should be enough.
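If hotplug-max is reported in KiB (as other Xen memory keys are, though that is an assumption here), the conversion to a zram-size value in MiB would be:

```python
def hotplug_max_to_zram_mib(hotplug_max_kib: int) -> int:
    # KiB -> MiB is /1024; halving that (to mirror the ram / 2 default)
    # gives the /2048 divisor
    return hotplug_max_kib // 2048

print(hotplug_max_to_zram_mib(4194304))  # 4 GiB hotplug-max -> 2048 MiB
```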
> But OTOH, I see zram-generator-defaults in Fedora uses zram-size = min(ram, 8192), so maybe this is okay?
Yeah, min(ram, 8192) is the Fedora default. To be frank, I haven’t managed to find an explanation for why it’s limited to 8192, so I chose just ram, but I can adhere to that as well, assuming Fedora maintainers know best.
> Oh, and also, xenstore-read memory/hotplug-max should be enough.
Nice shorthand, thanks. I’ll make a PR to zram-generator with this first, adding this and updating the zram-generator.conf.example as it has an error.
IIUC, the configuration can create zram swap larger than total memory, is that right? It feels a bit odd, but it might be fine. IIUC, it can cause overcommitment (which is generally the case with zram swap), and writing to zram can fail, so another swap is then used. Am I correct?
Maybe Fedora’s default 8G doesn’t make sense for Qubes OS. Fedora’s use case is a general-purpose OS. With Qubes OS, we can have tiny qubes like (disp-)sys-net, with much lower memory requirements. Although I haven’t seen the justification for the 8G default, I doubt it is applicable to Qubes OS.
EDIT: I am considering rewriting qubes-meminfo-writer in a way that allows customization of weights for RAM, standard swap and zram swap. Not sure when/if I will find time for that.
I believe you are correct. Overcommitting zram is usually fine (as long as it does not get larger than 1.5x-2x the size of your physical memory), and overflow should go to the physical swap device.
You do raise a good edge case, though. Say a user has their qubes configured with huge maxmem values, but runs many at the same time, resulting in most of them being allocated smaller amounts of memory. This could cause the zram device to “eat” the whole RAM under memory pressure. I’m not sure what happens then, but I would guess that it’s not worse than having high memory pressure and constant disk swapping (so also an almost-unusable system).
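To put numbers on that “eat the whole RAM” scenario (assuming a ~2:1 compression ratio, the conservative end of the range mentioned below):

```python
def ram_consumed_by_full_zram(zram_size_mib: int, ratio: float = 2.0) -> float:
    """RAM actually occupied when a zram device of the given size is full.

    zram stores compressed pages, so a full device of size S holds S of
    uncompressed data in roughly S / ratio of physical RAM.
    """
    return zram_size_mib / ratio

# An 8192 MiB zram device filled completely on a qube ballooned down
# to 4096 MiB would occupy ~4096 MiB of RAM -- i.e. all of it.
print(ram_consumed_by_full_zram(8192))
```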
The max may make sense; I assume they did some research for this. But I wonder more about the ram vs. ram / 2 difference, especially in the case of Qubes OS, where not the whole maxhotplug may be available in practice. Do you know why the default in Fedora is to use all RAM?
Not really. My assumption is that due to zram’s great compression gains (2:1 to 3:1), it just makes sense and “works”. Back when I used Arch, I also had zram set to 1.5 * ram and I never ran into problems, even when zram filled up to 100% (the OOM killer kicked in as it would in any other situation; no other instability was observed).
I had drafted a reaction to this, but I became unsure after an LLM’s review. The main point is whether swap size affects the willingness to use it. I don’t think it does, but I am not sure.