ZRAM Swap on Fedora 42 using Qubes 4.2

I’ve tried to set up zram swap on Fedora 42 under Qubes 4.2, and it seems to be as easy as running sudo dnf install zram-generator-defaults. I’m not sure the default configuration is optimal, though, especially with respect to memory ballooning. An LLM suggested adjusting vm.swappiness, but the default value (60) LGTM.
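
For reference, zram-generator reads /etc/systemd/zram-generator.conf; zram-size and compression-algorithm are documented options, but the values below are illustrative, not necessarily the shipped defaults:

```ini
# /etc/systemd/zram-generator.conf
[zram0]
# "ram" is the visible RAM in MiB at generator time; this expression is
# an example policy, not a claim about the packaged default.
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
```

Note that the generator only runs at boot (or on daemon-reload), so whatever “ram” it sees is the RAM assigned at that moment.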

Do you think that the default config needs some further adjustments?

EDIT: It seems that the ZRAM size isn’t adjusted with memory ballooning:

  1. Start a fresh DVM with just a terminal (=> low memory usage)
  2. Run htop and/or watch cat /proc/swaps and note the initial total swap size. In my case, htop showed 1.33GiB of swap total, which implies about 0.33GiB of zram swap on top of the default ~1GiB disk-backed swap.
  3. Start something memory-intensive. Maybe open Firefox with many tabs, or just dd from /dev/random to /tmp (though that will probably hit some cap).
  4. Observe how the total swap size and total memory change. In my case, the total swap size stayed at 1.33GiB, even though total memory usage grew several-fold and even when the system was quite low on memory.
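
The swap totals from step 2 can also be read programmatically; a minimal sketch (Python, parsing the standard Linux /proc/swaps format):

```python
def read_swaps(path="/proc/swaps"):
    """Parse /proc/swaps into a list of (device, type, size_kib, used_kib)."""
    entries = []
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) >= 5:
                entries.append((fields[0], fields[1], int(fields[2]), int(fields[3])))
    return entries

# Example: total zram swap in KiB
# zram_total = sum(size for dev, _, size, _ in read_swaps() if "zram" in dev)
```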

This usually doesn’t make zram swap harmful (unless, perhaps, the initial RAM assignment is higher than the actual memory assignment, which isn’t typical for me); it just makes it less useful in some cases.

So, maybe we need a daemon that observes the total available memory and adjusts the zram swap size accordingly?
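
Such a daemon could derive a target zram size from the currently visible RAM. A hypothetical sketch of the sizing logic only (the policy numbers are assumptions, and the actual resize path is the hard part):

```python
def read_mem_total_kib(path="/proc/meminfo"):
    """Return MemTotal in KiB as reported by /proc/meminfo (changes under ballooning)."""
    with open(path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])
    raise RuntimeError("MemTotal not found")

def target_zram_kib(mem_total_kib, fraction=0.5, cap_kib=4 * 1024 * 1024):
    """Hypothetical policy: half of currently assigned RAM, capped at 4 GiB."""
    return min(int(mem_total_kib * fraction), cap_kib)

# A real daemon would poll this periodically and, when the target drifts far
# enough from the current zram size, recreate or adjust the swap device.
```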

Thinking further, swap content is IIRC included in the qube’s memory demands. This might be suboptimal:

  • For traditional swap, this is somewhat justified, because the qube could probably benefit from more RAM than currently assigned. This can still be questioned, and maybe there is room for some optimization, e.g., we could multiply the swap usage by some factor.
  • For zram swap, it IIUC implies that the occupied memory is counted twice – once as actual RAM usage (the compressed data size) and once as swap usage (probably the uncompressed data size). I haven’t confirmed this behavior experimentally, and maybe something prevents it, but I suspect that qubes using zram swap will demand more actual RAM from memory ballooning. This isn’t an issue when you would hit max memory anyway, but in some scenarios it can prevent other qubes from getting enough RAM, or from being started at all.
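
To illustrate the suspected double counting with made-up numbers (the 3:1 compression ratio is an assumption; real ratios vary by workload):

```python
# Suppose a qube pushes 3 GiB of pages into zram swap.
uncompressed_mib = 3072
compression_ratio = 3.0  # assumed
compressed_mib = uncompressed_mib / compression_ratio  # RAM actually held by zram

# If both the swap usage (uncompressed) and the zram backing store
# (compressed) count toward the qube's memory demand:
double_counted_mib = uncompressed_mib + compressed_mib  # 4 GiB demanded
actual_mib = compressed_mib                             # 1 GiB really occupied
```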

So, maybe we could consider adjusting qubes-meminfo-writer to support a configurable weight per swap. It would, however, require reading swap usage from /proc/swaps instead of /proc/meminfo, because meminfo only shows the sum over all swaps.
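
A weighted variant could look roughly like this; the weight scheme and parsing are hypothetical, not the actual qubes-meminfo-writer code:

```python
def weighted_swap_used_kib(entries, weights, default_weight=1.0):
    """Sum per-swap used KiB, scaling each device by a configurable weight.

    entries: iterable of (device, type, size_kib, used_kib) tuples as
    parsed from /proc/swaps; weights: mapping from a device-name
    substring to a multiplier (e.g. count zram swap at a reduced weight,
    since its backing RAM is already reported in meminfo).
    """
    total = 0.0
    for dev, _typ, _size, used in entries:
        weight = default_weight
        for pattern, w in weights.items():
            if pattern in dev:
                weight = w
                break
        total += used * weight
    return int(total)
```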

EDIT 2: It seems that I cannot resize a zram swap while it is in use. So, maybe we could have multiple zram swaps and add/remove them on the fly. Or maybe we could just be OK with the ZRAM size being based on the initial RAM assignment and adjust the config accordingly.
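
If resizing in place doesn’t work, the add/remove approach could be driven by simple logic like this (the per-device size and the half-of-RAM policy are assumptions):

```python
DEVICE_SIZE_MIB = 512  # fixed size per zram device, an assumption

def wanted_device_count(mem_total_mib, fraction=0.5):
    """How many fixed-size zram devices to keep active so that total zram
    swap tracks roughly `fraction` of the currently assigned RAM."""
    target_mib = int(mem_total_mib * fraction)
    return max(1, target_mib // DEVICE_SIZE_MIB)

# A controller would compare this against the number of active zram swaps
# and run zramctl/mkswap/swapon to add a device, or swapoff plus
# zramctl --reset to remove one (removal forces its pages to be
# swapped back in first, so shrinking under memory pressure is costly).
```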
