Prefmem = (pages in use + pages in swap (sans cache/buffers)) x 1.3 and large maximum memory VMs

Rather than bringing this to qubes-issues, I thought it better to start a discussion here, especially as I'm not familiar with the history of how this algorithm was chosen.

As per the Qubes 4.0 design, memory balancing will never ask a VM to return memory to Xen if doing so would reduce that VM's allocation below its dynamic "prefmem" value (calculated per the algorithm in the subject line).
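In other words, something like the following minimal sketch. This is my paraphrase of the documented behaviour, not the actual qmemman code; the function names are mine:

```python
def prefmem(used: int, swap: int, factor: float = 1.3) -> int:
    """(pages in use + pages in swap, sans cache/buffers) x factor."""
    return int((used + swap) * factor)

def clamp_balance_request(requested_alloc: int, used: int, swap: int) -> int:
    """Memory balancing never shrinks a VM below its prefmem."""
    return max(requested_alloc, prefmem(used, swap))
```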

This 1.3 multiplier is a global configuration value in dom0's /etc/qubes/qmemman.conf and can be changed by the owner of the system.
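For anyone who wants to check their current setting, it's a plain INI-style file; a quick way to read it from dom0 (the section/key names here are from memory, so verify against your own file):

```python
import configparser

# Read dom0's qmemman configuration. "global" / "cache-margin-factor"
# are quoted from memory -- check your own /etc/qubes/qmemman.conf.
cfg = configparser.ConfigParser()
cfg.read("/etc/qubes/qmemman.conf")
print(cfg.get("global", "cache-margin-factor", fallback="1.3"))
```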

I have found that, when using a combination of fixed-memory VMs (e.g. Windows) and variable-memory VMs with large maximums, this algorithm leaves a lot of RAM idle yet unavailable for starting new fixed-memory VMs.
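As a concrete (hypothetical) illustration of how much headroom a single large variable-memory VM can pin down under different multipliers:

```python
GiB = 1024 ** 3

# Hypothetical large variable-memory VM currently using 6 GiB, no swap.
used = 6 * GiB

for factor in (1.3, 1.2, 1.1):
    pref = used * factor
    held_back = pref - used  # RAM reserved beyond what the VM actually uses
    print(f"x{factor}: prefmem {pref / GiB:.1f} GiB, "
          f"held back {held_back / GiB:.1f} GiB")

# x1.3: prefmem 7.8 GiB, held back 1.8 GiB
# x1.2: prefmem 7.2 GiB, held back 1.2 GiB
# x1.1: prefmem 6.6 GiB, held back 0.6 GiB
# With a few such VMs, the difference decides whether another
# fixed-memory VM can be started at all.
```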

A quick change of the value to 1.2 or 1.1 and a restart of the qmemman service alleviates the issue, though perhaps at the risk of triggering memory pressure in VMs with smaller memory footprints.

It might be worth considering either a sliding scale of multipliers based on current in-VM memory usage (e.g. 1.3 for < 1 GB; 1.2 for 1-2 GB; 1.1 for 2-4 GB; 1.05 for > 4 GB), or alternatively a multiplier of 1.0 plus some fixed offset, or some combination of both.
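A rough sketch of both ideas, purely to illustrate the shape of the proposal; the breakpoints and the 300 MiB offset are placeholders, not a concrete recommendation:

```python
GiB = 1024 ** 3
MiB = 1024 ** 2

def prefmem_sliding(used: int) -> int:
    """Sliding scale: smaller relative margin for bigger VMs."""
    if used < 1 * GiB:
        factor = 1.3
    elif used < 2 * GiB:
        factor = 1.2
    elif used < 4 * GiB:
        factor = 1.1
    else:
        factor = 1.05
    return int(used * factor)

def prefmem_fixed_offset(used: int, offset: int = 300 * MiB) -> int:
    """Alternative: 1.0 x usage plus a fixed absolute margin."""
    return used + offset

for gib in (0.5, 1.5, 3, 6, 12):
    used = int(gib * GiB)
    print(f"{gib:>4} GiB used -> sliding {prefmem_sliding(used) / GiB:.2f} GiB, "
          f"fixed offset {prefmem_fixed_offset(used) / GiB:.2f} GiB")
```

The point is simply that the absolute margin stops growing linearly with VM size, so large VMs no longer hold multiple gigabytes in reserve.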

It would also be nice to allow some Linux VMs to exclude swapped anonymous pages from the prefmem calculation.
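Something along these lines, i.e. an opt-in per-VM flag; `count_swap` here is purely illustrative, not an existing setting:

```python
def prefmem(used: int, swap: int, factor: float = 1.3,
            count_swap: bool = True) -> int:
    """Current behaviour counts swapped anonymous pages toward prefmem;
    an opt-out would let already-swapped pages stay in swap."""
    resident = used + swap if count_swap else used
    return int(resident * factor)
```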

The tested system has 32 GB of RAM. I theorize that the above would give users with 16 GB or less a bit more flexibility.

Brendan

Any thoughts on whether this simple algorithm might need to be re-examined?

[After two semi-recent qubes-issues I had created were closed (rightly) because I had not done my due diligence, I am trying to approach this question in the correct manner, by posting to the forum and/or list for discussion.]

B