Why is the RAM always gone?

I have several whonix-ws based VMs that I use to start Tor Browsers in. Some time after I start their respective browsers, they always begin to fill their maximum memory and drive up the CPU load. The Qubes widget and xentop tell the same story here.

However, when I run free inside the VM, it isn’t even close to 100% memory usage. In fact, increasing max memory had no noticeable effect on this issue.
This is mainly annoying because the CPU-fan is very loud when it’s under that much load. Since that is probably caused by some form of swapping, is it possible to run a VM without any swap?
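For completeness, this is roughly how I checked memory and swap from inside the VM (read-only commands, no root needed):

```shell
# Inside the VM (read-only, needs no root):
free -h                        # the "Swap:" row; 0B everywhere means no swap is configured
cat /proc/swaps                # active swap areas; only the header line if there are none
grep SwapTotal /proc/meminfo   # same information, straight from the kernel

# Disabling swap for the running session would be (root required, not run here):
#   swapoff -a
# and to keep it off permanently, remove/comment the swap entry in /etc/fstab.
```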

But you’ll need enough RAM, and the CPU will be stressed more.
There are probably other ways I didn’t consider.


@naverone Bingo! :wink:

@enmus Thank you, I have started a VM with kernelopts “zswap.enabled=0”, let’s see where that leads.
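For reference, I set it from dom0 with qvm-prefs (the VM name below is a placeholder; note that setting kernelopts replaces the current value, so check it first):

```shell
# In dom0 ("mybrowservm" is a placeholder for the actual VM name):
qvm-prefs mybrowservm kernelopts                      # show the current kernel options first
qvm-prefs mybrowservm kernelopts "zswap.enabled=0"    # replaces them; takes effect on next VM start
```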

But I’m still interested in diagnosing this problem in more detail. What could cause Xen to think a VM is using all its RAM while the VM thinks it isn’t?

Should this be considered a bug? If so, in Qubes, Xen, or Whonix? (I haven’t seen it in any other VM; I use almost no Debian but some Ubuntu.)

Not sure about increased CPU load but the memory increase you described looks like Xen memory ballooning:

For example, if the Dynamic Minimum Memory was set at 512 MB and the Dynamic Maximum Memory was set at 1024 MB, this would give the VM a Dynamic Memory Range (DMR) of 512 - 1024 MB, within which it would operate. With DMC, XenServer guarantees at all times to assign each VM memory within its specified DMR.

When the host server’s memory is plentiful, all running VMs will receive their Dynamic Maximum Memory level; when the host’s memory is scarce, all running VMs will receive their Dynamic Minimum Memory level. If new VMs are required to start on “full” servers, running VMs have their memory “squeezed” to start new ones. The required extra memory is obtained by squeezing the existing running VMs proportionally within their pre-defined dynamic ranges.
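A toy calculation of the proportional squeeze described in that quote, with invented numbers (two VMs, both at their 1024 MiB maximum, and the host needing 256 MiB back for a new VM):

```shell
# Invented numbers, just to illustrate the squeeze arithmetic.
need=256   # MiB the host wants back
vms=2      # running VMs sharing the squeeze
max=1024   # each VM's dynamic maximum (MiB)
min=512    # each VM's dynamic minimum (MiB) - targets never go below this
per_vm=$(( need / vms ))        # equal shares here only because both VMs have the same dynamic range
new_target=$(( max - per_vm ))
echo "each VM is ballooned from ${max} MiB down to ${new_target} MiB"
# prints: each VM is ballooned from 1024 MiB down to 896 MiB
```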

It may be related, thank you for reminding me: once the VM uses all its memory, it stays that way. So once I run out of memory system-wide, I just can’t start new VMs; no squeeze happens (or at least not in the affected VMs).
Since it might be related: is it possible to invert the ballooning behavior, i.e. “start at minimum memory and increase it when the VM needs more”?

Regarding CPU load: under normal conditions, the VMs report ~10%, maybe 40% when Firefox calculates “stuff”. But under the full-RAM conditions I’m describing, they sit at almost 100% CPU load all the time, while the VM itself runs just as smoothly as ever. Since the physical CPU’s fan noise increases, I do believe it’s doing “something”, but the VM doesn’t require/notice it.
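In case it’s useful, one way to see from inside the guest whether that hypervisor-side load shows up at all is the “steal” counter: time the hypervisor ran something else while this guest wanted the CPU. A quick read-only check:

```shell
# Inside the VM: on the "cpu" line of /proc/stat, the 8th value (awk field $9,
# since field $1 is the label "cpu") is cumulative steal time in jiffies.
awk '/^cpu /{print "steal jiffies so far:", $9}' /proc/stat

# For a live percentage view, vmstat's "st" column shows the same thing:
#   vmstat 1 5
```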

Read about mem-set and mem-max here:
https://xenbits.xen.org/docs/unstable/man/xl.1.html
Example usage:
sudo xl mem-set disp1234 2g
(sizes take a single-token suffix like 2g or 2048m; “2 GiB” with a space won’t parse)

Perhaps you’re misunderstanding how qmemman works.