How To Turn Off Swap & The Cost Of Doing So?

I’m trying to prevent any sensitive data from ending up in swap.

I suspect the best solution is just to disable swap altogether.

Is this the way to go, or is there a better solution?

And if it is, how do I do that in Qubes 4.1?

And if I do it, what can I expect to change in terms of performance with, say, 32 GB of RAM, for average Qubes use (3 or so qubes running at the same time)?

You could comment out the swap partition in /etc/fstab and see how it goes. 32 GB should be plenty for 3 qubes. A good rule of thumb I use is 4 GB per VM that has a GUI.
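
Something like this, if you want it gone right away (a sketch against a default install; the swap LV name in your fstab may differ, so check your own file):

sudo swapoff -a        # stop swapping immediately
# then comment out the swap line in /etc/fstab, e.g.:
#   /dev/mapper/qubes_dom0-swap none swap defaults 0 0
free -h                # verify: Swap total should now read 0B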

I imagine I do this in dom0, and it would carry over to all the qubes?

Changing it in dom0 will only impact dom0.

If you want to disable swap everywhere, all Templates and Standalone VMs would also need to be similarly modified.
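
Same idea inside each Template or Standalone; there the swap entry usually points at the volatile volume (a sketch, verify against your template's /etc/fstab):

# inside the Template/Standalone VM:
sudo swapoff -a
# comment out the swap entry in /etc/fstab, typically:
#   /dev/xvdc1 swap swap defaults 0 0
# app qubes based on the template pick this up once the template is shut down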

B

The only thing that persists in an app qube is /home, correct?

If so, do I have the same swap-partition risk in an app qube as in dom0? If I shut down the qube, can I assume the swap clears with the shutdown?

The volatile volume in LVM is removed, yes.

Is it forensically cleared? Not intentionally**. Volatile volume content (e.g. swap content) may continue to exist in the unallocated space in the LVM thin pool until overwritten.

The same applies to the content of the snapshot volume for root (that is, changes to root volume since start, such as logging). That too can stick around in the unallocated space in the LVM thin pool until overwritten.

The same generally applies to disposable VMs too (with the data from the private volume added to the mix).

This is why the Qubes developers are looking at adding ephemeral encryption (via throwaway keys) for volumes and snapshots that are not meant to be kept past the current session.

Indeed, in 4.1 there's a flag to enable ephemeral encryption for volatile volumes only (the simplest case to implement) that isn't currently enabled by default; that might be usable… I think it's still considered experimental and may not be completely working. It's a property of the pool, I believe? See: System-wide toggle for encrypting volatile volumes · Issue #6958 · QubesOS/qubes-issues · GitHub
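
If I'm reading the issue right, it would be toggled per pool, something vaguely like this (untested sketch: both the property name ephemeral_volatile and the exact qvm-pool syntax are my recollection from that issue, not verified, so check qvm-pool --help before relying on it):

# dom0, hypothetical:
qvm-pool info vm-pool                             # inspect the pool's settings
qvm-pool set vm-pool -o ephemeral_volatile=True   # volatile volumes get a throwaway key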

Work continues on supporting it (ephemeral volatile); see marmarek's commits over the last year or so, referenced before the most recent conversations here: DisposableVMs: support for in-RAM execution only (for anti-forensics) · Issue #904 · QubesOS/qubes-issues · GitHub

The solution for the other volume types (root snapshots, private) is still in the design stage.
B

** Assuming a standard hard drive, the (LUKS-encrypted) data simply persists on disk until later overwritten, but it could be seen as plain text by a forensic examiner who has access to your password, dom0 LUKS key, login session, or a decrypted image. However, there's what I call an "opportunistic anti-forensics" feature that might clear the data for you. If you are using an SSD/storage device that supports discard/TRIM, and at every layer [storage hardware (e.g. SSD), LUKS, LVM, etc.] you and/or the Qubes developers have configured the system to pass discards down, then the SSD hardware will receive TRIM/discard commands when an LVM volume is removed by the Qubes LVM storage driver. On most hardware this will, within a very short amount of time, erase the memory cells containing the data the volume held. This is not a cryptographic guarantee, but it's relatively solid.

I take it this is not the default on 4.1? And I need to set up the config myself?

…Would the same apply to encryption layered inside the top-level LUKS encryption, such as mounted VeraCrypt vaults? If they don't have the VeraCrypt keys, would the data that might end up in swap be encrypted with those VC keys and not viewable with the top-level LUKS keys alone? Or does unencrypted VC data in RAM potentially end up in swap?

It wasn’t the default to flow it all the way down to the hardware on R4.0 a couple of years back. IIRC, I had to enable it for LUKS at the time… memory is fuzzy.

However, looking at fstab in a recently installed R4.1, LUKS is configured to pass discards down. LVM is too, by default under both versions. So you should be good, if your hardware supports it.

Notably, issue_discards is still set to 0 in lvm.conf, but that only applies to normal LVs, not thin LVs. Under a standard Qubes install, it's only useful to set it to 1 if you are removing thin pools regularly (I do), since a thin pool is built inside a normal LV object.
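
If you want to check each layer on your own system (dom0 commands; the luks-... mapping name is a placeholder, yours will differ):

lsblk --discard                        # non-zero DISC-GRAN/DISC-MAX = device supports TRIM
sudo cryptsetup status luks-...        # look for "discards" on the flags line
sudo lvs -o name,discards qubes_dom0   # thin pools should show "passdown"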

B

Hmm, if you're mounting volumes directly in dom0, then no, I wouldn't claim that the passwords, keys, or plaintext from VeraCrypt volumes would never end up in dom0 swap. It really depends on how you are using the data in dom0 and how the memory used to store it is flagged in the kernel.

If you were mounting the containers in domU VMs, it would be much less likely to end up in dom0 swap, since VM memory is Xen memory and can't be swapped by the dom0 kernel unless the VM sends it to dom0 (e.g. via some channel or introspection; the most critical path is likely the display of a domU window in dom0).

I’m trying to do the same for a different reason.
My reason: increasing SSD lifespan by minimizing write actions.

Have you looked into swappiness?
/etc/sysctl.conf

#  0: kernel swaps only to avoid out-of-memory situations (not a full disable)
#  1: minimum amount of swapping without disabling it entirely
# 10: often recommended to improve performance when there is sufficient memory
vm.swappiness = 60

60 is the default. That's often explained as: once 40% of the RAM is in use, swapping starts (though in practice swappiness is a tendency, not a hard threshold).
I wonder if the 10% recommendation is still up to date… on a system with 64 GB of RAM, that reading means swapping starts while there are still ~6 GB available (which slows your system down and wears the SSD).
I think I'll set it to 1 and see what happens, unless anyone can share a good reason not to.
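
For anyone trying the same, a quick way to test before committing (the drop-in filename below is just a convention):

sudo sysctl vm.swappiness=1          # takes effect immediately, lost on reboot
echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist it
cat /proc/sys/vm/swappiness          # verify the running value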

I am not sure that any value you set there would prevent swapping before the limit is reached.
I've read that a better solution is to leave swap as it is, create a zram block device to use as swap, and set the SSD swap to a lower priority.
This would increase overall speed, while the SSD swap would still prevent crashes when zram is exhausted. It should also help the OP to re-examine her goal.
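
Done by hand, it would look roughly like this (a sketch; the size, algorithm, and SSD swap device are placeholders to adjust):

sudo modprobe zram
sudo zramctl /dev/zram0 --size 4G --algorithm zstd     # compressed swap in RAM
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0                  # preferred: zram
sudo swapoff /dev/mapper/qubes_dom0-swap               # re-add SSD swap with lower priority
sudo swapon --priority 10 /dev/mapper/qubes_dom0-swap  # fallback only
swapon --show                                          # verify both, with priorities
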
Sounds beautiful, doesn't it?

Also it’s not clear to me how memory ballooning and Linux swap interact (dom0 participates in ballooning by default). Generally I think ballooning takes precedence over swap.

Also, I’ve found it very hard to get domUs to swap onto their own volatile LV in general.

B

It looks like swapping might start almost immediately after ballooning starts, too. Given that (?), one more reason to try zram, to avoid the swapping and the performance hit?

I haven’t been able to get zswap to make a difference in my testing.

I guess most of us would like to hear more about that testing, to see whether the myth is busted?

Actually, I decided to try this myself for reasons totally different from those already stated. I specifically disabled zswap on the kernel command line in favor of zram, with swappiness set, yes, to 100. I hope I'll post my findings in a separate topic.
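
For reference, the change looks like this in dom0 (a sketch; on EFI systems the grub.cfg path differs, e.g. /boot/efi/EFI/qubes/grub.cfg):

# add zswap.enabled=0 to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
cat /sys/module/zswap/parameters/enabled   # after a reboot: should print N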

As promised