Ramdisk in Qubes?

After reading your post a second time I got you.

Actually pretty obvious.

There’s a significant difference between ramfs and tmpfs: ramfs will keep consuming memory as you write to it. (You can set a maximum with a kernel parameter, but you can't adjust it on the fly.) Once the memory is used, it stays used.
By contrast, tmpfs gives back unused memory as it is freed, and you
can increase and decrease the allocated RAM with a remount operation.
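As a rough sketch of the resize behaviour (the mount point and sizes here are my own placeholders; the commands need root):

```shell
# Create a 512 MB tmpfs, then grow it to 1 GB without unmounting.
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=512M tmpfs /mnt/ram
sudo mount -o remount,size=1G /mnt/ram
df -h /mnt/ram    # reports the new 1G limit; df doesn't work on ramfs
```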

There’s a recent patch too which allows mounting tmpfs with a noswap
option. That should be hitting kernel-latest soon, and will let you run
Qubes with swap, alongside qubes in tmpfs with no risk of swapping to disk.
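Once the patch lands, the mount would presumably look something like this (mount point and size are hypothetical; the noswap option only exists on kernels that carry the patch):

```shell
# Ask the kernel never to swap this tmpfs out to disk.
sudo mount -t tmpfs -o size=1G,noswap tmpfs /mnt/secrets
```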

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.

I don’t understand what you’re saying here. If I allocate a 2 MB ramdisk, are you saying it can grow beyond that?

That would certainly be helpful w/r/t being able to utilize Xen memory for RAM disks as needed via flexible dom0 memory ballooning (vs. pre-allocating a static amount of RAM to dom0 to cover RAM-disk usage).

Has some possibilities w/r/t truly disposable VM writes as well.

Looking forward to it.


If you use ramfs, yes.
It will keep growing as you use it until you exhaust RAM.

tmpfs will honour the size you allocate.
You can also use df with tmpfs, but not with ramfs.
You can resize your tmpfs disk as you wish, increasing and decreasing
the RAM allocated.

Another important feature: ramfs will not give back memory. If (e.g.)
you download a 2 GB ISO to ramfs, then delete it, that memory is not
given back. The ramfs disk expands as needed, but the RAM won't be freed
up as long as the disk exists.
By contrast, with tmpfs, the RAM usage will be reduced when you delete
the file.


This is good to know for future use. I doubt it will burn me in my current situation.

(Presently, I allocate 2 MB, probably way too much, to store one-use encryption keys, the ciphertext file, and the plaintext (decrypted) file, then I unmount the ramdisk, hopefully wiping all that out permanently in the process. Provided unmounting a ramfs actually frees the memory in the VM, there should be no issue; if it doesn't free it, I suppose I could have trouble after 50 or so uses unless I give the qube more memory.)
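For what it's worth, that workflow might look roughly like this (the file names, mount point, and use of gpg are all my own assumptions, not what the poster necessarily runs):

```shell
# Hypothetical one-shot decrypt on a RAM-backed mount.
MNT=/mnt/secret
sudo mkdir -p "$MNT"
sudo mount -t ramfs ramfs "$MNT"
cp onetime.key cipher.gpg "$MNT"/
gpg --batch --passphrase-file "$MNT/onetime.key" \
    --output "$MNT/plain.txt" --decrypt "$MNT/cipher.gpg"
# ... use $MNT/plain.txt ...
sudo umount "$MNT"   # key, ciphertext and plaintext go away together
```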

I’m looking forward to tmpfs getting that switch. I’d rather use it, even under current circumstances.

As an aside: while testing for a while there, I wasn't freeing the disk in my script (because I wanted to see what was on it), so there were often a dozen or so ramfs disks mounted on top of each other at the same mount point. Only by running umount over and over did I clear that. Now my script unmounts repeatedly until umount fails (because the last one is cleared), just in case a previous call exited before unmounting.
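That repeat-until-failure loop can be as simple as (mount point is a placeholder):

```shell
# Keep unmounting stacked mounts until umount fails,
# which means nothing is mounted at $MNT any more.
MNT=/mnt/ramdisk
while sudo umount "$MNT" 2>/dev/null; do
    :
done
```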


The noswap feature for tmpfs is mentioned in kernel.org's documentation, but as far as I can tell it is not available in the 6.3.2-1.qubes kernel. Too bad.


I can think of another use for a ramdisk qube.

If you want to copy large files (several GB) to offline storage and the files don’t start out on a VM that has access to the storage, you have this situation:

Qube A: Has the large files on it.
Qube B: Has access to offline storage.

On qube A, do a qvm-copy of the file to qube B. Qube B can then just copy from QubesIncoming/A to offline storage. The disadvantages are that qube B needs a lot of storage and your HDD/SSD sees some gratuitous use: the file has to be copied from one place on your disk to another before it is finally copied off the disk entirely. That intermediate step would not be necessary on a more mundane OS. If you do this frequently enough, it could shorten the life of your SSD, since there are only so many writes it can take. But if qube B is in RAM, there's no issue (provided, of course, you have enough RAM).

Even if qube B is in regular storage, but simply mounts a huge ramdisk to QubesIncoming/A, that could be of benefit.
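Sketching the idea (qube names, sizes, and paths are placeholders; qvm-copy prompts you in dom0 to pick the destination qube):

```shell
# In qube B: back the incoming directory with RAM before the transfer.
mkdir -p ~/QubesIncoming/A
sudo mount -t tmpfs -o size=8G tmpfs ~/QubesIncoming/A

# In qube A: send the file (dom0 asks you to select qube B as the target).
qvm-copy /home/user/big-image.iso

# Back in qube B: move it straight to offline storage, then free the RAM.
cp ~/QubesIncoming/A/big-image.iso /run/media/user/offline-disk/
sudo umount ~/QubesIncoming/A
```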

Not having experimented (yet), I'm guessing the ramdisk can't be larger than the memory allocated to the qube (it comes from the qube's memory, not dom0's, right?). The issue of limited RAM could be alleviated by copying in pieces (like you used to have to do with downloads or floppies), at which point it's probably not worth the trouble.

I said it would be hitting kernel-latest soon, not now.


Got you. I was just wondering aloud about the premature documentation on kernel.org. Maybe I'll try out 6.4-rc4 (outside of dom0) later.

I thought about this, too. AFAIK repeatedly starting VMs could contribute to wear on one's SSD, e.g. when disp-mgmt-$template is created to update $template. It had me wondering whether I should update my templates less often. Modern SSDs are supposed to take a lot of writes, though.

So, yes, using ramdisks for short-lived VMs or VMs handling large files could be an alternative.

I've created a VM kernel named ephemeral, patched its initramfs and the dispVM Python script, and tried the ephemeral kernel on a couple of VMs, but wasn't successful: no dmhome device appeared in those VMs.

Has anyone else tried the patches?

Is this still under active development? Is it possible to use qvm tools to create a tmpfs-based dispVM?