Really disposable (RAM based) qubes

@emacs84

I always use this trick for my RAM qubes to avoid such situations:

Although it is RAM-based, I try to use its “disk” as little as possible, because (as you have seen yourself) writing to it consumes RAM from the pool (and from dom0).

Instead, I use a big tmpfs RAM drive and do everything in it: downloading, etc. That way, very little is taken from the pool.

For example, I have a RAM qube with 20 GB of tmpfs storage that uses only 200M from dom0, created with the params:

... -p maxmem=0 -p memory=20000 -s 200M
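
My reading of those parameters (a sketch, assuming -p sets qvm-create properties and -s the volume size, as the ram-qube command at the end of this post suggests):

# Hedged interpretation of the parameters above:
#   -p memory=20000   # ~20 GB of RAM given to the qube directly by Xen at startup
#   -p maxmem=0       # disables memory balancing, so that allocation stays fixed
#   -s 200M           # only 200 MiB of the dom0 RAM pool backs the qube's volumes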

It is a Whonix qube inside which:

user@host:/tmp/download > df -h /tmp/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            19G   12K   19G   1% /tmp

So, the RAM is consumed from Xen, not from dom0.
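
A rough way to verify this (assuming a qube named <qube-name>; adjust to yours):

# Inside the qube: write a test file into the tmpfs
dd if=/dev/zero of=/tmp/download/test bs=1M count=1024
df -h /tmp                            # Used grows by ~1 GiB here...

# In dom0: the qube's private volume stays small
qvm-volume info <qube-name>:private   # ...while usage here stays near the -s size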

Here is a handy /rw/config/rc.local for auto-sizing the tmpfs:

#!/usr/bin/env bash

# =============================================================================
# Create RAM drive

# Everything in KiB
free_memory=$(grep MemFree /proc/meminfo | grep -Eo '[0-9]+')

# Keep some memory free: an oversized tmpfs can deadlock the machine
# https://www.kernel.org/doc/html/latest/filesystems/tmpfs.html
min_free_memory=$(( 512 * 1024 ))
min_tmp_size=$(( 64 * 1024 ))

tmp_size=$(( free_memory - min_free_memory ))
if (( tmp_size < min_tmp_size )); then
	tmp_size="${min_tmp_size}"
fi

mount -o remount,size="${tmp_size}K" /tmp

# =============================================================================
# Custom default dir for storing incoming stuff

download_dir='/tmp/download'
cache_dir='/tmp/.cache'

# shellcheck disable=SC2174
mkdir   --parents \
	--mode=700 \
	"${download_dir}" \
	"${cache_dir}"

chown   1000:1000 \
	"${download_dir}" \
	"${cache_dir}"

ln -sfT "${download_dir}" "/home/$(id -un 1000)/QubesIncoming"

rm -rf "/home/$(id -un 1000)/.cache"
ln -sfT "${cache_dir}" "/home/$(id -un 1000)/.cache"

For simpler qubes, using minimal templates, it can be even more frugal (only 50 MiB from the pool):

ram-qube -p template=offline-min-dvm -p memory=500 -p maxmem=0 -s 50M -c xterm
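
Note that with memory=500, MemFree at boot is well below the 512 MiB reserve, so the rc.local above clamps /tmp to its 64 MiB floor, which is plenty for an xterm-only qube.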

There is also a small script for monitoring your RAM usage in this post.
