Really disposable (RAM based) qubes

@Resende
This version works well, but the DispVMs folder remains in the HOME folder and not in a subfolder (which would be more convenient and clearer) :confused: . And I don’t dare touch the scripts of this topic anymore! lol.
In any case, good job :wink:

EDIT: your version doesn’t remove the ram_pool of DispVMs. The most successful script is @qubist’s :slight_smile:

Thanks for the feedback @Tezeria

I’ve just rebased the code on qubist’s version, and now it successfully removes the DispVMs’ ram_pool.

The DispVMs folder will be created in the “/tmp/shadow-qube/” directory instead of $HOME.

Additionally, I added the --ephemeral <true|false> option. You can use it in combination with the -k <kernel-ephemeral> option to encrypt content in RAM.
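For example (a hypothetical invocation; the script and template names are placeholders, like elsewhere in this thread):

# Hypothetical usage: combine --ephemeral with the ephemeral kernel.
bash XXX.sh --template X-dvm --ephemeral true -k <kernel-ephemeral> -c xterm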

@Tezeria

The only thing is that the DispVMs folders are stored directly in the HOME folder and not in ~/tmp.

That’s how it was designed; it was not a bug. “Fixed” now (all in ~/tmp/). Check the latest edit of the OP.

@Resende

Your version also changes the default value of memory. Why is that?

This:

qvm-run "${qube_name}" "${command_to_run}" && wait

means that wait will be executed only if qvm-run returns a zero exit status, i.e. in case of killing the VM (or other error) it won’t run. Considering that wait returns the termination status of the process it waited for, this is somewhat confusing. Actually, now that I am looking at all this, I question whether wait is necessary at all, as it is normally needed when a process is explicitly run in the background - something we don’t have here. I am removing it.
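To illustrate the && semantics (a generic shell example, not from the script):

# The right-hand command runs only when the left-hand one exits with status zero.
false && echo "not printed"   # false exits non-zero, so echo is skipped
true && echo "printed"        # true exits zero, so echo runs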

Since the goal is to clean up regardless of qvm-run’s exit status, it seems to me more correct to use:

set +e
qvm-run "${qube_name}" "${command_to_run}"
set -e
[...]

I have updated the OP with that, including (with some editing) your kernel and ephemeral options. One thing I notice, though (hence the EXPERIMENTAL note): using ephemeral makes it impossible for me to run programs in the target qube. I don’t know why. I can start neither torbrowser nor xterm. The qube starts and cleanup works, but that’s all.

Could you clarify why that is so? (I have no experience with ephemeral)


Hello, thank you for your hard work, but I have some trouble with the script, and I think I’m not the only one.

First of all, I’m using Qubes 4.2.0-rc4 (testing), and the script only works with the Whonix template.

  1. The script can’t run commands on other templates. For example, I get “command failed with code 127” if I try to launch a disposable terminal. If I try to start a flatpak command with the script, like “bash XXX.sh --template X-dvm -c flatpak run org.mozilla.firefox -n mirage-firewall”, the script says the command failed with code 1.

  2. Something very strange I’ve noticed: the ephemeral option works, but not every time. If I add “ephemeral true”, the Whonix VM will start, but that’s all - I don’t see torbrowser opening. BUT if I remove the ephemeral option, the Whonix VM starts without trouble. Another strange thing: yesterday the ephemeral option was working perfectly on the Whonix VM… I don’t understand what happened between yesterday and today.

  3. Before your script, I was using GitHub - kennethrrosen/qubes-shadow-dvm (a simple dom0 bash script inspired by Unman’s ‘Really Disposable Qubes’ scripts) for every VM, and it works smoothly: the flatpak command runs without problems, and Whonix too. I don’t understand why I can’t run the flatpak command with your script, since it’s based on the qubes-shadow script lol

And sorry for my English, it’s not my native language :sweat_smile:

@qubist
DispVMs are created in the $HOME/tmp folder, but with the $HOME/tmp/dispXXXX/appvms/dispXXXX/volatile.img layout instead of $HOME/tmp/dispXXXX/appvms/volatile.img as in previous versions. Is that what you wanted to do?
Other than that, everything is OK! :slight_smile:

Same thing for me.

I’m using Qubes 4.1.2 and everything is OK. I don’t use flatpak, so I can’t test it…

@qubesuser1234

I am using only Qubes 4.1.2. I have never tested the script on any other version.

  1. […]

I am getting the same for your flatpak command, but I can run other commands on different templates fine. If the command ends with a non-zero status code (127 conventionally means “command not found”), the command itself fails for some reason. The script merely creates a disposable VM and runs a command in it. To my mind, if the command fails, it should fail regardless of whether the VM was created manually or through the script.

  2. […]

I have no experience with ‘ephemeral’. I added it only because it was suggested by others. As I noted myself, I also face issues with it, hence it is marked EXPERIMENTAL.

  3. […]

I have not used the scripts of others; they were only used as a starting point (reference). If you find an actual bug in the current script, please describe it (steps to reproduce, expected result, actual result).

@Tezeria

The script creates only ~/tmp/qube_name. Everything below that is created by the qvm-* tools.


That was a local change I forgot to undo before sending it here; sorry about that.

You’re right.
I misunderstood the reason there was a wait in the early version of your script, as it seemed to prevent the removal of remnants without user interaction (Ctrl+C). I thought it would make more sense to wait for user interaction before starting to delete anything in the scenario where qvm-run returns a non-zero exit status, but I don’t see when that’s the case.

One more thing: I’ve noticed that the default private storage size is 2 GB. In this case, wouldn’t it be necessary to run qvm-volume extend ${qube_name}:private ${tempsize} after creating the qube?

Once you create a qube with the patched kernel version, you may encounter issues when trying to run commands. Usually this happens because the qubes-mount-dirs service fails while attempting to locate /dev/xvdb for mounting /rw. However, the ephemeral kernel creates /dev/mapper/dmhome for this purpose, so /dev/xvdb is already in use. You can resolve this by cloning the TemplateVM you need to use with the ephemeral kernel and editing the following:

at /usr/lib/qubes/init/mount-dirs.sh line:10

-- if [ -e /dev/xvdb ] ; then mount /rw ; fi

++ if [ -e /dev/mapper/dmhome ] ; then mount /rw ; fi

at /usr/lib/qubes/init/setup-rwdev.sh line:9

-- dev=/dev/xvdb

++ dev=/dev/mapper/dmhome

at /usr/lib/qubes/init/setup-rw.sh line:3

-- dev=/dev/xvdb

++ dev=/dev/mapper/dmhome

at /etc/fstab line:5

-- /dev/xvdb  /rw  auto noauto,defaults,discard,nosuid,nodev 1 2

++ /dev/mapper/dmhome  /rw  auto noauto,defaults,discard,nosuid,nodev 1 2
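The same edits could be applied in one go from inside the cloned TemplateVM (a sketch; it assumes /dev/xvdb occurs in those files only on the lines shown above):

# Sketch: swap /dev/xvdb for /dev/mapper/dmhome in the files listed above.
# Run inside the cloned TemplateVM; sed keeps .bak backups of each file.
sudo sed -i.bak 's|/dev/xvdb|/dev/mapper/dmhome|g' \
    /usr/lib/qubes/init/mount-dirs.sh \
    /usr/lib/qubes/init/setup-rwdev.sh \
    /usr/lib/qubes/init/setup-rw.sh \
    /etc/fstab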

@qubist, thanks for your time and effort.

The script is now running fine after a reboot. I guess this was maybe a bug in Qubes testing(?). Now I can run other commands on different templates without trouble - I’m so happy! Only the ephemeral option and the flatpak command are still not working.

And thank you for everything you do to try to help!

You might want to add an option to disable memory balancing.

...
	 -m, --memory          Qube memory in MB [1000]
	 -x, --maxmem          Qube maxmem in MB [2000]; 0 disables dynamic memory management
	 -v, --default_dispvm  Default disposable template [none]
...

...
memory='1000'
maxmem='2000'  # when set to 0 dynamic memory management will be disabled
default_dispvm=''
...
...
		-x | --maxmem)
			   maxmem="${2}"
			   shift 2
			   ;;
...
...
qvm-create --class DispVM \
	   "${qube_name}" \
	   -P "${pool_name}" \
	   --template="${template}" \
	   --property netvm="${netvm}" \
	   --property memory="${memory}" \
	   --property maxmem="${maxmem}" \
	   --property default_dispvm="${default_dispvm}" \
	   --property kernel="${kernel}" \
	   --property label="${label}"
...

Modified from the original code: removed the qvm-service meminfo-writer call; maxmem is set instead.
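For example, to start a disposable with memory balancing disabled (a hypothetical invocation; the script name is a placeholder and the option letters follow the snippet above):

# -m sets the initial memory; -x 0 disables dynamic memory management.
bash XXX.sh --template X-dvm -m 1000 -x 0 -c xterm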

@whoami
With your modification to disable memory balancing, I get the error “qvm-service: error: unrecognized arguments: meminfo-writer on”

PS: I replaced “-d” with “-i” because the “-d” option is already used by “--tempdir”.

Thanks for your feedback. I will check.

I have it working. I will edit my previous post.

For me, everything is OK :slight_smile:
Apps work fine; the ~/tmp folder and the /var/log/qubes/*dispxxx.log files are erased once the apps are closed! :smiley:
Very good work!

@Resende

One more thing: I’ve noticed that the default private storage size is 2 GB. In this case, wouldn’t it be necessary to run qvm-volume extend ${qube_name}:private ${tempsize} after creating the qube?

Looking at disposables created by the script, it seems unnecessary:

user@dom0:~ > qvm-volume info disp2479:private | grep size
size               2147483648
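(2147483648 bytes = 2 GiB, i.e. the private volume already gets the default size without an explicit extend.)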

Regarding ephemeral:

The man page of qvm-volume says:

          · ephemeral - should the volume be encrypted with an ephemeral key? This
            can be enabled only on a volume with save_on_stop=False and
            snap_on_start=True - which is only the volatile volume. When set, it
            provides a bit more anti-forensics protection against an attacker with
            access to the LUKS disk key. In the majority of use cases, it only
            degrades performance due to the additional encryption level.

My observations:

user@dom0:~ > qvm-volume info disp2479:volatile snap_on_start
False
user@dom0:~ > qvm-volume config disp2479:volatile snap_on_start True
Invalid property: snap_on_start

Why doesn’t this work? I don’t know. Without it, ephemeral seems inapplicable.

Also:

What exactly is “a bit more”? If someone has the LUKS key, how exactly will this help? Without a detailed explanation, it sounds very unclear.

Once you create a qube with the patched kernel version, you may encounter issues when trying to run commands.

I have not done that patching. It seems too risky to make this script depend on another set of scripts, especially as they touch dom0 and the benefits are questionable.

Considering all that, it seems better to remove the ephemeral option for now. We can always add it back when there is clarity and it works predictably, without the need for patches.

What do you think?

@whoami

You might want to add an option to disable memory balancing.

What does this disabling improve?

@whoami
I’ve noticed that indeed everything works perfectly when I close the DispVM applications as well as the DispVMs directly. On the other hand, if I turn off my laptop WITHOUT closing the DispVMs, then I still have traces: running “qvm-pool” always shows “ram_pool_dispXXXX”, and I’m forced to do a

qvm-pool remove ram_pool_dispXXX

and delete the other traces in /var/ and its subfolders…
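To remove several leftover pools at once, something like this might work (a sketch; it assumes leftover pool names match ram_pool_disp* and that bare qvm-pool lists pool names in the first column):

# Sketch: remove all leftover RAM pools after an unclean shutdown.
for pool in $(qvm-pool | awk '/^ram_pool_disp/ {print $1}'); do
    qvm-pool remove "${pool}"
done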
Edit: All the “guest-dispXXX.log” files stay in /var/log/xen/console, and the “qmp-proxy-dispXXXX-dm.log” files are not erased from /var/log/xen/ and /var/log/libvirt/libxl/ :confused:

Edit 2: My bad! It was just me making a mistake with my shortcuts! lol. But… (there is always a but lol) “~/.config/menus/applications-merged/user-qubes-vm-directory-dispXXXX.menu” isn’t erased, and sometimes “/run/qubes/audio-control.dispXXX” remains :slight_smile:
“qvm-pool” is alright too :wink:

@qubist

I think it’s always a good thing to have more options :slight_smile:

@Tezeria

What do you mean by “turn off”? Graceful shutdown or pull the power cord?

I think it’s always a good thing to have more options :slight_smile:

Only if they are useful.


Hi @qubist

I meant “shutdown” :slight_smile:

You know I’m not an expert, but I think there is something to add here:

# Create void symlinks to prevent log saving
logdir='/var/log'
logfiles=("${logdir}/libvirt/libxl/${qube_name}.log"
          "${logdir}/qubes/guid.${qube_name}.log"
          "${logdir}/qubes/qrexec.${qube_name}.log"
          "${logdir}/qubes/qubesdb.${qube_name}.log"
          "${logdir}/xen/console/guest-${qube_name}.log")

And here:

# Leave no trace on file system
sudo rm -rf "${tempdir}"
for file in "${logfiles[@]}"; do
        sudo rm -rf "${file}" "${file}.old"
done

Right? :slight_smile: (I’m taking advantage of this topic to try to improve! lol)
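Building on that, the extra leftovers reported earlier in the thread could be added to the same cleanup (a sketch; the exact file names below are examples and should be verified on your system):

# Sketch: additional leftovers reported in this thread (qmp-proxy logs,
# menu entries, audio-control socket); names are examples - verify them.
extra_files=("${logdir}/xen/qmp-proxy-${qube_name}-dm.log"
             "${logdir}/libvirt/libxl/qmp-proxy-${qube_name}-dm.log"
             "${HOME}/.config/menus/applications-merged/user-qubes-vm-directory-${qube_name}.menu"
             "/run/qubes/audio-control.${qube_name}")

for file in "${extra_files[@]}"; do
        sudo rm -rf "${file}" "${file}.old"
done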


It could improve performance. When you run Qubes OS with low memory, you might want to control which qube gets how much memory; for example, you might run into memory issues when waking your PC from suspend. There are many guides in this community that use maxmem=0, and you will find many examples of system qubes and disposable qubes with memory balancing disabled. It is a commonly used setting with minimal templates.

My proposed lines of code allow both limiting the max memory and disabling the auto-managed memory share. If you see no need for disabling the memory balancer, you might still want to allow users to set the max memory for disposable RAM VMs (which is the default in my proposal).

PS: Next, I will add a vcpus option, which also makes sense to me.

@Tezeria

I will look into that.
