@Resende
This version works well, but the DispVMs folder remains in the HOME folder and not in a subfolder (which would be more convenient and clearer). And I don’t dare touch the scripts in this topic anymore! lol.
In any case, good job
EDIT: your version doesn’t remove the ram_pool of DispVMs. The most successful script is that of @qubist
I’ve just rebased the code with qubist’s version, and now it successfully removes DispVMs ram_pool.
The DispVMs folder will be created in the “/tmp/shadow-qube/” directory instead of $HOME.
Additionally, I added the --ephemeral <true,false> option. You can use it in combination with the -k <kernel-ephemeral> option to encrypt content in RAM.
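For example, a combined invocation could look like this (just a sketch; the script name XXX.sh, the template and the command are placeholders, as elsewhere in this thread):
bash XXX.sh --template whonix-ws-dvm -k <kernel-ephemeral> --ephemeral true -c torbrowser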
means that wait will be executed only if qvm-run returns a non-zero exit status, i.e. only in case of killing the VM (or another error); on success it won’t run at all. Considering that wait returns the termination status of the process it waited for, this is somewhat confusing. Actually, now that I am looking at all this, I question whether wait is necessary at all, as it is normally needed when a process is explicitly run in the background - something we don’t have here. I am removing it.
Since the goal is to clean up regardless of qvm-run’s exit status, it seems to me more correct to use:
set +e                                       # temporarily allow qvm-run to fail
qvm-run "${qube_name}" "${command_to_run}"   # run the command; ignore its exit status
set -e                                       # re-enable exit-on-error for the cleanup below
[...]
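For what it’s worth, another common idiom for guaranteeing cleanup is a trap, which fires on any exit path (just a sketch, not what the script uses; cleanup here is a hypothetical function that would remove the qube and its ram_pool):
trap cleanup EXIT                                   # run cleanup on any exit, including Ctrl+C
qvm-run "${qube_name}" "${command_to_run}" || true  # don't abort the script if the command fails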
I have updated the OP with the set +e approach above, including (with some editing) your kernel and ephemeral options. One thing I notice though (hence the EXPERIMENTAL note): using ephemeral makes it impossible for me to run programs in the target qube. I don’t know why. I can start neither torbrowser nor xterm. The qube starts and cleanup works, but that’s all.
Could you clarify why that is so? (I have no experience with ephemeral)
Hello, thank you for your hard work, but I have some trouble with the script and I think I’m not the only one.
First of all, I’m using Qubes 4.2.0-rc4 (testing) and the script only works with the Whonix template.
The script can’t run other commands on other templates; for example, I get “command failed with code 127” if I try to launch a disposable terminal. If I try to start a flatpak command with the script, like “bash XXX.sh --template X-dvm -c flatpak run org.mozilla.firefox -n mirage-firewall”, the script says the command failed with code 1.
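Side note: exit code 127 conventionally means “command not found” in the shell, and if the script’s -c option expects a single argument, the multi-word flatpak command may need quoting (just a guess on my side):
bash XXX.sh --template X-dvm -c "flatpak run org.mozilla.firefox" -n mirage-firewall   # assumes -c takes one quoted string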
Something very strange I’ve noticed: the ephemeral option works, but not every time. If I add “ephemeral true”, the Whonix VM will start, but that’s all; I don’t see torbrowser opening. BUT if I remove the ephemeral option, the Whonix VM starts easily without trouble. Another strange thing is that yesterday the ephemeral option was working perfectly on the Whonix VM… I don’t understand what happened between yesterday and today.
@qubist
DispVMs are created in the $HOME/tmp folder, but with the $HOME/tmp/dispXXXX/appvms/dispXXXX/volatile.img layout instead of $HOME/tmp/dispXXXX/appvms/volatile.img as in previous versions. Is that what you wanted to do?
Other than that, everything is ok!
Same thing for me.
I’m using Qubes 4.1.2 and everything is ok. I don’t use flatpak so I can’t test it…
I am using Qubes 4.1.2 only; I have never tested the script on any other version.
[…]
I am getting the same for your flatpak command, but I can run other commands on different templates fine. If the command ends with a status code different from zero, this means the command itself fails for some reason. The script merely creates a disposable VM and runs a command in it. To my mind, if the command fails, it should fail regardless of whether the VM was created manually or through the script.
[…]
I have no experience with ‘ephemeral’. I added this only because it was suggested by others. As I noted, I also face issues with it, hence it is marked EXPERIMENTAL.
[…]
I have not used the scripts of others. They were only used as a starting point (reference). If you find an actual bug with the current script, please describe it (steps to reproduce, expected result, actual result).
Local change I forgot to undo before sending it here, sorry about that.
You’re right.
I misunderstood the reason there was a wait in the early version of your script, as it seemed to prevent the removal of remnants without user interaction (Ctrl+C). I thought it would make more sense to wait for user interaction before starting to delete anything in the scenario where qvm-run returns a non-zero exit status, but I don’t see when that would be the case.
One more thing, I’ve noticed that the default private storage size is 2GB. In this case, wouldn’t it be necessary to run qvm-volume extend ${qube_name}:private ${tempsize} after creating the qube?
Once you create a qube with the patched kernel version, you may encounter issues when trying to run commands. Usually this happens because the qubes-mount-dirs service fails while attempting to locate /dev/xvdb for mounting /rw. However, the ephemeral kernel creates /dev/mapper/dmhome for this purpose, so /dev/xvdb is already in use. You can resolve this by cloning the TemplateVM you need to use with the ephemeral kernel and editing the following (a scripted version of these edits is sketched after the list):
at /usr/lib/qubes/init/mount-dirs.sh line:10
-- if [ -e /dev/xvdb ] ; then mount /rw ; fi
++ if [ -e /dev/mapper/dmhome ] ; then mount /rw ; fi
at /usr/lib/qubes/init/setup-rwdev.sh line:9
-- dev=/dev/xvdb
++ dev=/dev/mapper/dmhome
at /usr/lib/qubes/init/setup-rw.sh line:3
-- dev=/dev/xvdb
++ dev=/dev/mapper/dmhome
at /etc/fstab line:5
-- /dev/xvdb /rw auto noauto,defaults,discard,nosuid,nodev 1 2
++ /dev/mapper/dmhome /rw auto noauto,defaults,discard,nosuid,nodev 1 2
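A sketch of the same edits as a one-shot script (run as root inside the cloned TemplateVM; it assumes the four files match the stock paths quoted above):
# replace every /dev/xvdb reference with /dev/mapper/dmhome
for f in /usr/lib/qubes/init/mount-dirs.sh \
         /usr/lib/qubes/init/setup-rwdev.sh \
         /usr/lib/qubes/init/setup-rw.sh \
         /etc/fstab; do
    sed -i 's|/dev/xvdb|/dev/mapper/dmhome|g' "$f"
done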
The script is now running fine after a reboot. I guess this was a bug from Qubes testing, maybe? Now I can run other commands on different templates without trouble. I’m so happy! Only the ephemeral option and the flatpak command are still not working.
And thank you for everything you do to try to help!
One more thing, I’ve noticed that the default private storage size is 2GB. In this case, wouldn’t it be necessary to run qvm-volume extend ${qube_name}:private ${tempsize} after creating the qube?
Looking at disposables created by the script, it seems unnecessary (2147483648 bytes is exactly 2 GiB, so the private volume already gets the default size):
user@dom0:~ > qvm-volume info disp2479:private | grep size
size 2147483648
Regarding ephemeral:
The man page of qvm-volume says:
· ephemeral - should the volume be encrypted with an ephemeral key? This can be enabled only on a volume with save_on_stop=False and snap_on_start=True - which is only the volatile volume. When set, it provides a bit more anti-forensics protection against an attacker with access to the LUKS disk key. In the majority of use cases, it only degrades performance due to the additional encryption level.
Why doesn’t this work? I don’t know. As long as it doesn’t, ephemeral seems inapplicable.
Also:
What exactly is “a bit more”? If someone has the LUKS key, how exactly will this help? Without a detailed explanation it sounds very unclear.
Once you create a qube with the patched kernel version, you may encounter issues when trying to run commands.
I have not done that patching. It seems too risky to make this script depend on another set of scripts, especially as they touch dom0 and the benefits are questionable.
Considering all that, it seems better to remove the ephemeral option for now. We can always add it when there is clarity and it works predictably, without the need of patches.
@whoami
I’ve noticed that indeed, everything works perfectly when I close the DispVM applications as well as the DispVMs directly. On the other hand, if I turn off my laptop WITHOUT closing the DispVMs, then I still have traces: running “qvm-pool” still shows “ram_pool_dispXXXX”, and I’m forced to do a
qvm-pool remove ram_pool_dispXXXX
and delete the other traces in /var/ and its sub-folders…
Edit: All “guest-dispXXXX.log” files stay in /var/log/xen/console, and the “qmp-proxy-dispXXXX-dm.log” files are not erased in /var/log/xen/ and in /var/log/libvirt/libxl/ …
Edit2: My bad! It’s just me who made a mistake with my shortcuts! lol. But… (there is always a but, lol) “~/.config/menus/applications-merged/user-qubes-vm-directory-dispXXXX.menu” isn’t erased, and sometimes “/run/qubes/audio-control.dispXXXX” remains as well.
“qvm-pool” is all right too.
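For reference, a manual cleanup sketch for these leftovers after a hard power-off (run in dom0; dispXXXX is a placeholder, and the /run path may need root):
qvm-pool remove ram_pool_dispXXXX
rm -f ~/.config/menus/applications-merged/user-qubes-vm-directory-dispXXXX.menu
sudo rm -f /run/qubes/audio-control.dispXXXX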
It could improve performance. When you run Qubes OS with low memory, you might want to control which qube gets how much memory; e.g. sometimes you might run into memory issues when waking your PC from suspend. There are many guides in this community which use maxmem=0. You will find many examples of system qubes and disposable qubes which have memory balancing disabled. It is a commonly used setting with minimal templates.
My proposed lines of code allow both limiting the max memory and disabling the auto-managed memory share. If you see no need for disabling the memory balancer, you might want to at least allow users to set max memory for disposable RAM VMs (which is the default in my proposal).
PS: Next, I will add a vcpus option, which also makes sense to me.
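As an illustration, this is roughly what the proposal boils down to (a sketch with assumed values; ${qube_name} as used elsewhere in the script):
qvm-prefs "${qube_name}" maxmem 4096   # example: cap auto-managed memory at 4096 MB
qvm-prefs "${qube_name}" maxmem 0      # alternatively: disable memory balancing entirely
qvm-prefs "${qube_name}" vcpus 2       # the proposed vcpus option would set this property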