šŸ›” Qubes OS live mode. dom0 in RAM. Non-persistent Boot. RAM-Wipe. Protection against forensics. Tails mode. Hardening dom0. Root read‑only. Paranoid Security. Ephemeral Encryption

Can I unplug the SSD that live Qubes is installed on after I select the load-to-RAM option from the GRUB menu?

And can I attach another SSD where persistent qubes and data live? If so, how do I use them and make sure modifications are written to that SSD, instead of the qube being loaded into RAM?

These sorts of questions can help novice users start enjoying the benefits of live Qubes OS. Thanks if anyone knows.

@RamLovingPenguin Hi! The default script creates only 3 VMs in RAM: dom0 and two DVMs. The other VMs - sys-VMs, various appVMs, and templates - are stored on the SSD. Therefore, you need to copy all VMs into the varlibqubes pool, otherwise you won’t be able to use most of them. But you will need a lot of memory on the device, ideally at least 64 GB.
Be careful, though: creating new sys‑usb and sys‑whonix VMs can be tricky for beginners because you have to edit global config settings.
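For reference, copying an existing VM into the RAM-backed pool can be done with qvm-clone’s pool option. A sketch (the VM name `work` is just an example, not from the script):

    # In dom0: clone an on-disk AppVM into the varlibqubes pool so it runs from RAM.
    qvm-shutdown --wait work
    qvm-clone -P varlibqubes work work-ram
    qvm-start work-ram
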

There’s also an important nuance with the overlay: unlike zram‑live mode, overlay does not copy the entire disk. It only directs the currently needed data to the upper writable layer. Disconnecting the disk will prevent live VMs that you haven’t yet started in overlay mode from launching. Consequently, zram mode is more convenient for a scenario where you want to disconnect the SSD.
I just made a few changes to the default script that simplify things when the SSD is disconnected in live mode (umount $NEWROOT/live/cow; umount /mnt). It doesn’t affect how the live modes work (it even enhances security). P.S. I can’t edit the second comment with the other script right now; I will add it to the first post later.

If you plan to keep the whole Qubes OS in RAM and work comfortably with that setup, it’s better to install Qubes OS on a Btrfs filesystem. I tested overlay‑live mode on Btrfs 3 months ago and it worked great - Btrfs uses the varlibqubes pool by default, so all my VMs were in RAM (I believe zram mode should also work on Btrfs). But you will need a lot of memory on the device.

You can attach new disks. Just remember that now you have two pools for running VMs:

  • varlibqubes
  • vm‑pool

If a VM resides in the varlibqubes pool, its data won’t be saved to the SSD. If it’s in vm‑pool, its data will persist.
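To check which pool a given VM’s storage lives in, something like this should work in dom0 (the VM name is illustrative):

    # Show the pool backing a VM's private volume:
    qvm-volume info work:private | grep -i pool
    # List all configured storage pools:
    qvm-pool list
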

2 Likes

Thank you!

I get a PCI reset error (ā€œunable to strict resetā€ configuration error) for sys-net, sys-firewall, and other domU qubes, so I cannot start any qubes except dom0 and sys-usb. I tried both the overlay and zram live methods. Do you know what the cause might be? I have 12 GB of RAM, and the task manager shows no bump in RAM usage while starting the aforementioned appVMs.

Thank you.

@reseterror Hi! 12 GB of RAM is very little for these scenarios. I can only suggest changing the 80% RAM value to 95 (or 100) in the script:

    # 80% total_memory
    DOM0_MAX_MB=$((system_total_mb * 80 / 100))

:point_down:

    DOM0_MAX_MB=$((system_total_mb * 95 / 100))

then remove amnesic‑live‑dvm (keep only whonix‑live‑dvm) and delete old dom0 kernels (do it carefully and make a backup). That will raise the memory available in live modes to 95% (or 100%) and slightly reduce the size of the dom0 live disk.
You might get lucky and be able to work with 12 GB. Otherwise you will need to buy more RAM (for example, look for private sellers in your city instead of buying expensive RAM from a store).
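For reference, here is how those percentages work out on a 12 GB machine (a sketch of the script’s arithmetic only, not the script itself):

```shell
#!/bin/sh
# Sketch: how much RAM the dom0 live disk may use at 80% vs 95%.
system_total_mb=12288                        # 12 GB expressed in MiB

default_mb=$((system_total_mb * 80 / 100))   # integer arithmetic: 9830 MiB
raised_mb=$((system_total_mb * 95 / 100))    # integer arithmetic: 11673 MiB

echo "80%: ${default_mb} MiB"
echo "95%: ${raised_mb} MiB"
```

So raising the value frees up roughly 1.8 GiB more for the dom0 live disk on such a device.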

12 GB of RAM is a bit low even for the default Qubes boot. You need swap in persistent mode with the default boot, but my script currently disables swap in all modes. I will fix that tomorrow so that swap is disabled only in live modes. It will be very useful for devices with 12-16 GB of RAM.

1 Like

How can a PCI reset error have anything to do with the amount of RAM installed on a system, though?

If this problem isn’t present in the default boot, then it’s likely related to insufficient disk space (RAM is being used as the disk for dom0). I don’t think these errors will occur if you increase the device’s memory even to just 16 GB. Two friends of mine haven’t experienced such issues on devices with 16 GB of RAM.
But you still need to be careful and monitor the size of dom0 with 16 GB of RAM. 24 GB is already comfortable.

Very thankful for the thoughtful reply. Seeing now that a fully live Qubes OS needs that much RAM, and is thus an endeavour for a smaller group of people, my goal is this: one SSD where live Qubes OS is installed with all the default qubes (with no desire to remove it anymore), and another SSD with a hidden VeraCrypt container holding all my personal, detailed, hidden qubes with persistence. On shutdown there would just be a standard live Qubes OS and a second SSD with a VeraCrypt volume containing decoy data. My concern with this setup, though: since the SSD containing the live Qubes OS install remains plugged in, is there any risk that using the other SSD would expose data about the hidden qubes on the hidden VeraCrypt volume to the default SSD that live Qubes OS is installed on? If not, this would be the ultimate convenience-to-maximum-utility setup for most people using this script.

Ultimately, at this point one could even use just one SSD with two partitions: the first containing live Qubes OS and the second containing a VeraCrypt hidden volume where all the hidden, persistent data qubes live. I’m trying to gauge what problems such a setup might create.

@RamLovingPenguin Data leakage is impossible if you use a VeraCrypt disk only in live appVMs (from the varlibqubes pool):

  • Your disk isn’t used in the live dom0 or in amnesic‑live VMs - you are working on a layer/copy that resides in RAM.
  • Your root filesystem is mounted read‑only, so nothing can be written to the SSD.

Just avoid using Veracrypt in AppVMs from the VM pool, and everything will work fine.

For paranoid protection, you can run this command in dom0:

    qvm-pool set vm-pool -o ephemeral_volatile=True

It adds a bit of anti‑forensic protection for all of your VMs, reducing the chance of a password leaking to the SSD even if you accidentally mix up an AppVM.
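To confirm the option took effect, `qvm-pool` can print the pool’s settings (my assumption is that `ephemeral_volatile` appears among the listed properties):

    # In dom0: enable per-boot ephemeral keys for volatile volumes, then inspect the pool.
    qvm-pool set vm-pool -o ephemeral_volatile=True
    qvm-pool info vm-pool
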

Do not use VeraCrypt hidden volumes with SSDs. SSDs use TRIM and wear leveling, and the flash translation layer records writes from logical block addresses to physical block addresses. A forensic examiner could see many different versions of data for the same logical block, or see writes to certain blocks in a statistically significant pattern. This defeats the whole purpose of hidden VMs/OSes: it would be evidence, if not proof, that you have a hidden VM you want to keep secret. Don’t do this if you want plausible deniability. SSDs are not to be used with VeraCrypt hidden volumes.

3 Likes

I agree with you. There’s another method for paranoid storage of hidden AppVMs: create a new live‑AppVM inside the live‑dom0, configure it, then back this VM up to any disk or USB stick. When you need the hidden AppVM, insert the disk and restore it - it takes no more than a minute. The archive on the disk will look like an ordinary archive and can contain any files.
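The backup/restore steps above can be sketched with the standard Qubes tools (the VM name, mount path, and archive name here are examples, not from the post):

    # In dom0: back up a configured live-AppVM to an attached disk.
    qvm-backup --compress /run/media/user/backup-disk hidden-vm

    # Later, restore it by pointing at the generated archive:
    qvm-backup-restore /run/media/user/backup-disk/qubes-backup-2025-01-01
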

If you want to hide the archive itself, you can use steganography with ZuluCrypt: launch ZuluCrypt in the live‑AppVM, download a large video or book, then create a hidden steganographic volume in ZuluCrypt and move the hidden AppVM archive into it.
When the hidden AppVM is required, decrypt the steganographic volume in ZuluCrypt inside the live‑AppVM, then restore the backup.

In the end, no password leaks are possible, and live mode erases all traces of the hidden AppVM’s creation, backup, and steganography.

2 Likes

Qubes and hidden VMs don’t mix on SSDs. Use boot-to-RAM systems; there is no deniability with SSDs. Also, steganography is best done with pre-existing encryption: video/music files don’t contain gigabytes of random data, but encrypted volumes/containers (not individual files) do, so VeraCrypt naturally favors plausible deniability. You guys have poor opsec even if you are intelligent.

Also, what version did you use for live mode: 4.2.3, 4.2.4, or 4.3?

I used it on versions 4.2.4 and 4.3.

1 Like

The guide has been updated!

  1. Amnesic‑live‑dvms now have a paranoid level of anti‑forensic security! The kernel parameters init_on_free=1 and init_on_alloc=1 have been added to amnesic‑live‑dvms!
    Now no application inside these VMs can leak your passwords or private data. Xen wipes memory before allocating it to VMs and after a VM’s shutdown; these kernel parameters additionally wipe memory before apps start and after they exit. These settings increase CPU load by 5-15%, but since they are used only in amnesic‑live VMs, they won’t affect overall Qubes OS performance. So they’re included in the default script.

  2. Now swap is disabled only in live modes! You can use swap in the default Qubes OS boot if your device has only 16 GB of RAM. We no longer affect the default boot of Qubes - all changes happen only in live modes.
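For anyone who wants to apply the memory-wiping kernel parameters from item 1 to a qube by hand, the `kernelopts` preference should do it (the VM name is illustrative):

    # In dom0: append the wiping parameters to a single DVM's kernel command line.
    current=$(qvm-prefs amnesic-live-dvm kernelopts)
    qvm-prefs amnesic-live-dvm kernelopts "${current} init_on_free=1 init_on_alloc=1"
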

And now there’s only one script. I can’t edit the second comment, so the second script is now obsolete and unnecessary. Ultra‑security is now included in the default script.

4 Likes

You are right: NAND-based memory (SSD/NVMe) can reveal a hidden volume made by VeraCrypt or Shufflecake because of fstrim, but it’s still better than no deniability at all, depending on the adversary and threat model.

For the most paranoid, the discontinued Intel Optane technology fixes this problem (for anyone interested in this kind of setup with hidden qubes, take a look; IMO it’s the best compromise between speed and anti-forensics).

1 Like

Today I studied ephemeral encryption for DVMs and ran many tests. Tomorrow the script will undergo major changes. A new option for anti‑forensic DVMs will be added - the most efficient one, which doesn’t require much memory. My tests were successful - it worked on all DVMs based on the Fedora, Debian, Whonix, and Kicksecure templates.

1 Like

Major script update!

DVMs now feature ephemeral encryption for the volatile, private, and root volumes! You are now protected from forensic analysis even before dom0 is rebooted! You can now work in the default vm-pool without needing large amounts of memory - amnesic DVMs can run even on devices with as little as 8-12 GB of RAM!

Qubes OS has long included support for ephemeral encryption for the volatile volume. Later, this mechanism was extended to redirect the root volume into an ephemeral layer:
    qvm-volume config VMNAME:root rw False
However, the private volume remained a challenge - arguably the most valuable source of data for forensic analysis, as it serves as the main working volume within appVMs.
I conducted numerous tests to find a reliable method to redirect the private volume into an ephemeral layer without breaking appVM functionality, ensuring that applications could still launch and operate normally. Ultimately, the only stable solution was to move the directories /home/user and /usr/local to /tmp and use bind mounts to operate within isolated copies. This approach isolates /home/user and /usr/local and enables operation entirely in RAM (tmpfs), seamlessly masking any changes.

    [user@disp3499 ~]$ df /usr/local
    Filesystem     1K-blocks   Used Available Use% Mounted on
    tmpfs            2097152 351812   1745340  17% /usr/local
    [user@disp3499 ~]$ df /home/user
    Filesystem     1K-blocks   Used Available Use% Mounted on
    tmpfs            2097152 351812   1745340  17% /home/user
    [user ~]% mount | grep ' /home/user '
    tmpfs on /home/user type tmpfs (rw,nosuid,nodev,size=2097152k,nr_inodes=1048576,inode64)

By using a bind mount, /home/user is redirected into a tmpfs-backed directory, so /home/user and /tmp/home-user are simply two paths to the same RAM-backed data: the file is logically visible in both locations but physically stored only in memory and never written to persistent storage.
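A minimal sketch of how such a setup might look (this is my illustration of the idea, not the author’s actual script; the paths and the 2G size mirror the post):

```shell
#!/bin/sh
# Keep /home/user and /usr/local entirely in RAM via tmpfs + bind mounts.
# Intended to run as root at VM startup, e.g. from /rw/config/rc.local.

for dir in /home/user /usr/local; do
    ram="/tmp/ram$(echo "$dir" | tr / -)"    # e.g. /tmp/ram-home-user
    mkdir -p "$ram"
    mount -t tmpfs -o size=2G,nosuid,nodev tmpfs "$ram"
    cp -a "$dir/." "$ram/"                   # seed the RAM copy from disk contents
    mount --bind "$ram" "$dir"               # all further writes stay in RAM
done
```

After the bind mount, both paths resolve to the same tmpfs-backed data, which matches the `df` output above.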

As a result, appVMs, DVMs, and DVM templates keep /home/user and /usr/local fully in memory and persist no modifications. Therefore, all saved files and settings will disappear after shutting down the appVM or DVM template! If you wish to customize a browser, you must do so before running the script, or temporarily comment out the commands in /rw/config/rc.local, reboot the DVM template, perform your customization, then uncomment the commands again.
If you need to increase the size of the private storage (e.g. to download a large file), add more RAM to the appVM / DVM template in Qube Manager and then modify this line in /rw/config/rc.local:

    mount -o remount,size=2G /home/user

However, don’t set it to 100% of the VM’s max memory.

This approach greatly improves memory efficiency and usability, since the volatile layer operates in the appVM’s own memory rather than in dom0. When dom0 runs in overlay mode, you will still see 100% of its storage space available even on devices with as little as 8 GB of RAM - the overlay keeps only active data in memory, preventing dom0 from freezing regardless of how many amnesic DVMs are launched.

The previously used varlibqubes pool relied on an outdated driver, which lacked support for discard (TRIM), causing appVMs to continuously grow in size and consume excessive memory. Ephemeral encryption for the volatile volume now provides a substantial boost in forensic resistance - ensuring that no private data from a terminated DVM can be accessed even if dom0 remains booted and decrypted.

I will keep working on it. It may not be a perfect scenario yet, but it works great - in the other scenarios I tried, applications didn’t start. For ultra‑paranoid security, use DVMs in the varlibqubes pool - I’ve also added ephemeral encryption for that pool.

@deeplow Hi. I can no longer edit the topic title. Since I added ephemeral encryption, could you please add ā€œephemeral encryptionā€ to the topic title?

2 Likes

Done.