[Guide] Custom Install Disk Partitioning LVM Layout in Qubes 4.2

Continuing the discussion from Help: Custom Installation (LVM Layout & Config) on 4.1:

After doing this setup multiple times now, I have written a far more detailed guide on how to manually set up your entire disk from scratch, including manual selection of encryption ciphers & iterations.

This is useful if you are building your system in a way that differs from the standard Qubes install, for example in a dual-boot setup, or to leave some space free at the end of the disk for future use, since by default Qubes will consume the entire disk you install to.

# Manual Disk Partitioning for Custom Installation of Qubes OS 4.2
# The following commands can be run from the Qubes OS GUI installer, from rescue mode, or from a 3rd-party boot tool/OS that runs from memory.
# Step 0 assumes it's being run from the Qubes installer.

# Step 0: Switch to the terminal from the installer
# Press CTRL + ALT + F2 to access the terminal

# Step 1: View current partitions
lsblk                                                 # List all block devices

# Step 2: Reinitialize the drive as GPT to wipe all partitions
gdisk /dev/nvme0n1  # Open gdisk on the target drive
# Press 'o' to create a new empty GPT partition table
# Press 'n' to create a new partition
# 1. For Partition 1:
#    - Enter: '1' (Partition Number)
#    - First sector: Press 'Enter' (default)
#    - Last sector:  '+922M' (900MB size)    # making this a little larger than the default 600M since it holds 3 kernels
#    - Hex code: 'EF00 (EFI System)' - use 0700 for "basic data" (blank partition, Qubes will format it later)
# Press 'n' to create another partition
# 2. For Partition 2:
#    - Enter: '2' (Partition Number)
#    - First sector: Press 'Enter' (default)
#    - Last sector:  '+1024M' (1GB size)
#    - Hex code: '8300 (Linux Filesystem for /boot)'
# Press 'n' to create another partition
# 3. For Partition 3:
#    - Enter: '3' (Partition Number)
#    - First sector: Press 'Enter' (default)
#    - Last sector:  Press 'Enter' (use remaining space)
#    - Hex code: '8E00 (Linux LVM)'
# Press 'w' to write changes and exit
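# OPTIONAL: the same layout can also be created non-interactively with sgdisk
# (a rough sketch equivalent to the gdisk steps above - double-check the target device before running):
#sgdisk --zap-all /dev/nvme0n1                        # new empty GPT (same as gdisk 'o')
#sgdisk -n 1:0:+922M -t 1:ef00 /dev/nvme0n1           # Partition 1: EFI System
#sgdisk -n 2:0:+1024M -t 2:8300 /dev/nvme0n1          # Partition 2: /boot (Linux filesystem)
#sgdisk -n 3:0:0 -t 3:8e00 /dev/nvme0n1               # Partition 3: Linux LVM, rest of the disk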

# Step 3 PREP:
cryptsetup benchmark                                  # If performance is important, run this to determine the fastest
                                                      # encryption protocol specific to your system and change
                                                      # the below -c command to match it

# Step 3: Set up LUKS encryption with AES-XTS and Argon2id
cryptsetup luksFormat -c aes-xts-plain64 -s 512 --pbkdf argon2id -i 5000 -y --use-random /dev/nvme0n1p3
# This uses AES-XTS with a 512-bit key, the Argon2id KDF, and entropy from /dev/random (--use-random)
# Note: '-i' is the short form of '--iter-time', so '-i 5000' means the KDF is benchmarked to take ~5 seconds
# on your CPU (not 5,000 iterations); to force an exact iteration count use '--pbkdf-force-iterations' instead
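# Optional sanity check (a sketch): confirm what actually got written to the LUKS2 header
cryptsetup luksDump /dev/nvme0n1p3                    # check the Cipher, PBKDF and Time cost/Iterations fields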

# Step 4: Open encrypted partition
cryptsetup open /dev/nvme0n1p3 luks                   # Open encrypted partition as 'luks'

# Step 5: Set up LVM on encrypted partition
pvcreate /dev/mapper/luks                             # Create physical volume
vgcreate qubes_dom0 /dev/mapper/luks                  # Create volume group
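# Optional sanity check before carving volumes (a sketch):
pvs && vgs                                            # confirm the PV on /dev/mapper/luks and the qubes_dom0 VG exist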

# Step 6: Create logical volumes
lvcreate -n swap -L 4096M qubes_dom0                  # Create 4096MB (4GB) swap volume

# Step 7: Create thin pools
lvcreate -T -L 40960M qubes_dom0/root-pool            # Create 40960MB (40GB) thin pool for root
lvcreate -T -l 99%FREE qubes_dom0/vm-pool             # Create the 'vm-pool' thin pool with 99% of the available free space
#  ^ 99% to leave a useful amount of free space for possible repair operations in future, feel free to use 100% instead

# Step 8: Check the size of the new vm-pool
lvs -o +lv_size --units m                             # Get the exact size of vm-pool (needed for optional step 9b)

# Step 9: Create logical volumes from thin pools
lvcreate -V40960M -T qubes_dom0/root-pool -n root     # Create 40960MB (40GB) root logical volume
# Step 9b (optional): Create a thin volume named 'vm' spanning the vm-pool
#lvcreate -V<exact_size>M -T qubes_dom0/vm-pool -n vm  # Replace <exact_size> with the size obtained from 'lvs'
#  ^ this line may not be necessary, have left it here for reference in case it is
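# One way (a sketch) to capture the exact vm-pool size from Step 8 into a shell variable for the optional command above:
#VMPOOL_MB=$(lvs --noheadings --units m -o lv_size qubes_dom0/vm-pool | tr -d ' m<')
#echo "$VMPOOL_MB"                                    # then e.g.: lvcreate -V"${VMPOOL_MB}"M -T qubes_dom0/vm-pool -n vm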

# Step 10: Format the 'root' and 'vm' volumes
mkfs.ext4 /dev/qubes_dom0/root                        # Format 'root' logical volume as EXT4
#mkfs.ext4 /dev/qubes_dom0/vm                          # Format 'vm' logical volume as EXT4
#  ^ above line is from old instructions - this is a thin pool managed by Qubes and as such is not intended to be formatted at all

# Step 11: Prepare the swap volume
mkswap /dev/qubes_dom0/swap                           # Mark the swap volume as swap space

# Step 12: Deactivate all logical volumes
vgchange -an qubes_dom0                               # Deactivate all logical volumes in the volume group

# Step 13: Close the LUKS device
cryptsetup close luks                                 # Close the LUKS device

# Step 14: Return to installer
# Press CTRL + ALT + F6 to return to the Qubes OS installer GUI
# Recommended to reboot system before resuming installation

Great!
I'm still struggling with "the perfect" partitioning, pondering between BTRFS, EXT4, LVM2, …
Question: you mention 922M = 900MB, but I can't figure out what that first "M" refers to, as none of the units I found works out to 900 MB

Metadata, Overprovisioning and Overlapping:

  • The VG can be set to up to 100% of the PV (in my case => 720GB)
  • The LV should be no larger than 99% of the VG (in my case => 79GB + 635GB)
  • The LV-T should not be larger than 40% to 80% of the LV (in my case => 254GB to 507GB; average 381GB)
    • I don’t plan to add more to it, so I’ll use 80% (64GB + 508GB)
  • The V-LV should not overprovision more than 130% of the LV-T (in my case =>
    – 64 x 130% = 84GB, limited by the PV/LV → 79GB
    – 508 x 130% = 661GB, limited by the PV/LV → 635GB)

The rule of thumb is about 1GB of metadata per 100,000 V-LVs; in my setup I have about 30 to 50 V-LVs, say 80 max, which works out to roughly 0.76MiB (0.8MB), i.e. about 782KiB (800KB) ==> so some 80MiB (85MB) of metadata will be plenty, enough to cover ~7,850 V-LVs

=> lvcreate --type thin-pool -L 400G --poolmetadatasize 80M -n my-thinpool myvg
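To double-check afterwards that the pool actually got the metadata size requested, something like this should work (a sketch, assuming the same 'myvg' / 'my-thinpool' names):
=> lvs -o lv_name,lv_size,lv_metadata_size myvg		# the MSize column shows the pool's metadata size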

I’m glad someone is taking this on.

I wanted to give HALF of my SSD to Qubes once, and leave the other half empty. The customizable setup wanted me to individually specify all of the internal partitions, and I didn’t want to change any of them. But unfortunately I had no idea what the normal settings were.

I ended up having to partition the SSD in two halves by a different method (gparted on another system); then it was easy enough to just let Qubes have the first of the two partitions.


I ended up doing it all using the GUI.
Select all three NVMe
Did not use the "advanced custom (Blivet-GUI)" but the "custom" mode
On NVMe0n1: Created /boot/efi + /boot + Thin LVM dom2 64GB (size=automatic) swap + Thin LVM dom2 24GB /tmp + Thin LVM dom2 24GB /var/tmp + Thin LVM dom2 80GB /var/log
Then I can't create any other partition on any of the other disks …

So I started over:
On NVMe0n1: Created /boot/efi + /boot
On NVMe1n1: Created Thin LVM dom0 80GB /; selected RAID1; size=auto (dom0-root)
On NVMe1n1: Created Thin LVM dom1 424GB /var/lib/qubes; selected RAID1; size=auto (dom1-var_lib_qubes)
On NVMe0n1: Created Thin LVM dom2 64GB swap + Thin LVM dom2 24GB /tmp + Thin LVM dom2 24GB /var/tmp + Thin LVM dom2 80GB /var/log size=auto (dom2-var_lib_log)

Worked to the second screen !
Selected options as usual (default Debian, sys-net disposable, whonix updates) Started process then an error pops-up:

Dom0 error (on dom0)
['usr/sbin/lvcreate', '-l', '90%FREE', '--thinpool', 'pool00', 'qubes_dom0'] failed:
stdout: " "
stderr: " Logical volume pool00 already exist in Volume group qubes_root. "

EDIT 24-Aug.
Gave the GUI another try, still not working (once the install started, it gave me an error pop-up window with 200+ lines)

  • I select the standard language
  • I choose my KB mapping
  • I create a user + password
  • I select the three NVMe and go to either (tried both) manual or Blivet
  • I create ESP, Boot, PV+VG+LV for all Sys-Run (Swap, tmp, log, … )
  • I create a RAID, I create a PV, VG, LV, LV-T
  • Only root receives an FS and is mounted at / (vm-pool/vm is left as-is)
    Then start install
    After about 1min, it crashes

The 922 is simply 900 x 1024 = 921,600 (rounded up to 922)


TY, yeah, I figured it out later … I first started with a completely overstretched search, digging into GiB vs GB, MB vs MiB, … way too far, when the answer was right in front of me :roll_eyes:

While you're at it, if you have time, could you please review and critique my post above?

This can definitely help.

I also have a github issue open on the topic of how the instructions for manual partitioning are not that clear (Manual partitioning instructions are unclear, leading to `varlibqubes` and `vm-pool` being allocated in unexpected ways Ā· Issue #9886 Ā· QubesOS/qubes-issues Ā· GitHub)

The problem is that the manual partitioning works in a counter-intuitive way and there aren't any hints in that area to explain what's happening. If you use the automatic tools for the boot section, have another partition you aren't using, and then add the main Qubes partition, Qubes will list it as only taking up part of the space, as if it isn't going to use the whole partition. But that section becomes varlibqubes, and the remaining space that isn't listed becomes vm-pool during the second part of the installation, after the reboot. The ambiguity has probably caused a lot of frustration, and a few simple words in that area could make it easier. That's my view on it.

If someone wants to add to that issue with suggested wording for "help" text that could be put into the manual partitioning GUI, that would be cool, because my suggestion is sort of vague. I think the manual partitioning GUI could be much easier to use if there were better instructions.
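For anyone who wants to see where the space actually ended up after an install, something like this from a dom0 terminal should show it (a sketch; names differ on Btrfs installs):
sudo lvs -o lv_name,lv_size,pool_lv qubes_dom0     # which LVs exist and which thin pool each one lives in
qvm-pool                                           # should list the storage pools Qubes registered (e.g. varlibqubes, vm-pool)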

I'm sorry I'm not really able to give you a lot of feedback here. Despite writing this guide, it took me a while to come up with the initial version, and it's based on numerous revisions of previous work from the earlier 4.0 & 4.1 Qubes releases, as you can see from my previously linked post.

Essentially what I did was do a full default install of Qubes 4.2, then went back out to a command line, loaded up the whole LVM layout and figured out all the commands to duplicate it exactly.

Then I customized it a bit to shrink some of the partitions. Sometimes I like to stick a "backup" OS at the end of the drive, or use that space as extra storage that Qubes doesn't regularly touch, like a backup partition: if Qubes ever needs to be reformatted, I can just toss all the backup files in there.

I honestly know next to nothing about RAID setups, so I can't really be much help with the setup you're trying to create. I'm not an LVM or disk-layout master, just someone who spent enough time hacking out a proper working setup for my main use case and decided to share it with others, since I couldn't find a good guide on this when I needed to figure it out myself.

With all that said, might I suggest pasting your entire script/setup, RAID included, into a couple of AI platforms (I'd recommend Claude & Gemini Pro) - you can use both for free, get feedback on your setup, and compare the responses of each AI. They're often pretty good at finding technical errors in things like this, and both have gotten MUCH better at it over the past year.

Claude Sonnet 4 & Google Gemini 2.5 Pro (the latter can be used extensively for free at Google's "aistudio")

Def come back and let us know what your successful setup ends up being, I’m sure it’ll be useful to someone looking for something similar one day.


TY very much !
I did the same thing, I installed "full auto" on the little NVMe, then went back to it in a terminal and 'cmd X' Print; 'cmd Y' P, and took pictures of the layout

That's how I realized some of it just can't be done with the GUI, especially the LVM thin inside the LVM; the VG (in "classic" mode, not in "Blivet") and also the first-layer LVM can be created, but the install always freezes.
An input from @51lieal would be very helpful

I've spent days (at work, haha, not so busy during the shutdown, so …) tweaking and correcting my version of your script; hopefully this will be the right one, and hopefully it will help others make yet another fork of it.
I’ll post it to give an idea, but it’s still WIP

PS: Just doing this, I feel almost like I'm not an eternal newbie! haha

!! Version 6: The partitioning is all good, but I still can’t install as the return to the GUI freezes
I’m open to keep going in TUI but:
I’ve tried /etc/sbin/install.py, failed.
I’ve tried /etc/sbin/anaconda, failed
I’ve tried chmod, failed
Here is my post for Full TUI install

# Manual Disk Partitioning for Custom Installation of Qubes OS 4.2.x on RAID.

## Introduction:
// This guide covers a manual partitioning strategy for a system with three NVMe drives (two in RAID), 128GB of RAM, and an NVIDIA A2000 GPU.
// The Qubes OS GUI installer (Anaconda) can be picky with complex LVM configurations. To ensure a robust install and avoid conflicts, this guide uses the **QOS Rescue Mode**, with all commands executed from the **Text-based User Interface (TUI)**.

## Objectives
- Maximise RAM available for Xen and VMs
- Keep heavy-R/W system resources (swap, temp, log, cache) separate, on a smaller (cheaper) disk
- Prepare infrastructure for ZFS (snapshots, compression)
- Robustness and redundancy with RAID1
- Full use of nvidia A2000

REM # STEPS.X: 1) Blocks => 2) RAID => 3) LUKS => 4) LVM => 5) LVM-T => 6) FS => 7) Text-Install

## STEP.0: Initial Setup and Device Discovery
// Upon booting the Qubes OS installer, select **"Rescue mode"** (or installer + ctrl-alt-f2) to enter the TUI.
// Check keyboard layout (Crucial for later, i.e: your password):
=> localectl status
# If it's not the one you want, try:
=> loadkeys <code>				# console keymap names like: fr, fr-latin1, es, de-latin1 (list them with: localectl list-keymaps)
# Warning! Even if a keymap is listed, the TUI might not be able to load it, and the mapping will stay at the default US layout
// View current partitions
=> lsblk						# List all block devices
// Note your device names (e.g., `/dev/nvme0n1`, `/dev/nvme1n1`, `/dev/nvme2n1`).

## STEP.1: Partitioning using gdisk, starting with reinitializing the GPT table
// OPTIONAL: CLEAN disk to avoid conflicts
// If you've had previous installations that left behind metadata (LVM, RAID, etc.),
=> sgdisk --zap-all /dev/nvme[1,2]n1	# -Z/--zap-all wipes GPT+MBR; sgdisk -z (--zap) is roughly equivalent to gdisk 'o'
# If that doesn't work (it didn't for me) you might need wipefs, or a live OS (HBCD, MediCat, Kali, Debian, etc.) to overwrite the disk
=> wipefs -a -f /dev/nvme[0,1,2]n1	# -a for "all" and -f for "force"

// IMPORTANT: gdisk's default 'M' means MiB (mebibytes, binary), not MB (megabytes/gigaoctets, decimal)
// For example, 1 GB = 953.6743164 MiB = 0.93132257462 GiB; 1 GiB = 1024 MiB
// Simple spreadsheet formula for rounding: =if((MiB-ROUNDDOWN(MiB))>0.3,ceiling(MiB,2),mround(MiB,1))
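// OPTIONAL (a sketch): if GNU coreutils 'numfmt' is available in your rescue shell, it can do these conversions for you
=> numfmt --from=si --to=iec-i 1G		# prints "954MiB"  (1 GB ≈ 953.67 MiB, rounded up)
=> numfmt --from=si --to=iec-i 720G		# prints "671GiB"  (720 GB ≈ 670.55 GiB, rounded up)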

# STEP.1A: NVMe0n1, 256GB (238.41858GiB)
=> gdisk /dev/nvme0n1			# Open gdisk on the target drive
=> o							# to create a new empty GPT partition table
=> n							# to create a new partition  (NVMe0n1p1)
=> 1 (or Enter)					# to select Partition 1:
#    - First sector: Press 'Enter' (default)
#    - Last sector:  '+954M'	# 954 MiB = 0.93132 GiB ≈ 1 GB; a little larger than the default 600MiB since it holds 3+ kernels
#    - Hex code: 'EF00' (EFI System)
=> n							# to create another partition  (NVMe0n1p2)
=> 2 (or Enter)					# to select Partition 2:
#    - First sector: Press 'Enter' (default)
#    - Last sector:  '+1908M'	# 1.8626GiB = ~2GB
#    - Hex code: '8300' (Linux Filesystem for /boot)
=> n							# to create another partition  (NVMe0n1p3)
=> 3 (or Enter) 				# to select Partition 3:
#    - First sector: Press 'Enter' (default)
#    - Last sector:  '+141144M'	# 141144 MiB = 137.84 GiB ≈ 148 GB (56+24+16+40+8+4)
#    - Hex code: '8E00' (Linux LVM)
=> n							# to create another partition  (NVMe0n1p4)
=> 4 (or Enter) 				# to select Partition 4:
#  - First sector: Press 'Enter' (default)
#  - Last sector:  Press 'Enter' (use all remaining)
#  - Hex code: 'bf00' (Solaris/ZFS)' # (reserved for ZFS one day)
=> w							# to write changes and exit

# STEP.1B: NVMe1n1 & NVMe2n1, 2000GB / 2TB (1862.64515GiB)
=> gdisk /dev/nvme1n1			# Open gdisk on the target drive
=> o							# to create a new empty GPT partition table
=> n							# to create a new partition  (NVMe1n1p1)
// NOTE: If you want qubes-root (80GB) and qubes-vm (640GB) each on its own MD device (e.g. md101, md102), then these will have to be 2 separate partitions.
=> 1 (or Enter) 				# to select Partition 1:
#    - First sector: Press 'Enter' (default)
#    - Last sector:  '+686646M'	# 670.55225GiB = ~720GB (80+640)
#    - Hex code: 'fd00' (Linux RAID)
=> n							# to create a new partition  (NVMe1n1p2)
=> 2 (or Enter) 				# to select Partition 2:
#  - First sector: Press 'Enter' (default)
#  - Last sector:  Press 'Enter' (use all remaining)
#  - Hex code: 'bf00 (Solaris/ZFS)' #(reserved for ZFS one day)
=> w							# to write changes and exit
// do the exact same (Mirror) on the second disk:
=> gdisk /dev/nvme2n1			# Open gdisk on the target drive
=> o							# to create a new empty GPT partition table
=> n							# to create a new partition  (NVMe2n1p1)
=> 1 (or Enter) 				# to select Partition 1:
#    - First sector: Press 'Enter' (default)
#    - Last sector:  '+686646M'	# 670.55225GiB = 720GB (80+640)
#    - Hex code: 'fd00' (Linux RAID)
=> n							# to create a new partition  (NVMe2n1p2)
=> 2 (or Enter) 				# to select Partition 2:
#  - First sector: Press 'Enter' (default)
#  - Last sector:  Press 'Enter' (use all remaining)
#  - Hex code: 'bf00 (Solaris/ZFS)' #(reserved for ZFS one day)
=> w							# to write changes and exit
=> lsblk						# To print the new partition table with names 

// NOTE: LVM on top of mdadm RAID is recommended as simpler and more flexible; LVM2's built-in RAID can be used for specific needs => not covered here
## STEP.2: Create RAID1 (mirroring) array:
## STEP.2A: RAID1 to be used as the base for our LVM dom0
=> mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme1n1p1 /dev/nvme2n1p1
# mdadm: defaulting to version 1.2 metadata
# mdadm: array /dev/md10 started
// Confirm the RAID array is active and includes the correct devices.
=> lsblk /dev/md10				# To check/confirm details
// NOTE: If you chose to have "each its own" partitions in step 1B, step 2 has to be done again for the second partition.
## STEP.2B: If you created a second partition for a future ZFS setup, create a second RAID array for them.
=> mdadm --create /dev/md20 --level=1 --raid-devices=2 /dev/nvme1n1p2 /dev/nvme2n1p2
# mdadm: defaulting to version 1.2 metadata
# mdadm: array /dev/md20 started
// Confirm the RAID array is active and includes the correct devices.
=> lsblk /dev/md10*	(or md20*)	# Same with more details
//	OPTIONAL: If you choose to have md (RAID) partitions inside a single block partition, the following operation is for you, ie: md10p1, md10p2 => NOT covered here.
//	OPTIONAL: Create partitions on RAID (for dom0)
//	OPTIONAL: => parted -s -a optimal -- /dev/md10 mklabel gpt
//	OPTIONAL: First partition (for root)
//	OPTIONAL: => parted -s -a optimal -- /dev/md10 mkpart primary   0% 11%	# That makes /dev/md10p1
//	OPTIONAL: Second partition (for VMs)
//	OPTIONAL: => parted -s -a optimal -- /dev/md10 mkpart primary 11% 100%	# That makes /dev/md10p2
//	OPTIONAL: => parted -s -- /dev/md10 align-check optimal 1
//	OPTIONAL: Create partitions on RAID (for ZFS for later)
//	OPTIONAL: => parted -s -a optimal -- /dev/md20 mklabel gpt
//	OPTIONAL: => parted -s -a optimal -- /dev/md20 mkpart primary   0% 100%
## STEP.2C: Saving the RAID config
// CRITICAL step. It saves the RAID config so the array is auto reassembled on reboot, preventing boot failures.
=> mdadm --detail --scan >> /etc/mdadm.conf	# You might need to adapt the path, i.e: external USB
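// OPTIONAL sanity check (a sketch): healthy RAID1 members show up as [UU] in mdstat (an initial resync may still be running)
=> cat /proc/mdstat				# both md10 (and md20 if created) should list 2 devices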

REM: NOTE: LVM on LUKS recommended // LUKS on LVM is less common and for specific uses (i.e: diff. keys per volumes) => not covered here 
# OPTIONAL: command to assess the fastest encryption cipher on your system; adjust -c/--cipher below to match
=> cryptsetup benchmark	(-c <cipher>)
# We use strong, modern encryption parameters for robustness and performance.
# Note: '-i' is just the short form of '--iter-time', so '-i 8888' and '--iter-time 8888' both mean ~8.9 seconds
# of KDF benchmarking (the resulting iteration count depends on the CPU); to pin an exact iteration count
# regardless of CPU, use '--pbkdf-force-iterations <count>' instead
## STEP.3: Crypto Setup LUKS ('luks_boot'*, luks_root, luks_srun) based on 8 core & 128GB RAM:
## STEP.3A: Crypto Setup LUKS 'LUKS_ROOT'
=> cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 --hash sha3-512 --pbkdf argon2id --iter-time 4444 -y /dev/md10
=> cryptsetup open /dev/md10 LUKS_ROOT			# Open encrypted partition as 'luks_root'
# OPTIONAL: dd if=/dev/zero of=/dev/mapper/LUKS_ROOT status=progress	// This fills the volume with zeros; the outside world sees it as random data, i.e. it protects against disclosure of usage patterns.
=> cryptsetup luksDump /dev/md10 # To verify LUKS header info
=> cryptsetup status LUKS_ROOT	# To get the status of the newly created LUKS
## STEP.3B: Crypto Setup LUKS 'LUKS_SRUN'		# If you want to have SysRun (Swap, temp, log, cache, ) encrypted
# We use smaller key sizes here since this partition is not a primary target for data compromise.
=> cryptsetup luksFormat -c aes-xts-plain64 -s 256 --hash sha256 --pbkdf argon2id -i 1500 -y /dev/nvme0n1p3
=> cryptsetup open /dev/nvme0n1p3 LUKS_SRUN		# Open encrypted partition as 'luks_sys-run'
# OPTIONAL: dd if=/dev/zero of=/dev/mapper/LUKS_SRUN status=progress	// This fills the volume with zeros; the outside world sees it as random data, i.e. it protects against disclosure of usage patterns.
// ** STEP.3C: Crypto Setup LUKS 'LUKS_BOOT' (NOT recommended; Qubes doesn't like it much, leading to many potential difficulties (brick!))
// OPTIONAL: If you want to have /boot encrypted (LUKS1 only) !! Not covered here, at your own risk ;)
// => cryptsetup luksFormat --type luks1 -c aes-xts-plain64 --key-size 256 --pbkdf pbkdf2 -i 1500 -y /dev/nvme0n1p2
// => cryptsetup open /dev/nvme0n1p2 LUKS_BOOT		# Open encrypted partition as 'luks_boot'
// OPTIONAL: dd if=/dev/zero of=/dev/mapper/LUKS_BOOT status=progress	// This fills the volume with zeros; the outside world sees it as random data, i.e. it protects against disclosure of usage patterns.
REM: NOTE: Strongly suggest doing a LUKS header backup after first boot into dom0; see STEP.9

## STEP.4: LVM; hierarchy: PV => VG => LV 
# STEP.4A, PV create (physical volume)
// if you have encrypted boot: => pvcreate /dev/mapper/LUKS_BOOT
// LUKS_ROOT sits on top of the RAID (md10), which is seen as a plain block device, hence /dev/mapper/LUKS_XXXX is used instead of a partition name.
=> pvcreate /dev/mapper/LUKS_ROOT				# for Root+VM (/var/lib/qubes)
=> pvcreate /dev/mapper/LUKS_SRUN				# for System-Run (swap, temp, log, cache, )
// Verify that the PVs have been created successfully.
=> pvs
# STEP.4B, VG create (volume group)
// if you have encrypted boot: => vgcreate qubes_boot /dev/mapper/LUKS_BOOT
// We create a VG for dom0 (Qubes OS core) and one for dom1 (I/O-heavy system volumes).
=> vgcreate qubes_dom0  /dev/mapper/LUKS_ROOT
=> vgcreate qubes_dom1 /dev/mapper/LUKS_SRUN
// Verify the new VGs.
=> vgs
# STEP.4C, LV create (logical volume) "Thick"
// if encrypted => lvcreate -n boot	-L 2048M  qubes_boot	# Create 2GB boot volume in VG boot
// These volumes for system directories will be on the 'qubes_dom1' VG. They are "thick" because their full size is allocated upfront.
=> lvcreate -n swap		-L 53406M qubes_dom1	# Create 52.1528GiB (56GB) swap volume in VG dom1
=> lvcreate -n tmp		-L 22888M qubes_dom1	# Create 22.3512GiB (24GB) temp1 volume in VG dom1
=> lvcreate -n var-tmp	-L 15260M qubes_dom1	# Create 14.90116GiB (16GB) temp2 volume in VG dom1
=> lvcreate -n var-log	-L 38148M qubes_dom1	# Create 37.2529GiB  (40GB) log volume in VG dom1
=> lvcreate -n var-cache -L 7630M qubes_dom1	# Create  7.45058GiB ( 8GB) cache volume in VG dom1
=> lvcreate -n var-lib-xen -l 100%FREE qubes_dom1 # Create  3.xxxGiB ( 4GB) xen volume in VG dom1
// Verify that all "thick" volumes were created.
=> lvs qubes_dom1
# STEP.4D, LV create (logical volume) "Thin Pool"
=> lvcreate -T -L 76294M qubes_dom0/root-pool	# Create 74.5058GiB (~80GB) Thin-pool for root in VG dom0
=> lvcreate -T -l 100%FREE qubes_dom0/vm-pool	# Create 596.04375GiB (~640GB) Thin-pool for VM in dom0
// Verify the creation of the thin pools.
=> lvs qubes_dom0
## STEP.4.E: Create "Thin" Logical Volumes from the Pools
// First, find the actual size of the 'root-pool' and 'vm-pool'
=> lvs -o +lv_size --units m					# Get the exact size of pool (needed for next step)
// Then create the 'root' TLV (Thin Logical Volume) from the pool
// As thin LVM uses some space for metadata, the size will differ slightly.
=> lvcreate -V <size from 4E.1>M -T qubes_dom0/root-pool -n root	# Create ~79GB root logical volume
=> lvcreate -V <size from 4E.1>M -T qubes_dom0/vm-pool -n vm		# Create ~639GB vm logical volume (exact size in '4E')
// Final check of all LVM volumes.
=> lvs

## STEP.5: Making a FileSystem for/in each volume
// We use '-m 0' to force no reservation of space for superuser
// We use '-E lazy_itable_init=0' to initialize the inode tables right away
// We add 'lazy_journal_init=0' for EXT4 to initialize the journal right away
=> mkfs.fat -F 32		/dev/nvme0n1p1	# for ESP
=> mkswap /dev/qubes_dom1/swap
# Format Sys-Run volumes with ext2 to minimize write operations and extend NVMe lifespan.
=> mkfs.ext2 -m 0 -E lazy_itable_init=0 /dev/qubes_dom1/tmp	# EXT2 to minimise write operations
=> mkfs.ext2 -m 0 -E lazy_itable_init=0 /dev/qubes_dom1/var-tmp	# EXT2 to minimise write operations
=> mkfs.ext2 -m 0 -E lazy_itable_init=0 /dev/qubes_dom1/var-log	# EXT2 to minimise write operations
# Format other volumes with ext4 for journaling and better data integrity.
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0	/dev/nvme0n1p2	# for /boot
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom1/var-cache
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom1/var-lib-xen
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom0/root
// => mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom0/vm
#  ^ above line is not needed - 'vm' lives in a thin pool managed by Qubes and as such is not intended to be formatted at all

REM # STEP.6: Mount volumes and prepare for installation
## STEP.6A: Create the top-level mount point.
=> mkdir -p /mnt/qubes
# Mount the 'root' volume. This becomes the foundation of your new system.
=> mount /dev/qubes_dom0/root /mnt/qubes
## STEP.6B: NOW, create the complete directory hierarchy on your new filesystem.
=> mkdir /mnt/qubes/boot					# FS exists in NVMe0n1p2
=> mkdir /mnt/qubes/boot/efi				# FS exists in NVMe0n1p1
=> mkdir /mnt/qubes/tmp						# FS exists in dom1
=> mkdir -p /mnt/qubes/var/tmp				# FS exists in dom1
=> mkdir /mnt/qubes/var/log					# FS exists in dom1
=> mkdir /mnt/qubes/var/cache				# FS exists in dom1
=> mkdir -p /mnt/qubes/var/lib/xen			# FS exists in dom1
// => mkdir /mnt/qubes/var/lib/qubes		# FS does NOT exist in dom0
// ^ the above will be managed by Qubes and as such is not intended to be created or mounted here

## STEP.6C: Now mount the volumes into their correct locations (qubes_dom1 = all the I/O intensive stuff)
=> mount /dev/nvme0n1p2 /mnt/qubes/boot
=> mount /dev/nvme0n1p1 /mnt/qubes/boot/efi
=> mount /dev/qubes_dom1/tmp /mnt/qubes/tmp
=> mount /dev/qubes_dom1/var-tmp /mnt/qubes/var/tmp
=> mount /dev/qubes_dom1/var-log /mnt/qubes/var/log
=> mount /dev/qubes_dom1/var-cache /mnt/qubes/var/cache
=> mount /dev/qubes_dom1/var-lib-xen /mnt/qubes/var/lib/xen
// Mount VM storage (thin pool - this is where all VMs will be stored)
// This volume will not be mounted, as it doesn't have FS and will be managed by Qubes installer GUI
// => mount /dev/qubes_dom0/vm /mnt/qubes/root/var/lib/qubes
# STEP.6D: Activate swap
swapon /dev/qubes_dom1/swap
# STEP.6E: Verify everything is mounted correctly
# === Mount verification ===
=> df -h | grep /mnt/qubes
# === Swap verification ===
=> cat /proc/swaps
# === Block devices overview ===
=> lsblk
# STEP.6F: Update system configuration files for persistence
# === Updating RAID configuration ===
=> mdadm --detail --scan | tee -a /etc/mdadm.conf
# === LVM volume groups status ===
=> vgchange -ay qubes_dom0
=> vgchange -ay qubes_dom1
=> lvs; vgs

REM # This is where you should go back (ctrl-alt-F6) to the GUI installer to finalize install
# But as it doesn't work (at least for me, it freezes and crashes) we will keep going in the TUI
## STEP.7: Prepare for the actual GUI-install handover (NO REBOOT!)
# Keep LUKS containers OPEN for GUI detection
# DON'T close LUKS_ROOT and LUKS_SRUN !
# Return to GUI installer (Ctrl+Alt+F6)
# GUI should now detect your LVM thin volumes!

## STEP.8: Emergency Reboot Recovery (if you MUST reboot before GUI handover)
## STEP.8A: The short (dirty) way: 
=> Reboot
=> mdadm --assemble --scan
=> cryptsetup open /dev/md10 LUKS_ROOT
=> cryptsetup open /dev/nvme0n1p3 LUKS_SRUN
# Re-do 6B 6C 6D

## STEP.8B The long and clean way:
## STEP.8B1: Deactivate swap
=> swapoff /dev/qubes_dom1/swap
## STEP.8B2: Unmount all filesystems in reverse order (deepest first)
=> umount /mnt/qubes/var/lib/xen
=> umount /mnt/qubes/var/cache
=> umount /mnt/qubes/var/log
=> umount /mnt/qubes/var/tmp
=> umount /mnt/qubes/tmp
=> umount /mnt/qubes/boot/efi
=> umount /mnt/qubes/boot
=> umount /mnt/qubes
## STEP.8B3: Close LUKS containers
=> cryptsetup close LUKS_ROOT
=> cryptsetup close LUKS_SRUN
## STEP.8B4: Stop RAID arrays (optional, but safer)
=> mdadm --stop /dev/md10
# mdadm --stop /dev/md20  # if you created md20
## STEP.8B5: Now safe to reboot
=> sync
=> reboot		# or shutdown now and restart
## STEP.8C: AFTER REBOOT - Recovery procedure 
# Boot again into installer/rescue mode, then Ctrl+Alt+F2
## STEP.8C1: Reassemble RAID arrays
=> mdadm --assemble --scan
# OR manually if --scan doesn't work:
=> mdadm --assemble /dev/md10 /dev/nvme1n1p1 /dev/nvme2n1p1
## STEP.8C2: Verify RAID status
=> cat /proc/mdstat
=> lsblk		# To check with a global picture
## STEP.8C3: Reopen LUKS containers (you'll need to enter passphrases again)
=> cryptsetup open /dev/md10 LUKS_ROOT
=> cryptsetup open /dev/nvme0n1p3 LUKS_SRUN
## STEP.8C4: Activate LVM volume groups
=> vgchange -ay qubes_dom0
=> vgchange -ay qubes_dom1
## STEP.8C5: Verify LVM status
=> lvs
=> vgs
## STEP.8C6: Recreate mount points
Re-do STEP.6B 6C 6D
## STEP.8C7: Final verification
# === Recovery Status ===
=> cat /proc/mdstat
=> df -h | grep /mnt/qubes
=> cat /proc/swaps
=> lvs; vgs
## STEP.8C8: Update configurations again
=> mdadm --detail --scan | tee -a /etc/mdadm.conf
// RECOVERY COMPLETE; Your setup has been fully restored after reboot
# Return to GUI installer with: Ctrl+Alt+F6

REM # TROUBLESHOOTING NOTES:
# If mdadm --assemble --scan fails:
#   - Try: mdadm --assemble /dev/md10 /dev/nvme1n1p1 /dev/nvme2n1p1
#   - Check: cat /proc/mdstat
#   - Force if needed: mdadm --assemble --force /dev/md10 /dev/nvme1n1p1 /dev/nvme2n1p1
#
# If LUKS containers won't open:	# that goes for LUKS_SRUN (/dev/nvme0n1p3) too
#   - Verify: cryptsetup isLuks /dev/md10
#   - Check: cryptsetup luksDump /dev/md10
#   - Retry with: cryptsetup open --type luks2 /dev/md10 LUKS_ROOT
#
# If LVM won't activate:			# That goes for qubes_dom1 too
#   - Scan: pvscan
#   - Activate: vgchange -ay --select vg_name=qubes_dom0
#   - Check: vgdisplay qubes_dom0

## STEP.9: Post-Install Security (and adjust brightness)
=> xrandr --output eDV-1 --brightness 0.35 --gamma 1.15:0.75:0.65	# Gamma R-G-B depending on time of day
# NOTE: It might be a good idea to do Header backup now
=> cryptsetup luksHeaderBackup /dev/md10 --header-backup-file /root/luks-header-md10.backup
=> cryptsetup luksHeaderBackup /dev/nvme0n1p3 --header-backup-file /root/luks-header-srun.backup

## Extras for my P15, executed in dom0, NOT RECOMMENDED, unless you know, you know ;). FYIO, DYOR.
## STEP.10: For the nvidia A2000: Config Xorg to use 'modesetting' instead of 'nouveau'
# in dom0 check GPU

Gonna try today.
The problem is not so much the "how" (there's plenty of information on the net, especially the man pages), but the specific names to give each LVM, LVM-thin, and sub-LVM volume, as Qubes apparently doesn't like other names much.

Thank you. Can you please answer several questions about partitioning?

  • Do you know why, by default, dom0 uses a 20GiB root (inside the root-pool, which is 20 GiB in R4.2 and 25 GiB in R4.1)? Why should users use 50GiB as you suggest?
  • Why does the Qubes OS installer use 90%FREE for vm-pool? It keeps a lot of space wasted for users. Or is some kind of auto-extending expected to work on that 10% of space? If that's the case, why not set 50%FREE? Or is auto-extending not reliable?
    I do not completely understand the default 90%FREE, but it's been the same for several Qubes OS versions, so it should have some meaning.

I don't know the "official" answers to those questions, but from my experience 20 or 25GB for the root pool is WAY too small. In my script I believe I have it set to 40GB, which should be plenty; 50GB is probably a bit overkill.

When you install new templates, those templates are first downloaded to a temp folder inside dom0 root before being installed. If you try to install multiple templates at once OR a rather large template, you can run into issues where you run out of space and the template simply cannot install at all.

I first ran into this issue years ago on 4.1 and had to figure out how to extend my root pool, which was a huge pain in the ass, so for all my future builds I just make the root pool larger to begin with. 30GB is probably enough to be safe, but I prefer the extra buffer.
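For reference, in case anyone else hits that wall, growing root later is roughly this from a dom0 terminal (a sketch only; adjust names and sizes to your own layout, and make sure the pool actually has room first):
sudo lvextend -L +10G qubes_dom0/root-pool     # grow the thin pool first if it's the limiting factor
sudo lvextend -L +10G qubes_dom0/root          # grow the root thin volume
sudo resize2fs /dev/qubes_dom0/root            # grow the ext4 filesystem to match (works online)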

As to only using 90%FREE, I can only guess on this one, and my guess is that it's related to SSDs. The general recommendation for SSDs is that you never fill them past 90% of their capacity, or else you may run into various filesystem & read/write issues. Setting 90%FREE essentially forces the system to never go beyond that, which is probably a good idea - but I'm content to keep an eye on how much of my disk is in use manually, which is why I have mine set to 99%.

Why not 100%, well as I wrote in the script comments:
# ^ 99% to leave a useful amount of free space for possible repair operations in future, feel free to use 100% instead
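What that looks like in practice for me is just glancing at pool usage from a dom0 terminal every so often, something like (a sketch):
sudo lvs -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0     # Data%/Meta% show how full each pool is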

Well, maybe this should be posted as a GitHub ticket, if that's the case and if it affects all users? The current default of 20 GiB should be changed then, shouldn't it?

Interesting idea, but currently all SSDs already have extra reserved capacity (something like 25%) available for dying memory blocks. So essentially keeping 10% free is in fact losing 10% in exchange for making the SSD last longer. It's not an approach for everyone.

@marmarek can you please elaborate on this 2 questions:

  • Why does the current Anaconda installer mod of Qubes OS use 90%FREE for vm-pool? For recovery situations? Or SSD health? 10% of a 2TiB drive is a big deal, and doing so is similar to the 5% root-reserved space on ext4, which is a quite outdated approach and a loss of free space.
  • Why is dom0 only 20GiB even for huge SSDs? Maybe it should be reconsidered, taking into account the fact that all the custom partitioning tutorials allocate more for root?

SSD over-provisioning is for dying cells.
The 10% of free space left in normal space is for cell rotation (wear leveling) and cell cache operations.

SSDs have something like 15% reserved that is not accessible or visible to the user right from the start. There is no reason to keep manual free space in most cases. But if a user wants more reserve for rotation, they can use the vendor's service software to increase that 15% to up to 30% or so, and still have no obligation to keep free space inside LVM. So I hope not; losing 10% for cell reserve would be an outdated and suboptimal decision.

I think the 10% is left for LVM expansion in case of over-provisioning and running out of space. But in that case, 10% at current SSD sizes is TOO much. Even 1% would solve this problem.

There is.
Try it for yourself.
Run a speed test on a non-full SSD and then on one that is 98% full.
You need to leave 10% free for cache and normal cell rotation (wear leveling).
The 15% reserve is for faulty cell replacement and is not used at all until a fault occurs.

OK, I did not test it, but let's say you are right.
Then why keep the 10% free inside LVM and LUKS? TRIM has to work through these layers for the controller to know that space is free.
Why not keep the 10% free outside the partitions instead?
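For what it's worth, whether discards actually make it down through LVM and LUKS can be checked with something like this (a sketch):
lsblk --discard          # non-zero DISC-GRAN / DISC-MAX on every layer means discards pass through
sudo fstrim -av          # trim all mounted filesystems that support it
The LUKS layer only passes discards if it was opened with the allow-discards option (e.g. rd.luks.options=discard on dracut-based systems), so that is worth verifying too.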

I know that manufacturers differ, but I think this is not so. The reserved space is used for wear levelling, garbage collection, and error correction. It's used all the time, not only when there is a fault.

I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.