!! Version 6: The partitioning is all good, but I still can’t install as the return to the GUI freezes
I’m open to continuing in the TUI, but:
I’ve tried /etc/sbin/install.py, failed.
I’ve tried /etc/sbin/anaconda, failed.
I’ve tried chmod, failed.
Here is my post for a full TUI install:
# Manual Disk Partitioning for Custom Installation of Qubes OS 4.2.x on RAID.
## Introduction:
// This guide describes a manual partitioning strategy for a system with three NVMe drives (two in RAID), 128GB of RAM, and an NVIDIA A2000 GPU.
// The Qubes OS GUI installer (Anaconda) can be picky with complex LVM configurations. To ensure a robust install and avoid conflicts, this guide uses the **QOS Rescue Mode**, with all commands executed from the **Text-based User Interface (TUI)**.
## Objectives
- Maximise RAM available for Xen and VMs
- Separate system resources (swap, temp, log, cache) with heavy R/W onto a smaller (cheaper) disk
- Prepare infrastructure for ZFS (snapshots, compression)
- Robustness and redundancy with RAID1
- Full use of the NVIDIA A2000
REM # STEPS.X: 1) Blocks => 2) RAID => 3) LUKS => 4) LVM => 5) LVM-T => 6) FS => 7) Text-Install
## STEP.0: Initial Setup and Device Discovery
// Upon booting the Qubes OS installer, select **"Rescue mode"** (or installer + ctrl-alt-f2) to enter the TUI.
// Check the keyboard layout (crucial later, e.g. for typing your passphrase):
=> localectl status
# If it's not the one you want, try:
=> loadkeys <code> # codes like: es_ES; es_MX; es_AR; ca_ES; ca_FR; fr_BE; fr_CA; fr_CH; fr_LU
# Warning! Even if the layout is present in /usr/share/locale/*, the TUI might not be able to load it, so the mapping will fall back to the default 'en_US'
// View current partitions
=> lsblk # List all block devices
// Note your device names (e.g., `/dev/nvme0n1`, `/dev/nvme1n1`, `/dev/nvme2n1`).
## STEP.1: Partitioning using gDisk, starting with reinitializing the GPT table
// OPTIONAL: CLEAN the disks to avoid conflicts
// If previous installations left behind metadata (LVM, RAID, etc.):
=> sgdisk -Z /dev/nvme0n1 # -Z (--zap-all) wipes GPT and MBR; repeat for each disk. 'sgdisk -z' (--zap) is roughly equivalent to gDisk 'o'
# If that doesn't work (it didn't for me) you might need wipefs, or a live system (HBCD, MediCat, Kali, Debian, etc.) to overwrite the disks
=> wipefs -a -f /dev/nvme[0,1,2]n1 # -a for "all" and -f for "force"
// IMPORTANT: gDisk's default 'M' suffix means MiB (mebibytes, binary), not GB (gigabytes, decimal)
// For example, 1GB = 953.6743164MiB = 0.93132257462GiB; 1 GiB = 1024 MiB
// Simple spreadsheet formula for rounding: =IF((MiB-ROUNDDOWN(MiB))>0.3,CEILING(MiB,2),MROUND(MiB,1))
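// The same rounding rule can be sketched as a small shell helper (a hypothetical `gb_to_mib` function, assuming awk is available; not from the original guide):

```shell
# Hypothetical helper implementing the rounding rule above:
# decimal GB -> binary MiB for gdisk's '+<N>M' syntax.
gb_to_mib() {
    awk -v gb="$1" 'BEGIN {
        mib  = gb * 1000^3 / 1024^2          # 1 GB = 10^9 B; 1 MiB = 2^20 B
        frac = mib - int(mib)
        if (frac > 0.3) {                    # round up to the next even MiB
            r = int(mib / 2) * 2
            if (r < mib) r += 2
            print r
        } else {                             # otherwise round to the nearest MiB
            printf "%d\n", mib + 0.5
        }
    }'
}

gb_to_mib 1     # -> 954    (the ESP size used below)
gb_to_mib 2     # -> 1908   (/boot)
gb_to_mib 148   # -> 141144 (the LVM partition)
```

The outputs match the `+954M`, `+1908M`, and `+141144M` figures used in the gdisk dialogues below.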
# STEP.1A: NVMe0n1, 256GB (238.41858GiB)
=> gdisk /dev/nvme0n1 # Open gdisk on the target drive
=> o # to create a new empty GPT partition table
=> n # to create a new partition (NVMe0n1p1)
=> 1 (or Enter) # to select Partition 1:
# - First sector: Press 'Enter' (default)
#                       - Last sector: '+954MiB'     # 0.93132GiB = ~1GB, a little larger than the default 600MiB to leave room for 3+ kernels
# - Hex code: 'EF00' (EFI System)
=> n # to create another partition (NVMe0n1p2)
=> 2 (or Enter) # to select Partition 2:
# - First sector: Press 'Enter' (default)
# - Last sector: '+1908M' # 1.8626GiB = ~2GB
# - Hex code: '8300' (Linux Filesystem for /boot)
=> n # to create another partition (NVMe0n1p3)
=> 3 (or Enter) # to select Partition 3:
# - First sector: Press 'Enter' (default)
#                       - Last sector: '+141144M'    # 137.835741GiB = ~148GB (56+24+16+40+8+4)
# - Hex code: '8E00' (Linux LVM)
=> n # to create another partition (NVMe0n1p4)
=> 4 (or Enter) # to select Partition 4:
# - First sector: Press 'Enter' (default)
# - Last sector: Press 'Enter' (use all remaining)
#                       - Hex code: 'bf00' (Solaris/ZFS) # (reserved for ZFS one day)
=> w # to write changes and exit
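// For reference, the same dialogue can be scripted non-interactively with sgdisk. This is only a sketch (a hypothetical `partition_nvme0` helper, not from the original guide); it is DESTRUCTIVE, so double-check the device before uncommenting the call:

```shell
# Hypothetical non-interactive equivalent of the gdisk dialogue above.
# DESTRUCTIVE: wipes and repartitions the given device.
partition_nvme0() {
    local dev="${1:?usage: partition_nvme0 /dev/nvme0n1}"
    sgdisk --zap-all "$dev"                                    # like gdisk 'o'
    sgdisk -n 1:0:+954M    -t 1:EF00 -c 1:"EFI System" "$dev"
    sgdisk -n 2:0:+1908M   -t 2:8300 -c 2:"boot"       "$dev"
    sgdisk -n 3:0:+141144M -t 3:8E00 -c 3:"LVM"        "$dev"
    sgdisk -n 4:0:0        -t 4:BF00 -c 4:"ZFS"        "$dev"  # rest of disk
    sgdisk -p "$dev"                                           # print the result
}
# partition_nvme0 /dev/nvme0n1   # uncomment once the device is confirmed
```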
# STEP.1B: NVMe1n1 & NVMe2n1, 2000GB / 2TB (1862.64515GiB)
=> gdisk /dev/nvme1n1 # Open gdisk on the target drive
=> o # to create a new empty GPT partition table
=> n # to create a new partition (NVMe1n1p1)
// NOTE: If you want qubes-root (80GB) and qubes-vm (640GB) each on its own MD array (e.g. MD101; MD102), then this will have to be two separate partitions.
=> 1 (or Enter) # to select Partition 1:
# - First sector: Press 'Enter' (default)
# - Last sector: '+686646M' # 670.55225GiB = ~720GB (80+640)
# - Hex code: 'fd00' (Linux RAID)
=> n # to create a new partition (NVMe1n1p2)
=> 2 (or Enter) # to select Partition 2:
# - First sector: Press 'Enter' (default)
# - Last sector: Press 'Enter' (use all remaining)
#                       - Hex code: 'bf00' (Solaris/ZFS) # (reserved for ZFS one day)
=> w # to write changes and exit
// do the exact same (Mirror) on the second disk:
=> gdisk /dev/nvme2n1 # Open gdisk on the target drive
=> o # to create a new empty GPT partition table
=> n # to create a new partition (NVMe2n1p1)
=> 1 (or Enter) # to select Partition 1:
# - First sector: Press 'Enter' (default)
# - Last sector: '+686646M' # 670.55225GiB = 720GB (80+640)
# - Hex code: 'fd00' (Linux RAID)
=> n # to create a new partition (NVMe2n1p2)
=> 2 (or Enter) # to select Partition 2:
# - First sector: Press 'Enter' (default)
# - Last sector: Press 'Enter' (use all remaining)
#                       - Hex code: 'bf00' (Solaris/ZFS) # (reserved for ZFS one day)
=> w # to write changes and exit
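// Instead of re-typing the whole gdisk dialogue for the second disk, the GPT can be replicated with sgdisk. A hypothetical `mirror_gpt` helper (not from the original guide; DESTRUCTIVE on the destination disk):

```shell
# Hypothetical alternative to repeating the gdisk dialogue: replicate the
# GPT from nvme1n1 onto nvme2n1, then randomize GUIDs so the disks differ.
mirror_gpt() {
    local src="${1:?source disk}" dst="${2:?destination disk}"
    sgdisk -R "$dst" "$src"   # copy src's partition table onto dst
    sgdisk -G "$dst"          # give dst fresh disk/partition GUIDs
}
# mirror_gpt /dev/nvme1n1 /dev/nvme2n1   # uncomment once devices are confirmed
```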
=> lsblk # To print the new partition table with names
// NOTE: LVM over mdadm RAID is recommended as simpler and more flexible; LVM2's built-in RAID can be used for specific needs => not covered here
## STEP.2: Create RAID1 (mirroring) array:
## STEP.2A: RAID1 to be used as the base for our LVM dom0
=> mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme1n1p1 /dev/nvme2n1p1
# mdadm: defaulting to version 1.2 metadata
# mdadm: array /dev/md10 started
// Confirm the RAID array is active and includes the correct devices.
=> lsblk /dev/md10 # To check/confirm details
// NOTE: If you chose the "each its own" partitions in STEP.1B, STEP.2 has to be done again for the second partition.
## STEP.2B: If you created a second partition for a future ZFS setup, create a second RAID array for them.
=> mdadm --create /dev/md20 --level=1 --raid-devices=2 /dev/nvme1n1p2 /dev/nvme2n1p2
# mdadm: defaulting to version 1.2 metadata
# mdadm: array /dev/md20 started
// Confirm the RAID array is active and includes the correct devices.
=> lsblk /dev/md10* (or md20*) # Same with more details
// OPTIONAL: If you prefer to carve partitions out of a single RAID device (i.e. md10p1, md10p2), the following operations are for you => NOT covered here.
// OPTIONAL: Create partitions on RAID (for dom0)
// OPTIONAL: => parted -s -a optimal -- /dev/md10 mklabel gpt
// OPTIONAL: First partition (for root)
// OPTIONAL: => parted -s -a optimal -- /dev/md10 mkpart primary 0% 11% # That makes /dev/md10p1
// OPTIONAL: Second partition (for VMs)
// OPTIONAL: => parted -s -a optimal -- /dev/md10 mkpart primary 11% 100% # That makes /dev/md10p2
// OPTIONAL: => parted -s -- /dev/md10 align-check optimal 1
// OPTIONAL: Create partitions on RAID (for ZFS for later)
// OPTIONAL: => parted -s -a optimal -- /dev/md20 mklabel gpt
// OPTIONAL: => parted -s -a optimal -- /dev/md20 mkpart primary 0% 100%
## STEP.2C: Saving the RAID config
// CRITICAL step: it saves the RAID config so the array is auto-reassembled on reboot, preventing boot failures.
=> mdadm --detail --scan >> /etc/mdadm.conf # You might need to adapt the path, e.g. on an external USB
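// Note that `>>` appends a duplicate ARRAY line every time it is run (and this guide runs it again in STEP.6F and STEP.8C8). A hypothetical idempotent variant (my sketch, not from the original guide):

```shell
# Hypothetical idempotent variant: rewrite the ARRAY section of mdadm.conf
# instead of appending, so repeated runs leave no duplicate lines.
update_mdadm_conf() {
    local conf="${1:-/etc/mdadm.conf}"
    { grep -v '^ARRAY ' "$conf" 2>/dev/null   # keep everything but old ARRAY lines
      mdadm --detail --scan                   # append the currently active arrays
    } > "$conf.new" && mv "$conf.new" "$conf"
}
# update_mdadm_conf /etc/mdadm.conf
```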
REM: NOTE: LVM on LUKS is recommended // LUKS on LVM is less common and for specific uses (e.g. different keys per volume) => not covered here
# OPTIONAL: command to benchmark the fastest encryption algorithm on your system:
=> cryptsetup benchmark (-c <cipher>)
# We use strong, modern encryption parameters for robustness and performance.
# Note: -i is short for --iter-time <ms>, so '-i 8888' targets ~8.9s of key-derivation work regardless of iteration count (independent of CPU);
# to fix the iteration count itself regardless of time (unlock speed then depends on CPU), use --pbkdf-force-iterations <count>
## STEP.3: Crypto Setup LUKS ('luks_boot'*, luks_root, luks_srun) based on 8 core & 128GB RAM:
## STEP.3A: Crypto Setup LUKS 'LUKS_ROOT'
=> cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 --hash sha3-512 --pbkdf argon2id --iter-time 4444 -y /dev/md10
=> cryptsetup open /dev/md10 LUKS_ROOT # Open the encrypted device as 'LUKS_ROOT'
# OPTIONAL: dd if=/dev/zero of=/dev/mapper/LUKS_ROOT bs=4M status=progress # Fills the container with zeros; on disk this appears as random data, protecting against disclosure of usage patterns
=> cryptsetup luksDump /dev/md10 # To verify the LUKS header info
=> cryptsetup status LUKS_ROOT # To get the status of the newly created LUKS device
## STEP.3B: Crypto Setup LUKS 'LUKS_SRUN' # If you want SysRun (swap, temp, log, cache) encrypted
# We use smaller key sizes here since this partition is not a primary target for data compromise.
=> cryptsetup luksFormat -c aes-xts-plain64 -s 256 --hash sha256 --pbkdf argon2id -i 1500 -y /dev/nvme0n1p3
=> cryptsetup open /dev/nvme0n1p3 LUKS_SRUN # Open the encrypted partition as 'LUKS_SRUN'
# OPTIONAL: dd if=/dev/zero of=/dev/mapper/LUKS_SRUN bs=4M status=progress # Fills the container with zeros; on disk this appears as random data, protecting against disclosure of usage patterns
// ** STEP.3C: Crypto Setup LUKS 'LUKS_BOOT' (NOT recommended: Qubes doesn't like it much, leading to many potential difficulties (bricking!))
// OPTIONAL: If you want /boot encrypted (LUKS1 only) !! not covered here, at your own risk ;)
// => cryptsetup luksFormat --type luks1 -c aes-xts-plain64 --key-size 256 --pbkdf pbkdf2 -i 1500 -y /dev/nvme0n1p2
// => cryptsetup open /dev/nvme0n1p2 LUKS_BOOT # Open the encrypted partition as 'LUKS_BOOT'
// OPTIONAL: dd if=/dev/zero of=/dev/mapper/LUKS_BOOT bs=4M status=progress # Fills the container with zeros; on disk this appears as random data, protecting against disclosure of usage patterns
REM: NOTE: Strongly suggested to do a header backup after first boot into dom0; see STEP.9
## STEP.4: LVM; hierarchy: PV => VG => LV
# STEP.4A, PV create (physical volume)
// if you have encrypted boot: => pvcreate /dev/mapper/LUKS_BOOT
// LUKS_ROOT sits on the RAID device md10 and is seen as a single block device, hence using /dev/mapper/LUKS_XXXX instead of a partition name.
=> pvcreate /dev/mapper/LUKS_ROOT # for Root+VM (/var/lib/qubes)
=> pvcreate /dev/mapper/LUKS_SRUN # for System-Run (swap, temp, log, cache)
// Verify that the PVs have been created successfully.
=> pvs
# STEP.4B, VG create (volume group)
// if you have encrypted boot: => vgcreate qubes_boot /dev/mapper/LUKS_BOOT
// We create a VG for dom0 (Qubes OS core) and one for dom1 (I/O-heavy system volumes).
=> vgcreate qubes_dom0 /dev/mapper/LUKS_ROOT
=> vgcreate qubes_dom1 /dev/mapper/LUKS_SRUN
// Verify the new VGs.
=> vgs
# STEP.4C, LV create (logical volume) "Thick"
// if encrypted => lvcreate -n boot -L 2048M qubes_boot # Create a 2GB boot volume in VG qubes_boot
// These volumes for system directories will be on the 'qubes_dom1' VG. They are "thick" because their full size is allocated upfront.
=> lvcreate -n swap -L 53406M qubes_dom1 # Create 52.1528GiB (56GB) swap volume in VG dom1
=> lvcreate -n tmp -L 22888M qubes_dom1 # Create 22.3512GiB (24GB) temp1 volume in VG dom1
=> lvcreate -n var-tmp -L 15260M qubes_dom1 # Create 14.90116GiB (16GB) temp2 volume in VG dom1
=> lvcreate -n var-log -L 38148M qubes_dom1 # Create 37.2529GiB (40GB) log volume in VG dom1
=> lvcreate -n var-cache -L 7630M qubes_dom1 # Create 7.45058GiB ( 8GB) cache volume in VG dom1
=> lvcreate -n var-lib-xen -l 100%FREE qubes_dom1 # Create 3.xxxGiB ( 4GB) xen volume in VG dom1
// Verify that all "thick" volumes were created.
=> lvs qubes_dom1
# STEP.4D, LV create (logical volume) "Thin Pool"
=> lvcreate -T -L 76294M qubes_dom0/root-pool # Create 74.5058GiB (~80GB) Thin-pool for root in VG dom0
=> lvcreate -T -l 100%FREE qubes_dom0/vm-pool # Create 596.04375GiB (~640GB) Thin-pool for VM in dom0
// Verify the creation of the thin pools.
=> lvs qubes_dom0
## STEP.4.E: Create "Thin" Logical Volumes from the Pools
// First, find the actual size of 'root-pool' and 'vm-pool'
=> lvs -o +lv_size --units m # Get the exact size of pool (needed for next step)
// Then create the 'root' TLV (Thin Logical Volume) from pool
// As thin LVM uses some space for metadata, the size will differ slightly.
=> lvcreate -V <size from lvs above>M -T qubes_dom0/root-pool -n root # Create ~79GB root logical volume
=> lvcreate -V <size from lvs above>M -T qubes_dom0/vm-pool -n vm # Create ~639GB vm logical volume
// Final check of all LVM volumes.
=> lvs
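// The lookup-then-create dance of STEP.4E can be sketched as one helper (hypothetical `make_thin_lv` function; the VG and pool names are the ones used in this guide):

```shell
# Hypothetical helper for STEP.4E: read the exact pool size with lvs, then
# create a thin volume of that virtual size inside the pool.
make_thin_lv() {
    local vg="$1" pool="$2" name="$3" size
    # lvs prints e.g. '  76294.00m'; strip spaces and the unit suffix
    size=$(lvs --noheadings --units m -o lv_size "$vg/$pool" | tr -d ' m')
    lvcreate -V "${size%%.*}M" -T "$vg/$pool" -n "$name"
}
# make_thin_lv qubes_dom0 root-pool root
# make_thin_lv qubes_dom0 vm-pool   vm
```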
## STEP.5: Making a FileSystem for/in each volume
// We use '-m 0' to reserve no space for the superuser
// We use '-E lazy_itable_init=0' to initialize inodes right away
// For ext4 we add 'lazy_journal_init=0' to initialize the journal right away
=> mkfs.fat -F 32 /dev/nvme0n1p1 # for ESP
=> mkswap /dev/qubes_dom1/swap
# Format Sys-Run volumes with ext2 to minimize write operations and extend NVMe lifespan.
=> mkfs.ext2 -m 0 -E lazy_itable_init=0 /dev/qubes_dom1/tmp # EXT2 to minimise write operations
=> mkfs.ext2 -m 0 -E lazy_itable_init=0 /dev/qubes_dom1/var-tmp # EXT2 to minimise write operations
=> mkfs.ext2 -m 0 -E lazy_itable_init=0 /dev/qubes_dom1/var-log # EXT2 to minimise write operations
# Format other volumes with ext4 for journaling and better data integrity.
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/nvme0n1p2 # for /boot
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom1/var-cache
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom1/var-lib-xen
=> mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom0/root
// => mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/qubes_dom0/vm
# ^ the 'vm' volume lives in a thin pool managed by Qubes; it is not intended to be formatted at all
REM # STEP.6: Mount volumes and prepare for installation
## STEP.6A: Create the top-level mount point.
=> mkdir -p /mnt/qubes
# Mount the 'root' volume. This becomes the foundation of your new system.
=> mount /dev/qubes_dom0/root /mnt/qubes
## STEP.6B: NOW, create the complete directory hierarchy on your new filesystem.
=> mkdir /mnt/qubes/boot # FS exists in NVMe0n1p2
=> mkdir /mnt/qubes/boot/efi # FS exists in NVMe0n1p1
=> mkdir /mnt/qubes/tmp # FS exists in dom1
=> mkdir -p /mnt/qubes/var/tmp # FS exists in dom1
=> mkdir /mnt/qubes/var/log # FS exists in dom1
=> mkdir /mnt/qubes/var/cache # FS exists in dom1
=> mkdir -p /mnt/qubes/var/lib/xen # FS exists in dom1
// => mkdir /mnt/qubes/var/lib/qubes # FS does NOT exist in dom0
// ^ this directory will be managed by Qubes; it is not intended to be created or mounted manually
## STEP.6C: Now mount the volumes into their correct locations (qubes_dom1 = all the I/O intensive stuff)
=> mount /dev/nvme0n1p2 /mnt/qubes/boot
=> mount /dev/nvme0n1p1 /mnt/qubes/boot/efi
=> mount /dev/qubes_dom1/tmp /mnt/qubes/tmp
=> mount /dev/qubes_dom1/var-tmp /mnt/qubes/var/tmp
=> mount /dev/qubes_dom1/var-log /mnt/qubes/var/log
=> mount /dev/qubes_dom1/var-cache /mnt/qubes/var/cache
=> mount /dev/qubes_dom1/var-lib-xen /mnt/qubes/var/lib/xen
// VM storage (the thin volume where all VMs will live)
// This volume is not mounted: it has no filesystem and will be managed by the Qubes installer
// => mount /dev/qubes_dom0/vm /mnt/qubes/var/lib/qubes
# STEP.6D: Activate swap
=> swapon /dev/qubes_dom1/swap
# STEP.6E: Verify everything is mounted correctly
=== Mount verification ===
=> df -h | grep /mnt/qubes
=== Swap verification ===
=> cat /proc/swaps
=== Block devices overview ===
=> lsblk
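// The three checks above can be rolled into one pass/fail report (hypothetical `verify_mounts` helper, assuming the mount points from STEP.6B/6C):

```shell
# Hypothetical one-shot verification of STEP.6: each mount point plus swap.
verify_mounts() {
    local m
    for m in /mnt/qubes /mnt/qubes/boot /mnt/qubes/boot/efi \
             /mnt/qubes/tmp /mnt/qubes/var/tmp /mnt/qubes/var/log \
             /mnt/qubes/var/cache /mnt/qubes/var/lib/xen; do
        mountpoint -q "$m" && echo "OK      $m" || echo "MISSING $m"
    done
    # /proc/swaps has a header line; more than one line means swap is active
    [ "$(wc -l < /proc/swaps)" -gt 1 ] && echo "OK      swap" \
                                       || echo "MISSING swap"
}
# verify_mounts
```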
# STEP.6F: Update system configuration files for persistence
=== Updating RAID configuration ===
=> mdadm --detail --scan | tee -a /etc/mdadm.conf
=== LVM volume groups status ===
=> vgchange -ay qubes_dom0
=> vgchange -ay qubes_dom1
=> lvs; vgs
REM # This is where you should go back (ctrl-alt-F6) to the GUI installer to finalize the install
# But as that doesn't work (at least for me, it freezes and crashes) we will keep going in TUI
## STEP.7: Prepare for the actual GUI install handover (NO REBOOT!)
# Keep LUKS containers OPEN for GUI detection
# DON'T close LUKS_ROOT and LUKS_SRUN !
# Return to GUI installer (Ctrl+Alt+F6)
# GUI should now detect your LVM thin volumes!
## STEP.8: Emergency Reboot Recovery (if you MUST reboot before GUI handover)
## STEP.8A: The short (dirty) way:
=> reboot
=> mdadm --assemble --scan
=> cryptsetup open /dev/md10 LUKS_ROOT
=> cryptsetup open /dev/nvme0n1p3 LUKS_SRUN
# Re-do STEPS 6B, 6C, 6D
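// The whole "dirty way" can be sketched as one function (hypothetical `quick_recover` helper; device, VG, and mount-point names are the ones used in this guide; you will be prompted for the LUKS passphrases):

```shell
# Hypothetical one-shot recovery: reassemble RAID, reopen LUKS, reactivate
# LVM, then redo the mounts of STEPS 6B/6C/6D.
quick_recover() {
    mdadm --assemble --scan
    cryptsetup open /dev/md10 LUKS_ROOT
    cryptsetup open /dev/nvme0n1p3 LUKS_SRUN
    vgchange -ay qubes_dom0 qubes_dom1
    mkdir -p /mnt/qubes
    mount /dev/qubes_dom0/root      /mnt/qubes
    mount /dev/nvme0n1p2            /mnt/qubes/boot
    mount /dev/nvme0n1p1            /mnt/qubes/boot/efi
    mount /dev/qubes_dom1/tmp       /mnt/qubes/tmp
    mount /dev/qubes_dom1/var-tmp   /mnt/qubes/var/tmp
    mount /dev/qubes_dom1/var-log   /mnt/qubes/var/log
    mount /dev/qubes_dom1/var-cache /mnt/qubes/var/cache
    mount /dev/qubes_dom1/var-lib-xen /mnt/qubes/var/lib/xen
    swapon /dev/qubes_dom1/swap
}
# quick_recover
```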
## STEP.8B The long and clean way:
## STEP.8B1: Deactivate swap
=> swapoff /dev/qubes_dom1/swap
## STEP.8B2: Unmount all filesystems in reverse order (deepest first)
=> umount /mnt/qubes/var/lib/xen
=> umount /mnt/qubes/var/cache
=> umount /mnt/qubes/var/log
=> umount /mnt/qubes/var/tmp
=> umount /mnt/qubes/tmp
=> umount /mnt/qubes/boot/efi
=> umount /mnt/qubes/boot
=> umount /mnt/qubes # Note: /var/lib/qubes was never mounted (managed by Qubes), so there is nothing to unmount there
## STEP.8B3: Close LUKS containers
=> cryptsetup close LUKS_ROOT
=> cryptsetup close LUKS_SRUN
## STEP.8B4: Stop RAID arrays (optional, but safer)
=> mdadm --stop /dev/md10
# mdadm --stop /dev/md20 # if you created md20
## STEP.8B5: Now safe to reboot
=> sync
=> reboot # or shutdown now and restart
## STEP.8C: AFTER REBOOT - Recovery procedure
# Boot again into installer/rescue mode, then Ctrl+Alt+F2
## STEP.8C1: Reassemble RAID arrays
=> mdadm --assemble --scan
# OR manually if --scan doesn't work:
=> mdadm --assemble /dev/md10 /dev/nvme1n1p1 /dev/nvme2n1p1
## STEP.8C2: Verify RAID status
=> cat /proc/mdstat
=> lsblk # To check with a global picture
## STEP.8C3: Reopen LUKS containers (you'll need to enter passphrases again)
=> cryptsetup open /dev/md10 LUKS_ROOT
=> cryptsetup open /dev/nvme0n1p3 LUKS_SRUN
## STEP.8C4: Activate LVM volume groups
=> vgchange -ay qubes_dom0
=> vgchange -ay qubes_dom1
## STEP.8C5: Verify LVM status
=> lvs
=> vgs
## STEP.8C6: Recreate mount points
# Re-do STEP.6B, 6C, 6D
## STEP.8C7: Final verification
=== Recovery Status ===
=> cat /proc/mdstat
=> df -h | grep /mnt/qubes
=> cat /proc/swaps
=> lvs; vgs
## STEP.8C8: Update configurations again
=> mdadm --detail --scan | tee -a /etc/mdadm.conf
// RECOVERY COMPLETE; Your setup has been fully restored after reboot
# Return to GUI installer with: Ctrl+Alt+F6
REM # TROUBLESHOOTING NOTES:
# If mdadm --assemble --scan fails:
# - Try: mdadm --assemble /dev/md10 /dev/nvme1n1p1 /dev/nvme2n1p1
# - Check: cat /proc/mdstat
# - Force if needed: mdadm --assemble --force /dev/md10 /dev/nvme1n1p1 /dev/nvme2n1p1
#
# If LUKS containers won't open: # the same applies to LUKS_SRUN on /dev/nvme0n1p3
# - Verify: cryptsetup isLuks /dev/md10
# - Check: cryptsetup luksDump /dev/md10
# - Retry with: cryptsetup open --type luks2 /dev/md10 LUKS_ROOT
#
# If LVM won't activate: # That goes for qubes_dom1 too
# - Scan: pvscan
# - Activate: vgchange -ay --select vg_name=qubes_dom0
# - Check: vgdisplay qubes_dom0
## STEP.9: Post-Install Security (and adjust brightness)
=> xrandr --output eDP-1 --brightness 0.35 --gamma 1.15:0.75:0.65 # Gamma R:G:B depending on time of day
# NOTE: It might be a good idea to do Header backup now
=> cryptsetup luksHeaderBackup /dev/md10 --header-backup-file /root/luks-header-md10.backup
=> cryptsetup luksHeaderBackup /dev/nvme0n1p3 --header-backup-file /root/luks-header-srun.backup
## Extras for my P15, executed in dom0. NOT RECOMMENDED unless you know, you know ;). FYIO, DYOR.
## STEP.10: For the nvidia A2000: Config Xorg to use 'modesetting' instead of 'nouveau'
# in dom0 check GPU