Automated Qubes OS Installation using Kickstart and/or PXE Network Boot

For anyone curious about how to batch-install Qubes OS, these guides by the Fedora Project are incredibly helpful:


https://docs.fedoraproject.org/en-US/fedora/latest/install-guide/advanced/Kickstart_Installations/#chap-kickstart-installations


https://docs.fedoraproject.org/en-US/fedora/latest/install-guide/advanced/Network_based_Installations/#chap-pxe-server-setup


https://docs.fedoraproject.org/en-US/fedora/latest/install-guide/advanced/Boot_Options/#sect-boot-options-kickstart


This will be helpful for:

  • Anyone considering deploying Qubes OS in a work environment

    • Quick and easy remote installation of work machines
    • Can be deployed onto a PXE server, via NFS, FTP, HTTP and HTTPS
    • Initial Install and First Boot options can be completely customised.
    • Every Qubes OS install already comes with Kickstart files. For anyone curious, check your dom0 root directory for /root/anaconda-ks.cfg and /root/initial-setup-ks.cfg.
    • The PXE server could even potentially be an AppVM on another Qubes OS machine :scream:
  • Anyone who needs a “quick and painless” zero-interaction way to install Qubes OS

    • Maybe your laptop is lost/stolen/damaged/seized, and you want to get your custom-configured Qubes OS back up and running ASAP. Just plug it in, PXE network boot, go get a coffee, and you’ll have your Qubes OS machine back before you know it!
    • Maybe you created an awesome Qubes OS setup which you’ve spent a long time configuring and tweaking, and you want to share it with other people. You could set up a PXE server with the ISO and your Kickstart file. All the other person needs to do is boot from your PXE server, and the installer does the rest!
    • Maybe you just want to run your Qubes OS dom0 in RAM. With a bit of tweaking this guide could be adapted to that purpose…

I’m writing a Qubes OS-specific guide derived from these guides, and I will upload it when it’s ready. (It’s on my list of things to do :kissing:)

Now all we need is a GUI tool for developing SaltStack configs that create 100% custom VMs (I’m not going to lie, I’m a little confused by the .top and .sls files: where they go, why they can’t just be a single file, and so on, but I’m trying!), and we’ll have a 100% automated, easy-to-use, and zero-interaction method to set up every single aspect of a Qubes OS machine :sunglasses:

5 Likes

@alzer89 This work you’re doing here is very awesome and very useful!

I can’t wait to use your Qubes-specific guide for the steps on how to implement this in Qubes.

Here is what I’d love to do with this:

I’d love to have a custom-configured Live/tmpfs/RAM-based Qubes OS sitting on a network share (NFS, etc.), which several different computers can then network boot from, without needing any storage drive in them.

The intention is to create Disposable Qubes OS physical machines that have no storage drives in them.

This way, the Qubes OS config can be centrally updated and deployed, and also any data files can be stored & accessed over the network too.

The endpoint user computers would then store nothing on them and need no storage drives within them, similar to Disposable Qubes, but for the entire physical machines.

This would be a great centralized infrastructure setup for family homes, company offices, or advanced personal security isolation setups.

Can’t wait to see your Qubes-specific guide come to life soon.

Thank you!

2 Likes

@alzer89 Here are some Qubes Salt resources that may be useful to your work on this:

hxxps://qubes-os.org/doc/salt

hxxps://forum.qubes-os.org/t/how-do-i-setup-phoenix-qubes-with-salt/3551

hxxps://kushaldas.in/posts/maintaining-your-qubes-system-using-salt-part-1.html

hxxps://docs.gonzalobulnes.com/configuration_management.html

hxxps://forum.qubes-os.org/t/packaging-salt-states-formulas-for-use-in-qubes-os/2609

hxxps://github.com/unman/notes/blob/master/salt/Controlling_Qubes
hxxps://github.com/unman/notes/tree/master/salt/examples
hxxps://github.com/unman/shaker

1 Like

@qstateless, just an update on how this is going:

PXE boot on the latest ISO using memdisk

  • Fails using Legacy Boot
    • ISOLINUX: Failed to load ldlinux.c32
  • Have not tried UEFI yet, but that’s next

“Deconstructing” the ISO and serving via NFS

  • Fails
    • xen.gz loaded successfully
    • vmlinuz loaded successfully
    • initrd.img loaded successfully
    • Boots into the Plymouth splash screen
    • Hangs at “Reached target Basic System”
  • Cause of hanging is yet to be determined
  • I assume it’s because the initramfs doesn’t have network modules (though this is purely speculation), or because Xen didn’t pass through any network devices

“Deconstructing” the ISO and serving via HTTP

  • Same as serving via NFS

“Deconstructing” the ISO (Qubes.iso → /LiveOS/squashfs.img/LiveOS/rootfs.img) and serving using NFS, but without loading xen.gz

  • Somewhat successful
    • Boots successfully into anaconda installer
    • RPM repos cannot be accessed
    • Booting without Xen isn’t exactly ideal (the installer uses it to check whether the hardware is suitable for Qubes OS)

———

The plan is to create a sort of sys-pxe Qube that will turn any Qubes OS machine into a PXE boot server. That way, you’d be able to install Qubes OS onto another machine using an existing Qubes OS machine.

There’s still a long way to go on this one. I still have to get it to successfully boot and install… :smile:

The long-term plan is also:

  • To create a way to customise the install (well, technically there already is a way, I just have to configure it :laughing:)
    • Add/remove custom RPM packages in the installer repo
  • Utilise Kickstart to facilitate automated unattended installs
    • User name and password
    • Disk partitioning
    • LUKS encryption
    • Timezone
    • Keyboard and language support
    • Everything else that anaconda can do
    • Just turn on the target machine, connect the Ethernet cable, select “Network Boot”, and go have a coffee while it automatically installs
  • Couple this with Saltstack to allow complete customisation of preconfigured Qubes during first boot setup
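The Kickstart directives that cover the items above are all standard anaconda ones. As a rough illustration only (the values below are placeholders, not a tested Qubes config):

```text
# sample-ks.cfg - placeholder values, adjust before use
lang en_US.UTF-8
keyboard us
timezone UTC --utc
user --name=user --password=changeme --plaintext
rootpw --lock
autopart --type=thinp --encrypted --passphrase=changeme
reboot
```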

See this for more information (full credit to @unman, the living legend):

  • Create a sort of GUI tool that will write up Salt config files for custom Qubes
    • The user has dropdown menus for things like base template, installed software, PCI devices, etc.
    • Maybe one day it might get merged into the dialog box at Q-Menu → Qubes Tools… → Create Qubes VM… (maybe…)

———

Also, if anyone else sees this and can help, please post or DM me :slight_smile:

Another update:

I have just gotten it to fully boot an autonomous/unattended install using Kickstart in both legacy and UEFI network boot, with an unaltered Qubes OS ISO, with Xen multiboot, the way it was intended to be installed.

Twelve machines at the same time all kickstarted, and not that much slower than a USB boot, which looks pretty promising.

I will be posting everything over the coming days, and would love for anyone to test it out, and would love any feedback.

I am working on several possible delivery options:

  • A standalone fedora (minimal) template containing everything you need, that the user can base their sys-net on, and it just “works”
    • This one is ready to go
    • Doesn’t require an internet connection
    • Unfortunately it’s 6GB (including the Qubes OS ISO, of course)
  • A bash script to download, install and configure everything, that the user will run from inside sys-net (or whatever Qube they wish)
    • Can also be done easily
    • Requires an internet connection to provision
    • Not very user-friendly for anyone who isn’t comfortable with the terminal
    • Not very user-friendly for anyone who doesn’t fully understand the Qubes OS architecture
  • A Salt pillar to build a “sys-pxe” from scratch, that the user can invoke when required
    • Could be integrated directly into the existing tools and configs for provisioning Qubes (sys-gui, sys-gui-gpu, sys-audio, sys-usb, etc.)
    • Requires an internet connection to provision
    • I’m still getting my head around what configs go in .top files, and what configs go in .sls files :sweat_smile:
  • Incorporation into a Qubes OS Installer ISO that you boot from a USB stick, turning that machine into a network boot server, serving the ISO via the Ethernet port
    • Doesn’t require an internet connection
    • Straightforward and intuitive for the end user
    • Can be a GRUB boot menu option in the ISO
    • Requires fairly heavy modification of the Qubes OS ISO

Hopefully this will make Qubes OS more appealing to:

  • Sys-admins when provisioning work machines for employees
  • Anyone wishing to “migrate” their current Qubes OS install to another machine
  • Anyone needing to “qubify” another machine
  • Anyone who uses “burner laptops” of the same model, and needs to get their machine back to a good known state
  • Anyone who doesn’t have a USB flash drive on hand, but has Ethernet cables galore
  • Many other use cases that I haven’t thought of, but someone else will find

Not exactly a tool for creating formulas, but still related:

I hope we agree that we definitely need a way to install Qubes on modern laptops (and even desktops) without booting from USB.

I have written some salt files to:

  1. Check to make sure that fedora-36-minimal is installed, and install it if it isn’t
  2. Clone it into sys-pxe-template (this is the bit I’m having trouble with)
  3. Change sys-pxe-template into a disposable VM template
  4. Install the following packages into sys-pxe-template:
  • nfs-server
  • tftp-server
  • dhcp-server
  • syslinux
  5. Put the following config files into sys-pxe-template:
  • /etc/dhcp/dhcpd.conf

    • Configure DHCP server with subnet of 192.168.100.0/24, and TFTP boot server
  • /var/lib/tftpboot/*

    • All the PXE files from syslinux
    • The pxelinux.cfg and custom grub2 EFI files from Qubes OS /boot directory (needed to multiboot Xen)
  • /usr/local/bin/download-and-verify-latest-qubes-iso

    • Bash script to download and verify the latest Qubes OS ISO
  • /etc/systemd/system/download-and-verify-latest-qubes-iso.service

    • Systemd service to automatically run download-and-verify-latest-qubes-iso when the Qube starts
  • /etc/systemd/network/20-all-interfaces.network

    • Set static IP address of sys-pxe to 192.168.100.1
    • Disable Qubes virtual NICs because they aren’t needed
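To make the layout concrete, here is a sketch of the directory tree sys-pxe ends up serving, built in a throwaway temp directory purely for illustration (the real files live under /var/lib/tftpboot):

```shell
# Illustrative only: recreate the expected TFTP tree in a temp dir.
root=$(mktemp -d)
mkdir -p "$root/pxelinux.cfg" "$root/qubes/iso" "$root/EFI/qubes/themes/qubes" "$root/uefi"
touch "$root/pxelinux.0" "$root/qubes/xen.gz" "$root/qubes/vmlinuz" "$root/qubes/initrd.img"
tree_listing=$(cd "$root" && find . -mindepth 1 | sort)
echo "$tree_listing"
```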

All of these config files are actually written in plain text in the *.sls files for transparency and to keep the install size down.

  6. Create a disposable VM called sys-pxe, and base it on sys-pxe-template
  • All the user has to do is open this DispVM, and their ethernet NIC automatically becomes a Qubes OS PXE Network Boot Server!

These are the salt files:

sys-pxe.top

# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :

# Installs 'sys-pxe' Qubes OS ISO Network Boot Server Qube.
#
# Pillar data will also be merged if available within the ``qvm`` pillar key:
#   ``qvm:sys-pxe``
#
# located in ``/srv/pillar/dom0/qvm/init.sls``
#
# Execute:
#   qubesctl top.enable qvm.sys-pxe
#   qubesctl --all state.highstate

{% if salt['pillar.get']('qvm:sys-pxe:name', 'sys-pxe') != salt['pillar.get']('qvm:sys-gui:name', 'sys-gui') %}
{% set vmname = salt['pillar.get']('qvm:sys-pxe:name', 'sys-pxe') %}
{% else %}
{% set vmname = salt['pillar.get']('qvm:sys-gui:name', 'sys-gui') %}
{% endif %}

base:
  dom0:
    - match: nodegroup
    - qvm.sys-pxe
  {{ salt['pillar.get']('qvm:sys-pxe:template', 'fedora-36-minimal') }}:
    - qvm.sys-pxe-template
  {{ vmname }}:
    - qvm.sys-pxe-vm

sys-pxe-template.sls

# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :

##
# qvm.sys-pxe-template
# ====================
##

fedora-36-minimal:
  qvm.template_installed: []

#sys-pxe-template:
#  qvm.template_installed: []

#dom0:
#  cmd.run:
#    - qvm-clone fedora-36-minimal sys-pxe-template

sys-pxe-template:
  qvm.vm:
    - present:
      - label: black
    - prefs:
      - label: black
      - dispvm-allowed: True
    - features:
      - enable:
        - appmenus-dispvm

  pkg.installed:
    - pkgs:
      - qubes-core-admin-client
      - syslinux
      - tftp-server
      - nfs-server
      - dhcp-server
      - systemd-networkd

  /var/lib/tftpboot/pxelinux.cfg/default:
    file.managed:
      - user: root
      - mode: 644
      - makedirs: True
      - contents: |
          default vesamenu.c32
          timeout 100
          
          menu background splash.png
          
          label Qubes-Auto
          menu label ^Qubes OS 4.1.1 - Automated Install - Kickstart
          kernel mboot.c32
          append qubes/xen.gz console=none --- qubes/vmlinuz inst.stage2=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/Qubes.iso plymouth.ignore-serial-consoles ip=dhcp inst.ks=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/anaconda-ks.cfg i915.alpha_support=1 quiet rhgb --- qubes/initrd.img
          
          label Qubes-Manual
          menu label ^Qubes OS 4.1.1 - Manual Install - Regular ISO Boot
          kernel mboot.c32
          append qubes/xen.gz console=none --- qubes/vmlinuz inst.stage2=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/Qubes.iso plymouth.ignore-serial-consoles ip=dhcp i915.alpha_support=1 quiet rhgb --- qubes/initrd.img
          
          label local
          menu label Boot from ^Local Drive
          localboot 0xffff
  
  /var/lib/tftpboot/EFI/qubes/grub.cfg:
    file.managed:
      - user: root
      - mode: 644
      - makedirs: True
      - contents: |
          function load_video {
          	insmod vga
          	insmod vbe
          	insmod efi_gop
          	insmod efi_vga
          	insmod ieee1275_fb
          	insmod video_bochs
          	insmod video_cirrus
          	insmod all_video
          }
          
          load_env
          
          load_video
          set gfxpayload=keep
          insmod gzio
          
          insmod gfxterm
          insmod gfxtext


  /etc/systemd/network/20-all-interfaces.network:
    file.managed:
      - user: root
      - mode: 644
      - makedirs: True
      - contents: |
          # Match all ethernet NICs
          [Match]
          Type=ether
          # For some reason, this network device exists and causes TFTP to fail if it's active...
          Name=!enX0
          
          # Set Ethernet NIC static IP address of 192.168.100.1
          [Network]
          Address=192.168.100.1/24
          # May not be necessary, but including them just in case
          Gateway=192.168.100.1
          DNS=192.168.100.1

  /etc/dhcp/dhcpd.conf:
    file.managed:
      - user: root
      - mode: 644
      - makedirs: True
      - contents: |
          option arch code 93 = unsigned integer 16;
          # If you are planning to PXE install Qubes OS on more than 253 machines at the same time, you are absolutely insane, and I love it!
          # If you are, then you probably know how to edit this config file to expand the subnet...
          subnet 192.168.100.0 netmask 255.255.255.0 {
          authoritative;
          default-lease-time 600;
          max-lease-time 7200;
          ddns-update-style none;
          option domain-name-servers 192.168.100.1;
          option routers 192.168.100.1;
          option broadcast-address 192.168.100.255;
          option subnet-mask 255.255.255.0;
          range 192.168.100.2 192.168.100.254;
          if option arch = 00:07 {
          # amd64 UEFI
          filename "uefi/shimx64.efi";
          next-server 192.168.100.1;
          } else if option arch = 00:0b {
          # aarch64 UEFI
          filename "uefi/shimaa64.efi";
          next-server 192.168.100.1;
          } else {
          filename "pxelinux.0";
          next-server 192.168.100.1;
          }
          
          }


  /var/lib/tftpboot/EFI/qubes/themes/qubes/theme.txt:
    file.managed:
      - user: root
      - mode: 644
      - makedirs: True
      - contents: |
          # Copyright (C) 2016 Harald Sitter <sitter@kde.org>
          #
          # This program is free software; you can redistribute it and/or
          # modify it under the terms of the GNU General Public License as
          # published by the Free Software Foundation; either version 3 of
          # the License or any later version accepted by the membership of 
          # KDE e.V. (or its successor approved by the membership of KDE
          # e.V.), which shall act as a proxy defined in Section 14 of
          # version 3 of the license.
          #
          # This program is distributed in the hope that it will be useful,
          # but WITHOUT ANY WARRANTY; without even the implied warranty of
          # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
          # GNU General Public License for more details.
          #
          # You should have received a copy of the GNU General Public License
          # along with this program.  If not, see <http://www.gnu.org/licenses/>.
          
          # paperwhite - #fcfcfc
          # icongrey - #4d4d4d
          # plasmablue - #3daee9
          # black - #000000
          
          # Global Property
          # General settings
          title-text: ""
          title-font: "Unifont Regular 14"
          message-font: "Unifont Regular 14"
          message-color: "#7f8c8d"
          message-bg-color: "#4d4d4d" # TODO: whatever is this for?
          desktop-image: "qubes.png"
          
          # title
          # NOTE: can't put this in a vbox because GRUB is crap and item highlighting
          #   is broken if you put the boot_menu in a vbox...
          # TODO: file bug report
          + label {
              top = 50%-225 # (150+43+32) menu + height + spacer
              left = 0%
              width = 100%
              text = "Qubes OS Installer - Network Boot"
              align = "center"
              font = "Unifont Regular 32"
              color = "#ffffff"
          }
          
          # Show the boot menu
          + boot_menu {
              left = 50%-200
              width = 450
              # NB: this is scooped upwards from the middle.
              #     effectively 50px are below and the remaining 150 above
              top = 50%-150
              height = 200
              # Icon
              icon_width = 4
              icon_height = 0
              # Item
              item_height = 33
              item_padding = 1
              item_icon_space = 0
              item_spacing = 1
              item_font =  "Unifont Regular 16"
              item_color = "#4d4d4d"
              selected_item_font = "Unifont Bold 16"
              selected_item_color = "#ffffff"
          }
          
          + vbox {
              left = 50%-200 # same as menu
              top = 50%+113 # (50+16+19+28) half menu + spacer + progress + spacer
              width = 400 # same as menu
              + label { width = 400 align = "center" color = "#4d4d4d" font = "Unifont Regular 14" text = "[Enter] Boot the selected OS" }
              + label { width = 400 align = "center" color = "#4d4d4d" font = "Unifont Regular 14" text = "[Up and Down Key] navigation" }
              + label { width = 400 align = "center" color = "#4d4d4d" font = "Unifont Regular 14" text = "[E] Edit Selection" }
              + label { width = 400 align = "center" color = "#4d4d4d" font = "Unifont Regular 14" text = "[C] GRUB Command Line" }
          }
          
          # Show a styled horizontal progress bar
          + progress_bar {
              id = "__timeout__"
              left = 0
              top = 100%-32
              width = 100%
              height = 32
              show_text = false
              bar_style = "progress_bar_*.png"
              highlight_style = "progress_bar_hl_*.png"
          }
          
          # Show text progress bar
          + progress_bar {
              id = "__timeout__"
              left = 50%-200 # same as menu
              top = 50%+66 # (50+16) half menu + spacer
              width = 400 # same as menu
              height = 19 # 14pt
              show_text = true
              font = "Unifont Regular 14"
              text_color = "#4d4d4d"
              align = "center"
              text = "@TIMEOUT_NOTIFICATION_MIDDLE@"
              bar_style = "progress_bar2_*.png"
          }

  /etc/exports:
    file.managed:
      - user: root
      - mode: 644
      - makedirs: True
      - contents: |
          /var/lib/tftpboot/qubes/iso	*(rw,no_root_squash)

  cmd.run:
    - names:
      - systemctl enable tftp.service
      - systemctl enable tftp.socket
      - systemctl enable systemd-networkd
      - systemctl enable nfs-server
      - systemctl enable dhcpd
      - exportfs -r
# TODO:  Add more stuff to put the syslinux files in /var/lib/tftpboot

sys-pxe-vm.sls (needs a bit of work)

# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :

##
# qvm.sys-pxe-vm
# ==============
##

# WIP: currently use default user 'user'
/var/lib/tftpboot/pxelinux.cfg/default:
  file.managed:
    - user: root
    - mode: 644
    - makedirs: True
    - contents: |
        default vesamenu.c32
        timeout 100
        
        menu background splash.png
        
        label Qubes-Auto
        menu label ^Qubes OS 4.1.1 - Automated Install - Kickstart
        kernel mboot.c32
        append qubes/xen.gz console=none --- qubes/vmlinuz inst.stage2=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/Qubes.iso plymouth.ignore-serial-consoles ip=dhcp inst.ks=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/anaconda-ks.cfg i915.alpha_support=1 quiet rhgb --- qubes/initrd.img
        
        label Qubes-Manual
        menu label ^Qubes OS 4.1.1 - Manual Install - Regular ISO Boot
        kernel mboot.c32
        append qubes/xen.gz console=none --- qubes/vmlinuz inst.stage2=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/Qubes.iso plymouth.ignore-serial-consoles ip=dhcp i915.alpha_support=1 quiet rhgb --- qubes/initrd.img
        
        label local
        menu label Boot from ^Local Drive
        localboot 0xffff

/var/lib/tftpboot/EFI/qubes/grub.cfg:
  file.managed:
    - user: root
    - mode: 644
    - makedirs: True
    - contents: |
        function load_video {
        	insmod vga
        	insmod vbe
        	insmod efi_gop
        	insmod efi_vga
        	insmod ieee1275_fb
        	insmod video_bochs
        	insmod video_cirrus
        	insmod all_video
        }
        
        load_env
        
        load_video
        set gfxpayload=keep
        insmod gzio
        
        insmod gfxterm
        insmod gfxtext
        
        terminal_output gfxterm
        insmod gfxmenu
        loadfont $prefix/themes/qubes/unifont-bold-16.pf2
        loadfont $prefix/themes/qubes/unifont-regular-14.pf2
        loadfont $prefix/themes/qubes/unifont-regular-16.pf2
        loadfont $prefix/themes/qubes/unifont-regular-32.pf2
        insmod png
        set theme=$prefix/themes/qubes/theme.txt
        export theme
        
        set timeout_style=menu
        set timeout=10
        
        menuentry 'Qubes OS 4.1.1 - Automated Install - Kickstart'  --class fedora --class gnu-linux --class gnu --class os {
        #	echo "To make a tasty Qubes OS..."
        #	echo "We start with the Xen Hypervisor..."
        	echo "Loading Xen Hypervisor..."
        	multiboot2 qubes/xen.gz console=none
        #	echo "...add a whole raw Linux Kernel..."
        	echo "Loading Linux Kernel..."
        	module2 qubes/vmlinuz ip=dhcp inst.stage2=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/Qubes.iso plymouth.ignore-serial-consoles i915.alpha_support=1 quiet rhgb inst.ks=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/anaconda-ks.cfg
        #	echo "...infuse it with your Kickstart Config, for extra flavoury goodness..."
        	echo "Adding Kickstart File..."
        #	echo "...and finally, sprinkle it with a robust Initramfs..."
        	echo "Loading Initramfs..."
        	module2 qubes/initrd.img
        #	echo "...and we have a delicious Qubes OS.  Enjoy ;-)"
        }
        
         
        menuentry 'Qubes OS 4.1.1 - Manual Install - Regular ISO Boot'  --class fedora --class gnu-linux --class gnu --class os {
        #	echo "To make a tasty Qubes OS..."
        #	echo "We start with the Xen Hypervisor..."
        	echo "Loading Xen Hypervisor..."
        	multiboot2 qubes/xen.gz console=none
        #	echo "...add a whole raw Linux Kernel..."
        	echo "Loading Linux Kernel..."
        	module2 qubes/vmlinuz ip=dhcp inst.stage2=nfs:192.168.100.1:/var/lib/tftpboot/qubes/iso/Qubes.iso plymouth.ignore-serial-consoles i915.alpha_support=1 quiet rhgb
        #	echo "...and finally, sprinkle it with a robust Initramfs..."
        	echo "Loading Initramfs..."
        	module2 qubes/initrd.img
        #	echo "...and we have a delicious Qubes OS.  Enjoy ;-)"
        }
        
        menuentry 'Exit this GRUB' {
        	exit
        }

/var/lib/tftpboot/uefi/BOOTX64.CSV:
  file.managed:
    - user: root
    - mode: 644
    - makedirs: True
    - contents: |
        shimx64.efi,Fedora,,This is the boot entry for Fedora
        grubx64.efi,Qubes,,This is the boot entry for Qubes OS
 
/usr/local/bin/download-and-verify-latest-qubes-iso:
  file.managed:
    - user: root
    - mode: 755  # executable: the systemd service runs this script directly
    - makedirs: True
    - contents: |
        #!/bin/bash
        # Download and verify the Qubes OS 4.1.1 ISO
        mkdir -p /var/lib/tftpboot/qubes/iso
        # The DIGESTS file lists files by bare name, so verify from the download directory
        cd /var/lib/tftpboot/qubes/iso
        gpg2 --import /etc/pki/rpm-gpg/RPM-GPG-KEY-qubes*
        gpg2 --import /usr/share/qubes/qubes-master-key.asc
        wget https://ftp.qubes-os.org/iso/Qubes-R4.1.1-x86_64.iso.asc -O Qubes-R4.1.1-x86_64.iso.asc
        wget https://ftp.qubes-os.org/iso/Qubes-R4.1.1-x86_64.iso -O Qubes-R4.1.1-x86_64.iso
        wget https://ftp.qubes-os.org/iso/Qubes-R4.1.1-x86_64.iso.DIGESTS -O Qubes-R4.1.1-x86_64.iso.DIGESTS
        md5sum -c Qubes-R4.1.1-x86_64.iso.DIGESTS
        sha1sum -c Qubes-R4.1.1-x86_64.iso.DIGESTS
        sha256sum -c Qubes-R4.1.1-x86_64.iso.DIGESTS
        sha512sum -c Qubes-R4.1.1-x86_64.iso.DIGESTS
        gpg2 -v --verify Qubes-R4.1.1-x86_64.iso.asc Qubes-R4.1.1-x86_64.iso

/etc/systemd/system/download-and-verify-latest-qubes-iso.service:
  file.managed:
    - user: root
    - mode: 644
    - makedirs: True
    - contents: |
        [Unit]
        Description=Download & Verify Latest Qubes OS ISO from ftp.qubes-os.org
        [Service]
        Type=oneshot
        ExecStart=/usr/local/bin/download-and-verify-latest-qubes-iso
        RemainAfterExit=yes
        
        [Install]
        WantedBy=multi-user.target

sys-pxe.sls

# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :

##
# qvm.sys-pxe
# ===========
#

fedora-36-minimal:
  qvm.template_installed: []

#sys-pxe-template:
#  qvm.template_installed: []

{% from "qvm/template.jinja" import load -%}

{% if salt['pillar.get']('qvm:sys-pxe:name', 'sys-pxe') != salt['pillar.get']('qvm:sys-gui:name', 'sys-gui') %}

{% set vmname = salt['pillar.get']('qvm:sys-pxe:name', 'sys-pxe') %}

{% load_yaml as defaults -%}
name:          sys-pxe
qvm.present:
  - name: sys-pxe
  - label:     red
  - mem:       400
prefs:
  # Uncomment if you want additional repos via HTTP to be available
  #  - netvm:     "sys-firewall"
  # Default - No access to the internet
  - netvm:     ""
  - virt_mode: hvm
  - autostart: true
  - pci_strictreset: false
  - pcidevs:   {{ salt['grains.get']('pci_net_devs', [])|yaml }}
  - class:     DispVM
  - template:  sys-pxe-template

{%- endload %}

{{ load(defaults) }}

{% else %}

{% set vmname = salt['pillar.get']('qvm:sys-gui:name', 'sys-gui') %}

{{ vmname }}-pxe:
  qvm.prefs:
    - name: {{ vmname }}
    - virt_mode: hvm
    - pcidevs:   {{ salt['grains.get']('pci_net_devs', [])|yaml }}
    - pci_strictreset: false

{% endif %}


Is there any chance anyone could have a look at them and help me fix them?

Thank you so much in advance :slight_smile:

1 Like

Hi @alzer89 - It is great to see this project coming to life in code! Awesome stuff.

Life has been hitting especially hard in these times but I did read through your latest postings and code.

I personally don’t know Salt or PXE config stuff so I might not be much help. I looked into Salt a few years back but gave up and returned to traditional scripting. I also haven’t done a PXE boot for over a decade so I will have to get up to speed on setting the client side of that up, now that I am on Coreboot ROMs (iPXE and Netboot.xyz seem relevant).

I saw your specific problem about qube cloning not working and thought it was odd that simple task wouldn’t be working but all your other salt code would be…

  1. Clone it into sys-pxe-template (this is the bit I’m having trouble with)

I would presume that the following is where you try to execute the cloning (and commented it out due to it not working yet)?..

sys-pxe-template.sls
dom0:
cmd.run:
- qvm-clone fedora-36-minimal sys-pxe-template

I experimented with the standard qvm-clone command, outside of Salt, in my Dom0 terminal and here is what I found:

qvm-clone <existing-qube-name> <new-qube-name>

generates the following output in the terminal as it runs normally:

<new-qube-name>: Cloning private volume
<new-qube-name>: Cloning root volume

But I then checked what type of output this was by routing all STDERR output to /dev/null.

qvm-clone <existing-qube-name> <new-qube-name> 2>/dev/null

And then there was no output in the terminal as it runs.

This tells me that qvm-clone sends its normal user-facing progress output to STDERR (maybe as a convenient way to hide this output from other Qubes scripts that make use of the qvm- commands).

However, maybe, just maybe, Salt has a problem with processing this STDERR output of qvm-clone and maybe interprets it as an error/failure in processing the qvm-clone command?
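That theory is easy to demonstrate with a stand-in for qvm-clone (the emit function below is hypothetical, just mimicking a tool that reports progress on stderr):

```shell
# Mimic a tool whose progress chatter goes to stderr, like qvm-clone's.
emit() {
  echo "done"                         # actual result, on stdout
  echo "Cloning private volume" >&2   # progress message, on stderr
}

# Discard stderr: the progress line vanishes, only stdout remains.
captured=$(emit 2>/dev/null)
echo "$captured"   # prints: done
```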

As a potential fix:

The qvm-clone command has an optional “--quiet” flag that seems to suppress this STDERR output.

The terminal command would be:

qvm-clone --quiet <existing-qube-name> <new-qube-name>

which you probably know how to adapt and try as a salt command.
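Adapted to Salt, that might look something like the following (untested sketch; the unless guard with qvm-check is my assumption for making the state idempotent):

```yaml
clone-sys-pxe-template:
  cmd.run:
    - name: qvm-clone --quiet fedora-36-minimal sys-pxe-template
    # Skip the clone if the target qube already exists
    - unless: qvm-check sys-pxe-template
```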

Really hope that helps, as it otherwise seems strange such a simple command would not be working.

I’ve heard others complain about how Salt can be opaque with indicating the underlying source of errors.

I am really hoping that I can somehow use your Qubes PXE system to be usable with a current RAM-based version of Qubes, as I am successfully using @xuy’s Qubes in tmpfs, but to be able to network boot a RAM-based Qubes would be the ULTIMATE for my desired setup. I have many identical machines that I would love to simultaneously network boot a RAM-based Qubes from for testing and production uses.

1 Like

Thank you.

Hang in there, regardless of what life throws at you.

It’s a very interesting adaptation of Saltstack for Qubes OS. It took me a while to get my head around it, but once I did, I realised that it’s genius.

Salt was designed as a way to configure, set up, maintain, provision, fix, or otherwise manipulate other computers remotely. Think “IT sysadmin wants to install LibreOffice on all work machines, accounting software on machines in the accounting department, change all the root passwords, install new SSH certificates, etc. on 1000+ machines remotely with a single click”, and that’ll tell you what Salt was designed to do.

In Qubes OS, the “work machines” are your VMs (and to some extent, dom0 too…).

It’s a nice way to have a kind of “assembly line” for VM creation, configuring, provisioning, and similar actions.
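For anyone still puzzled by the .top/.sls split (as I was): the top file only maps targets to state files, while the .sls files hold the actual states. A minimal, made-up sketch:

```yaml
# example.top - only maps targets to state files, nothing else
base:
  dom0:
    - example        # apply example.sls to dom0

# example.sls - holds the actual states that get applied
my-new-qube:
  qvm.present:
    - label: red
    - mem: 400
```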

You should be able to build PXE boot into your Coreboot ROM.

Basically:

  1. The BIOS on the machine you want to boot (client machine) tells the Ethernet NIC to ask for an IP address
  2. sys-pxe responds with “Why hello there. Your IP address is 192.168.100.2. Do you need a PXE binary as well?”
  3. The client machine says “Oh yes please. UEFI, if you don’t mind.” (just like booting from a local drive, some NICs only support Legacy PXE boot, and sys-pxe is able to dish out both)
  4. sys-pxe serves out the GRUB menu
    • Two options. “Automated Install” and “Manual Install”
    • 10 second timer
    • Default option as “Automated Install”
  5. GRUB then loads the custom PXE xen.gz, vmlinuz and initrd.img, and then tells the kernel that the root filesystem is the Qubes ISO
  6. The client machine then loads the Qubes ISO via NFS (I’m working on an option to get the client machine to load the entire ISO into local RAM, which would take 60-90 seconds depending on Ethernet and RAM speed, so the client machine wouldn’t need to stay connected via Ethernet for the entire install, and would reduce load on sys-pxe).
  7. After this point, there is absolutely no difference between booting the Qubes ISO from sys-pxe, and booting the Qubes ISO from a USB drive.
  8. If “Automated Install” was selected (and your Kickstart file doesn’t have errors in it…), Qubes OS should immediately start the installation process, and reboot once it’s completed successfully.
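To picture step 4, a GRUB menu along these lines would match that description (the server IP, paths, and filenames are all hypothetical; inst.stage2 and inst.ks are standard Anaconda boot options, and this is a sketch rather than my exact config):

```shell
# grub.cfg fragment served by sys-pxe -- the 192.168.100.1 address and
# file paths are assumptions for illustration.
set default=0    # "Automated Install" is the default option
set timeout=10   # 10-second timer

menuentry "Automated Install" {
    multiboot2 /qubes/xen.gz
    module2 /qubes/vmlinuz inst.stage2=nfs:192.168.100.1:/exports/qubes-iso inst.ks=nfs:192.168.100.1:/exports/qubes-ks.cfg
    module2 /qubes/initrd.img
}

menuentry "Manual Install" {
    multiboot2 /qubes/xen.gz
    module2 /qubes/vmlinuz inst.stage2=nfs:192.168.100.1:/exports/qubes-iso
    module2 /qubes/initrd.img
}
```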

To be fair, it probably is working, and it’s more likely that I have either made a mistake in the config files, or there are functions/options in Salt that I’m not aware of that better achieve what I want the Salt config files to do…

———

DISCLAIMER: I deeply apologise in advance to anyone for any potential offence caused by this next paragraph. It is not my intent to appear as ignorant or to not acknowledge the amazing work that you have done incorporating Saltstack into Qubes OS :see_no_evil:

———

From what I can tell, there are currently two Salt branches in the current Qubes OS install: the “base” branch and the “test” branch.

There is a function in the “test” branch that accounts for cloning templates (which @unman’s fantastic Salt Shaker makes really good use of, by the way: GitHub - unman/shaker ), but as of yet I haven’t been able to call that function successfully. Hence why I commented out my hacky workaround.

Salt is very similar to cron in that respect, in that it doesn’t usually allow any output from the commands it performs.

Well, I haven’t really designed this for that purpose…

…yet…:wink:

But I could see that as a potential version of this:

Bear in mind that that idea would still need A LOT of work :laughing:

Regardless of the minion OS, Salt minions can run their commands almost anywhere Python runs, but to me it looks like the poster was already familiar with Salt.

@alzer89

Thanks! I will. :slight_smile:

I will have to give Qubes Salt another go at some point in the future. Learning qrexec is on my list too. List is always too long and growing it seems.

Thanks for the further breakdown of the PXE boot process. It’s quite interesting the way you describe it.

Proposal / Questions:

I’m looking for the easiest straight-forward way to make a current RAM-based Qubes OS that is suitable for PXE network booting, and wondering what your thoughts were?

Here is what I was thinking…

  • There is no recent Qubes Live version available, and I’m guessing the Qubes Builder won’t successfully build one anymore (but I have not verified this). An old Kickstart file for Qubes Live is in the source somewhere.

  • But we now have @xuy’s Qubes in tmpfs working as a RAM-based Qubes OS.

  • It appears that PXE may need a .ISO (iso9660)-formatted image to load?

  • So I wonder if there is a way to take an existing Qubes installation from a drive and write it into a .ISO (iso9660)-formatted file, which could be used for PXE network booting instead of the standard Qubes Installer ISO?

  • If we could convert an existing Qubes installation from drive to a PXE-bootable .ISO, then this installed Qubes image could be customised however one wants, and could include xuy’s Qubes in tmpfs modifications so it boots dom0 into RAM.

What do you think of the viability for this approach for RAM-based Qubes PXE network booting?

If possible, is all that would be needed to convert an existing Qubes installation to a .ISO (iso9660)-formatted file and swap it for the Qubes Installer ISO you’re using, or would the PXE configuration files need to be modified in some way as well? Anything else?

Very much appreciate all of your positive and detailed sharing of value throughout the forum, alzer89! Noticed multiple times in multiple places.

Others with knowledge on my proposal, feel free to jump in too.

Really looking for any viable way to network boot a RAM-based Qubes, as this would be a truly killer feature for certain setups & workflows (not Qubes Air stuff at this point for me, but that’s a good idea too).

Thanks!

What are you going to do about all the VMs and templates?

You’re basically describing PXE booting the TAILS ISO…

Not entirely accurate. The Qubes builder does do this, but you have to do a little tweaking, and get on the stable branch of everything…

Also not entirely true. You can boot from almost anything. HTTPS, HTTP, NFS, CIFS, and more. You just need to specify it.

Unnecessary.

It’s insane from a security point of view, unless you own and completely control every single piece of network infrastructure between your machine and the boot drive.

It would also not really scale very well if you wanted more than one machine to use the same Qubes OS install…

@qstateless, what workflows? I’m genuinely curious…

Remember that:

  • NFS shares are unencrypted by default
  • You’re at the mercy of your network bandwidth and latency
  • Having more than one client boot from the same network boot drive with persistence simultaneously usually causes the client machines to destroy it (logs, config files, different dates and times in the files, etc.)
  • The serving machine of this boot drive is the single point of failure/pwnage in all of this

I mean, yeah, there’s probably a use case I haven’t thought of, but what is it? :joy:

I still haven’t figured out how and where to put the config to tell firstboot to use the second kickstart config. I will report back once I have figured it out.


Hi @renehoj

I would probably update/config the Templates centrally and rebake them into a new OS image regularly to PXE network boot.

I would store the VM files on a different local network file server, where each client computer could access its own unique Qubes VM stuff, depending upon credentials used, etc.

Should all be doable with similar tactics I’ve contorted Qubes with before.

@alzer89

Except that Qubes is far beyond Tails with its security isolation measures, which I rely on.

I was aware that the Qubes Builder had a Live configuration, but didn’t think it would be easy to get it working. Awesome if it is easy.

Pardon me, I probably stated this the wrong way. I know PXE can boot from these protocols. However, I was wondering about the loading of the OS image by the PXE boot process… Does the OS need to be remotely loaded from an .ISO (iso9660) formatted image? For example, you seem to use the Qubes Installation .ISO for your PXE setup. Alternatively, could I network boot an existing Qubes OS installed on a drive thin pool, for example? If the PXE process needs a .ISO to boot from, then this is why I am interested in converting an existing Qubes OS into an .ISO (iso9660) formatted image.

No security problems. I own, control, and tightly secure all hardware on my LAN, including networking hardware.

Why not? I’m doing it now with drive cloning and it is a total PITA. Managing one central Qubes OS image and network booting several computers from it, seems like it will be a breath of fresh air from dealing with drive cloning.

Two that I can think of for myself.

  1. My job legally requires me to work with files that need total containment. DispVMs are not always strong enough. Sometimes I need to use a dedicated machine where I’ve just clean-installed Qubes OS, and wipe it clean afterwards. I currently maintain a central Qubes OS installation on a drive and do regular drive cloning (which takes hours, over and over again). Getting rid of the OS drives in the client computers and network booting Qubes OS instead would regularly save me a lot of time & effort.

  2. I’d like to run myself and my family closer towards a stateless setup. There are several computers in our home running Qubes OS, but it is an IT nightmare to maintain them all and keep the internet’s crap out of them. Instead, I’d rather have a central Qubes OS image that is network booted by all PCs in the home and store all personal files/state on a local file server, where individual qubes have different default permissions, etc to different parts of the file server. No storage drives would be kept in the personal/family computers. This way, a reboot more deeply ensures a clean state, without needing to do any drive cloning or reinstalls. I could just maintain as little as one central Qubes OS image this way.

That’s ok. I can use something else encrypted, or just leave unencrypted, as I’d use a dedicated/isolated LAN exclusively for PXE booting and nothing else.

Yes, I haven’t tested it, but I’ve put up with USB 2.0 drive booting before, so hopefully Gigabit LAN would be acceptable for my purposes.

I guess I don’t understand this. I was assuming a RAM-based Qubes would allow for network booting from some type of read-only source, like a .ISO. Like network booting the Tails ISO from multiple client computers. Not sure how something gets destroyed by the client machines.

Yes, I would be sure to have the Qubes OS image on an offline, segmented, LAN-only Qubes OS box whose only job is to be the PXE/OS server on this isolated LAN for my other client computers.

Thanks @alzer89! :slight_smile:

Actually, you could do the templates like this, given that they would be mounted read-only inside AppVMs on the client machines.

I was about to say that this would likely be the easiest and most straightforward way for you to achieve what you want.

It would get particularly messy if you wanted multiple clients to be able to interact with the same AppVMs simultaneously, and there would likely be a lot of unforeseen implosions and the like that I wouldn’t be able to anticipate at this point…

100% correct.

But things start to get a little restrictive if you want to boot a read-only ISO. You basically either load the entire thing into RAM, turning your RAM into a boot drive; or you load it into RAM of a network server and “transmit” the parts the client machine needs on demand…
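The first option (the whole image in RAM) can be sketched with plain tmpfs tooling. The filenames below are stand-ins, and in a real boot the ISO would arrive over NFS/HTTP and then be loop-mounted, so treat this as an illustration of the idea only:

```shell
# Stand-in "ISO" so this sketch is safe to run anywhere; in real use the
# file would be fetched from the PXE/NFS server instead.
dd if=/dev/urandom of=./demo.iso bs=1M count=2 status=none

# /dev/shm is tmpfs, so this copy lives entirely in RAM.
cp ./demo.iso /dev/shm/demo.iso

# Verify the in-RAM copy before dropping the network connection.
cmp ./demo.iso /dev/shm/demo.iso && echo "ISO resident in RAM"

# A real boot would then loop-mount it (needs root):
#   mount -o loop,ro /dev/shm/demo.iso /mnt
```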

The 3.1 version of Qubes had a LiveUSB ISO, but it ended up being paravirtualised, making it potentially problematic to rely on, especially if you regularly used it on a variety of hardware configurations…

In all honesty, it would be easier to install a fresh Qubes OS to a RAW disk image (like the kind that QEMU uses) and use that file as the boot drive.

Hahaha. No, it doesn’t :slight_smile:

I just used the ISO image because it achieved what I was trying to achieve, which was to create a way to “deploy” an automated Qubes OS install.

Basically, PXE boot needs to load the xen/kernel/initrd first, and then root= states where to find the OS drive.

So on most current Qubes OS installs, that parameter is root=/dev/sda3, root=/dev/nvme0n1p3, or root=UUID=s0m3-r4nd0m-uu1d.

But you could easily change it to root=nfs:192.168.69.69:/nfs-exports/laptops/wife-laptop if you wanted to :slight_smile:

Or even root=nfs:awesome-qubes-os-share.example.com:/ninja-turtles/michelangelo/cowabunga-dude if you were game enough to do it over WAN :grimacing:
(with encryption, obviously…)

Damn straight you can :wink:

I know. I was just thinking ahead in case you did get pwned. Like, if something nasty got into your network, what they could and couldn’t see, what they could and couldn’t do, and how you could protect yourself…

Because you’d need to make sure that each of the client machines wasn’t running duplicate instances of the same daemons, messing with .so files, system logs, file permissions, and the like.

You can fix this by having a separate image/drive for each machine, but that means you’re just moving the hard drive to another computer…

PXE can also only serve one machine at a time, so there would be a long queue. Thankfully NFS doesn’t have this bottleneck, otherwise it would be a nightmare…

Fair point. I hadn’t envisaged that as a use case… :upside_down_face:

Well this can already be done with PXE booting a unique disk image for each machine (or restricting the disk image to one machine at a time)…

Mind you, this wouldn’t really provide any “security” benefits over having the drive locally, other than sounding really cool to your boss :sunglasses:
It would be more of a convenience for you.

However:

  • Your Qubes OS install would be tied to another machine via network
    • I have no idea how sys-net would react to having the machine booted via PXE, and then having the same NIC passed into a VM…
    • It would probably be similar to booting Qubes OS from a USB drive, and then opening sys-usb: OS implosion :crazy_face:
  • Your NIC would continually be inside dom0 in some way, shape or form (your dom0 would need to come from somewhere, right…?)
    • Depending on how your PXE firmware was loaded, you may have a gaping hole in your Qubes OS install
    • I’d be curious to try it, though :slight_smile:

Tell me about it. I’ve resorted to KVM, and even that is hard to automate…

The thing is that PXE booting’s best use case is to load read-only boot images, such as installers, rescue ISOs, and anything else that isn’t meant to have any meaningful work done in them. Once you start throwing in persistence, it gets a little…messy…

But there definitely is a way :slight_smile:

Well, dom0 could be a read-only ISO, and the VMs could be NFS/CIFS/SSHFS (or SSHFS OVER TOR *mind blown*) shares. That would work, and allow the dom0 to load into RAM…

The only thing is, this would require every single one of your machines to constantly have an ethernet cable plugged into it. If you’re ok with that, then let’s make this happen :slight_smile:

NO! :rage:

  • What if a family member brings home a friend who connects their IoT device to the wifi, and it starts running wireshark on your LAN?
  • What if your partner gets you a Google Nest or Amazon Alexa and plugs that in, and it starts poking and prodding everything, including your Qubes shares?
  • Some other cross-contamination example of someone with good intentions inadvertently pwning you

Encrypt it, please :upside_down_face:

Then you’d need a secondary LAN for internet access (and I’m sure you’ve got a plan for that :wink: )

I’m currently restoring a Compaq Evo N150c with Gentoo, and the thing takes 3 minutes to fully boot (and the record for a complete world update was SIX WEEKS).

I feel your pain…

Well, that was probably what @renehoj was asking about. The configurations for the VMs will have to come from somewhere. Will you bake them into the ISOs (meaning adding and removing VMs would be impossible), or will you allow them to change, bearing in mind that if one client changed them, those changes would show up on the next reboot of any other client machine that PXE booted?

For example, your family members could purge your work VMs, and you wouldn’t be able to stop them…

Well, feel free to play around with sys-pxe. It would serve that purpose quite well, while still allowing that machine to remain “usable”. :slight_smile:

Thanks for sharing your expertise, @alzer89! :slight_smile:

Agreed. Separate client computers would have their own separate AppVMs.

I think loading the entire thing into the client computer’s RAM would be best. That’s what I’m currently doing with Qubes in tmpfs. Hopefully, I could then even cut off the network connection to the PXE server once Qubes dom0 is loaded into the client computer’s RAM.

Yeah, I’m aware of the old Qubes Live 3.1 ISO, and wouldn’t want to rely upon that as you say.

Wow, that is awesome. For some reason, I falsely associated PXE booting with use of ISOs, but this flexibility of where to load the OS from is awesome.

I see. If PXE is limited to serve one client computer at a time, then I would just try duplicating the PXE software setup many times on the PXE server, running each PXE server instance in a separate qube with a different LAN IP on the network. So PXE-1 serves Client-1, PXE-2 serves Client-2, PXE-3 serves Client-3, etc.

Yes.

I think there are some subtle security benefits, although maybe not so important to most people.

The primary feature gained is that no OS state is saved, if using a client-RAM-based OS image.

Benefits:

  • Prevents drive firmware attacks / drive firmware persistence on the client computers. Reduces the storage-drive footprint down to my PXE server and my file server.

  • Enables greater anti-forensics on the client computers (which can be important even in common situations, like civil lawsuits where everyday lawyers try to subpoena your drives, or hackers planting stuff on them remotely and SWATing you, etc.). Fewer drives to manage and worry about as potential liabilities with something nasty sitting on them that I can’t conveniently audit: the storage-drive footprint shrinks to my PXE server and my file server, rather than also having dozens of endpoint computers containing storage drives.

  • Adds the convenience of quickly wiping to a known clean OS state on client computers, which could then easily be done many times per day, compared to a major pain that hardly ever gets done otherwise. Like having DispVM capability for the entire Qubes OS running on your machine: just reboot to a fresh read-only instance of RAM-based Qubes OS provided from the PXE server, 99.9% trustworthy as clean. I would never reinstall Qubes OS multiple times per day to get to a known clean state, but network booting could make this easy.

Huge convenience, yes! :slight_smile:

Having 2 or 3 network cards per client computer might be necessary, where one NIC is sacrificed for the PXE network boot of the Qubes OS, which is okay for me.

However, if a RAM-based Qubes is entirely loaded into the client’s RAM first, before Xen/Qubes dom0 boots up, then maybe the connection to the PXE server can be cut off once the Qubes OS is loaded into client RAM, and the NIC could then be used normally by Qubes in a sys-net? It appears that with Qubes in tmpfs, dom0 is loaded into RAM like this before Xen/Qubes dom0 boots, so maybe it wouldn’t need any connection/resources/etc. tied back to the PXE server once loaded and booted entirely into the client computer’s RAM?

Yes! I am game with you for achieving this setup! :smiley: I have several duplicate machines ready to go for pursuing this setup.

Yes, I will be looking to encrypt by default. Thanks for the reminder. I also use physical network segmentation in addition, because I don’t like mixing cross-purpose or cross-security-domain stuff into the same physical network.

Yes, I have several different independently segmented networks setup here, and multiple physical NICs on separate sys-nets per Qubes machine.

Wow, 6 weeks. That’s pain. :astonished:

Presently, with Qubes in tmpfs, I have made custom scripts that restore persistent AppVM configuration and personal file access, upon startup of Dom0 and AppVMs.

If need be, I would just try duplicating the PXE software setup many times on the PXE server, running each PXE server instance in a separate qube with a different LAN IP on the network. So PXE-1 serves Client-1, PXE-2 serves Client-2, PXE-3 serves Client-3, etc.

Also, I could have the PXE server write-protect the Qubes OS install or, if need be, just overwrite it regularly.

Through a combination of tactics, it seems feasible to create a RAM-based Qubes OS PXE boot setup, for multiple client computers, that does not have to change with each reboot.

Nooooo way. Wouldn’t let that happen. Their machines can’t touch the file server shares my work VM contents are stored on. Physically isolated network. Also, their own AppVMs would have separate credentials (per PC and per VM) to access different shares on the file server, depending upon whose specific machine and which specific AppVM is accessing the file server.


@alzer89, what do you think is the best way to make a RAM-based Qubes OS ready to use in sys-pxe?

Maybe this would work…

    1. Install Qubes OS normally onto a drive or RAW image.
    2. Add Qubes in tmpfs modifications to that installation.
    3. Copy that drive or RAW image to the sys-pxe server.
    4. Point the PXE to this volume or RAW image file containing Qubes in tmpfs.

Then expect the client computer to boot Qubes in tmpfs and run Dom0 from the client computer’s RAM?
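The “copy that drive to the server” step might look like this (SRC here is a scratch file so the commands are safe to run as-is; on a real machine it would be the installed block device, e.g. /dev/sda, and the destination would live on the sys-pxe server):

```shell
SRC=./demo-source.img   # stand-in for the installed Qubes drive (e.g. /dev/sda)
DST=./qubes-tmpfs.raw   # raw image to hand to the sys-pxe server

# Stand-in source; skip this line when SRC is a real block device.
dd if=/dev/urandom of="$SRC" bs=1M count=4 status=none

# Block-for-block copy of the installed system into a raw image.
dd if="$SRC" of="$DST" bs=4M conv=sparse status=none

# Verify the image before serving it over NFS/PXE.
cmp "$SRC" "$DST" && echo "image verified"
```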

Hahaha. No no no no no. My explanations are becoming more and more ambiguous. My bad :stuck_out_tongue_closed_eyes:

I meant the initial loading of the xen.gz, vmlinuz, and initrd.img files. The PXE server seems unable to serve those files to multiple machines in parallel.

Once you serve the ISO over another protocol (I used NFS), there’s no bottleneck whatsoever :slight_smile:

At least, that’s what I encountered in my testing of getting 6 legacy boot laptops and 6 UEFI boot laptops to PXE boot the Qubes ISO simultaneously…

Maybe there’s a config I haven’t set up that allows this…maybe…:thinking:

I would respectfully argue that these attacks are just moved to a different place… :face_with_raised_eyebrow:

Again, you’d still have the same drive requirements, but they would be in a different place. Instead of being locally connected to the target machine, they’d be inside another machine.

This means you’d have to accommodate TWO attack vectors:

  • Firmware attacks on TWO devices:
    • The drive that is hosting your Qubes OS boot image, as well as the actual machine that this drive is connected to
    • Your network stack on both your client and server, as well as any intermediary network infrastructure

Many would argue that this is more cumbersome than connecting a physical drive to a SATA/NVMe/PCI slot on a motherboard, but it appears that this approach better suits your use case.

Just be sure to factor all of this in when you use it :slight_smile:

Ok, I’ll give you that one :stuck_out_tongue:

Another good point :stuck_out_tongue:

Can be a blessing and a curse simultaneously, but I’ll give you that one too :wink:

Assuming the “known good state” Qubes OS install that you’re booting from stays in “mint condition”, but yes, I’ll give you that one too :sunglasses:

Was about to say this, but you beat me to it :laughing:

There is potential for VLANs to alleviate some of these issues, allowing you to use a single NIC, but I’d need to investigate a lot deeper into this than I already have…

I don’t want to give you any wrong information :laughing:

That would definitely be possible, but if you were to do that to a current Qubes OS install without any modifications, do you realise how much RAM you’d actually need…? :astonished:

The dom0 partition in the current standard Qubes OS install is at least 24GB.
That means:

  • You’d lose at least 24GB of the client’s RAM to a dom0 ramdisk
    • This would only really be viable on server motherboards with a billion and one RAM slots (mad respect if you have such a motherboard, I’m super jealous!)
    • Your options for doing this are basically including a complete dom0 in the initrd.img, essentially making the initramfs 25GB, which would likely take 30-90+ minutes to serve via PXE
  • Copying dom0 into RAM via NFS would be quicker, but would require a fair amount of “hacky shenanigans” to accomplish
    • Most of the work has been done with what you’ve already tinkered with regarding running Qubes OS entirely in RAM

Excellent.

Ok, so then that’s a potential methodology for PXE.

You’re starting to get the hang of PXE. Nice :slight_smile:

So then I guess you’ve decided that each machine gets their own Qubes OS install on the server :slight_smile:

Off the top of my head, I would say:

  1. PXE serve xen.gz, vmlinuz, and initrd.img
  • Have the initrd.img contain a complete read-only dom0

OR

  • Have a dom0 served via NFS as an image file once the initrd.img was served via PXE, allowing faster boot times of client machines, particularly if multiple clients are booting simultaneously
  2. Have Templates served via NFS
  • Allows on-demand loading/unloading of templates and AppVMs
  • Templates can be loaded by multiple clients simultaneously, provided they are read-only, avoiding clients “fighting over control of the templates”
  3. Serve AppVMs (or at least configuration files for AppVMs) via NFS
  • Allows each client to have their own persistent configuration
  4. Serve some AppVMs via VNC
  • Allows some AppVMs to run on the server
  • Can be useful when running Qubes OS on a very underpowered client, and you need those extra system resources
  5. Serve a user database via LDAP or PAM
  • Would allow all clients to boot from the same Qubes install, log in with their unique username and password, and have all their configs and custom AppVMs load on any client machine
  • Optional (and is likely not as straightforward as I am describing it, so don’t get your hopes up just yet :stuck_out_tongue:)
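The NFS side of that layout could be declared something like this (paths, subnet, and client IPs are all hypothetical, and the sketch writes to a local file rather than /etc/exports so it’s safe to try):

```shell
# Hypothetical export layout: templates read-only for every client,
# one read-write directory per client machine.
cat > ./exports.example <<'EOF'
/nfs-exports/templates    192.168.100.0/24(ro,no_subtree_check)
/nfs-exports/clients/c1   192.168.100.2(rw,no_subtree_check,root_squash)
/nfs-exports/clients/c2   192.168.100.3(rw,no_subtree_check,root_squash)
EOF

# On the real server this content would live in /etc/exports, followed by:
#   exportfs -ra
grep -c '^/nfs-exports' ./exports.example   # prints 3
```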

But in any case, you are definitely onto something, and I propose forking this thread to better facilitate this endeavour :slight_smile:

@alzer89 this has been forked to a sub-topic thread here:

RAM-based Qubes OS over PXE Network Boot

Thanks!