Automated Qubes OS Installation using Kickstart and/or PXE Network Boot

What are you going to do about all the VMs and templates?

You’re basically describing PXE booting the TAILS ISO…

Not entirely accurate. The Qubes builder does do this, but you have to do a little tweaking, and get on the stable branch of everything…

Also not entirely true. You can boot from almost anything. HTTPS, HTTP, NFS, CIFS, and more. You just need to specify it.

Unnecessary.

It’s insane from a security point of view, unless you own and completely control every single piece of network infrastructure between your machine and the boot drive.

It would also not really scale very well if you wanted more than one machine to use the same Qubes OS install…

@qstateless, what workflows? I’m genuinely curious…

Remember that:

  • NFS shares are unencrypted by default
  • You’re at the mercy of your network bandwidth and latency
  • Having more than one client boot from the same network boot drive with persistence simultaneously usually causes the client machines to destroy it (logs, config files, different dates and times in the files, etc.)
  • The serving machine of this boot drive is the single point of failure/pwnage in all of this

I mean, yeah, there’s probably a use case I haven’t thought of, but what is it? :joy:

I still haven’t figured out how and where to put the config to tell firstboot to use the second kickstart config. I will report back once I have figured it out.


Hi @renehoj

I would probably update/config the Templates centrally and rebake them into a new OS image regularly to PXE network boot.

I would store the VM files on a different local network file server, where each client computer could access its own unique Qubes VM stuff, depending upon credentials used, etc.

Should all be doable with similar tactics I’ve contorted Qubes with before.

@alzer89

Except that Qubes is far beyond Tails with its security isolation measures, which I rely on.

I was aware that the Qubes Builder had a Live configuration, but didn’t think it would be easy to get it working. Awesome if it is easy.

Pardon me, I probably stated this the wrong way. I know PXE can boot from these protocols. However, I was wondering about the loading of the OS image by the PXE boot process… Does the OS need to be remotely loaded from an .ISO (iso9660) formatted image? For example, you seem to use the Qubes Installation .ISO for your PXE setup. Alternatively, could I network boot an existing Qubes OS installed on a drive thin pool, for example? If the PXE process needs a .ISO to boot from, then this is why I am interested in converting an existing Qubes OS into an .ISO (iso9660) formatted image.

No security problems. I own, control, and tightly secure all hardware on my LAN, including networking hardware.

Why not? I’m doing it now with drive cloning and it is a total PITA. Managing one central Qubes OS image and network booting several computers from it seems like it would be a breath of fresh air compared to dealing with drive cloning.

Two that I can think of for myself.

  1. My job legally requires me to work with files that need total containment. DispVMs are not always strong enough. Sometimes I need to use a dedicated machine where I’ve just done a clean install of Qubes OS, and wipe it clean afterwards. I currently maintain a central Qubes OS installation on a drive and do regular drive cloning (which takes hours, over and over again). Getting rid of the OS drives in the client computers and network booting Qubes OS instead would save me a lot of time and effort on a regular basis.

  2. I’d like to run myself and my family closer towards a stateless setup. There are several computers in our home running Qubes OS, but it is an IT nightmare to maintain them all and keep the internet’s crap out of them. Instead, I’d rather have a central Qubes OS image that is network booted by all PCs in the home and store all personal files/state on a local file server, where individual qubes have different default permissions, etc to different parts of the file server. No storage drives would be kept in the personal/family computers. This way, a reboot more deeply ensures a clean state, without needing to do any drive cloning or reinstalls. I could just maintain as little as one central Qubes OS image this way.

That’s ok. I can use something else encrypted, or just leave unencrypted, as I’d use a dedicated/isolated LAN exclusively for PXE booting and nothing else.

Yes, I haven’t tested it, but I’ve put up with USB 2.0 drive booting before, so hopefully Gigabit LAN would be acceptable for my purposes.

I guess I don’t understand this. I was assuming a RAM-based Qubes would allow for network booting from some type of read-only source, like a .ISO. Like network booting the Tails ISO from multiple client computers. Not sure how something gets destroyed by the client machines.

Yes, I would be sure to have the Qubes OS image on an offline, segmented, LAN-only Qubes OS box whose only job is to be the PXE/OS server on this isolated LAN for my other client computers.

Thanks @alzer89! :slight_smile:

Actually, you could do the templates like this, given that they would be mounted read-only inside AppVMs on the client machines.

I was about to say that this would likely be the easiest and most straightforward way for you to achieve what you want.

It would get particularly messy if you wanted multiple clients to be able to interact with the same AppVMs simultaneously, and there would likely be a lot of unforeseen implosions and the like that I wouldn’t be able to anticipate at this point…

100% correct.

But things start to get a little restrictive if you want to boot a read-only ISO. You basically either load the entire thing into RAM, turning your RAM into a boot drive; or you load it into RAM of a network server and “transmit” the parts the client machine needs on demand…

The 3.1 version of Qubes had a LiveUSB ISO, but it ended up being paravirtualised, making it potentially problematic to rely on, especially if you regularly used it on a variety of hardware configurations…

In all honesty, it would be easier to install a fresh Qubes OS to a RAW disk image (like the kind that QEMU uses) and use that file as the boot drive.

Hahaha. No, it doesn’t :slight_smile:

I just used the ISO image because it achieved what I was trying to achieve, which was to create a way to “deploy” an automated Qubes OS install.

Basically, PXE boot needs to load the xen/kernel/initrd first, and then root= states where to find the OS drive.

So that parameter on most current Qubes OS installs is root=/dev/sda3, root=/dev/nvme0n1p3 or root=UUID=s0m3-r4nd0m-uu1d.

But you could easily change it to root=nfs:192.168.69.69:/nfs-exports/laptops/wife-laptop if you wanted to :slight_smile:

Or even root=nfs:awesome-qubes-os-share.example.com:/ninja-turtles/michelangelo/cowabunga-dude if you were game enough to do it over WAN :grimacing:
(with encryption, obviously…)
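
For reference, a minimal pxelinux menu entry along those lines might look something like this (the IP and NFS path are just the example values from above, the file names are generic, and the client’s initramfs would need NFS support, e.g. dracut’s nfs module, for root=nfs to work):

DEFAULT qubes-nfs
LABEL qubes-nfs
    # load Xen first, then the dom0 kernel and initramfs (multiboot via mboot.c32)
    KERNEL mboot.c32
    APPEND xen.gz console=vga --- vmlinuz root=nfs:192.168.69.69:/nfs-exports/laptops/wife-laptop ro --- initrd.img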

Damn straight you can :wink:

I know. I was just thinking ahead in case you did get pwned. Like, if something nasty got into your network, what they could and couldn’t see, what they could and couldn’t do, and how you could protect yourself…

Because you’d need to make sure that each of the client machines wouldn’t be running duplicate instances of the same daemons, messing with .so files, system logs, file permissions, and the like.

You can fix this by having a separate image/drive for each machine, but that means you’re just moving the hard drive to another computer…

PXE can also only serve one machine at a time, so there would be a long queue. Thankfully NFS doesn’t have this bottleneck, otherwise it would be a nightmare…

Fair point. I hadn’t envisaged that as a use case… :upside_down_face:

Well this can already be done with PXE booting a unique disk image for each machine (or restricting the disk image to one machine at a time)…
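
For what it’s worth, pxelinux already makes the per-machine split fairly painless: it looks for a config file named after the client’s MAC address (prefixed with 01-) before falling back to default, so each machine can be pointed at its own disk image. A rough sketch (MAC addresses and paths are made up):

pxelinux.cfg/01-aa-bb-cc-dd-ee-01    # entry for Client-1, pointing root= at its own image
pxelinux.cfg/01-aa-bb-cc-dd-ee-02    # entry for Client-2, pointing root= at a different image
pxelinux.cfg/default                 # fallback entry for everything else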

Mind you, this wouldn’t really provide any “security” benefits over having the drive locally, other than sounding really cool to your boss :sunglasses:
It would be more of a convenience for you.

However:

  • Your Qubes OS install would be tied to another machine via network
    • I have no idea how sys-net would react to having the machine booted via PXE, and then having the same NIC passed into a VM…
    • It would probably be similar to booting Qubes OS from a USB drive, and then opening sys-usb: OS implosion :crazy_face:
  • Your NIC would continually be inside dom0 in some way, shape or form (your dom0 would need to come from somewhere, right…?)
    • Depending on how your PXE firmware was loaded, you may have a gaping hole in your Qubes OS install
    • I’d be curious to try it, though :slight_smile:

Tell me about it. I’ve resorted to KVM, and even that is hard to automate…

The thing is that PXE booting’s best use case is to load read-only boot images, such as installers, rescue ISOs, and anything else that isn’t meant to have any meaningful work done in them. Once you start throwing in persistence, it gets a little…messy…

But there definitely is a way :slight_smile:

Well, dom0 could be a read-only ISO, and the VMs could be NFS/CIFS/SSHFS (or SSHFS OVER TOR *mind blown*) shares. That would work, and allow the dom0 to load into RAM…

The only thing is, this would require every single one of your machines to constantly have an ethernet cable plugged into it. If you’re ok with that, then let’s make this happen :slight_smile:

NO! :rage:

  • What if a family member brings home a friend who connects their IoT device to the wifi, and it starts running wireshark on your LAN?
  • What if your partner gets you a Google Nest or Amazon Alexa and plugs that in, and it starts poking and prodding everything, including your Qubes shares?
  • Some other cross-contamination example of someone with good intentions inadvertently pwning you

Encrypt it, please :upside_down_face:

Then you’d need a secondary LAN for internet access (and I’m sure you’ve got a plan for that :wink: )

I’m currently restoring a Compaq Evo N150c with Gentoo, and the thing takes 3 minutes to fully boot (and the record for a complete world update was SIX WEEKS).

I feel your pain…

Well, that was probably what @renehoj was asking about. The configurations for the VMs will have to come from somewhere. Will you bake them into the ISOs (meaning adding and removing VMs would be impossible), or would you allow them to change, bearing in mind that if one client changed them, those changes would show up on the next reboot of any other client machine that PXE booted?

For example, your family members could purge your work VMs, and you wouldn’t be able to stop them…

Well, feel free to play around with sys-pxe. It would serve that purpose quite well, while still allowing that machine to remain “usable”. :slight_smile:

Thanks for sharing your expertise, @alzer89! :slight_smile:

Agreed. Separate client computers would have their own separate AppVMs.

I think loading the entire thing into the client computer’s RAM would be best. That’s what I’m currently doing with Qubes in tmpfs. Hopefully, I could then even cut off the network connection to the PXE server once Qubes Dom0 is loaded into the client computer’s RAM.

Yeah, I’m aware of the old Qubes Live 3.1 ISO, and wouldn’t want to rely upon that as you say.

Wow, that is awesome. For some reason, I falsely associated PXE booting with use of ISOs, but this flexibility of where to load the OS from is awesome.

I see. If PXE is limited to serve one client computer at a time, then I would just try duplicating the PXE software setup many times on the PXE server, running each PXE server instance in a separate qube with a different LAN IP on the network. So PXE-1 serves Client-1, PXE-2 serves Client-2, PXE-3 serves Client-3, etc.

Yes.

I think there are some subtle security benefits, although maybe not so important to most people.

The primary feature gained is the lack of any saved OS state when using a client-RAM-based OS image.

Benefits:

  • Prevents drive firmware attacks / drive firmware persistence on the client computers. Reduces the storage drive footprint down to my PXE server and my file server.

  • Enables greater anti-forensics on the client computers (which can be important even in common situations like civil lawsuits where everyday lawyers try to subpoena your drives, or hackers plant stuff on them remotely and SWAT you, etc.). Fewer drives to manage and worry about as potential liabilities with something nasty sitting on them that I can’t conveniently audit. Reduces the storage drive footprint down to my PXE server and my file server, rather than also having dozens of endpoint computers containing storage drives.

  • Adds the convenience of quickly wiping client computers to a known clean OS state, which could then easily be done many times per day, compared to a major pain that hardly ever gets done otherwise. It’s like having DispVM capability for the entire Qubes OS running on your machine: just reboot to a fresh read-only instance of RAM-based Qubes OS provided by the PXE server, 99.9% trustworthy as clean. I would never reinstall Qubes OS multiple times per day to get back to a known clean state, but network booting could make this easy.

Huge convenience, yes! :slight_smile:

Having 2 or 3 network cards per client computer might be necessary, where one NIC is sacrificed for the PXE network boot of the Qubes OS, which is okay for me.

However, if a RAM-based Qubes is entirely loaded into the client’s RAM before Xen/Qubes Dom0 boots up, then maybe the connection to the PXE server could be cut off once the OS is loaded into client RAM, and the NIC could then be used normally by Qubes in a sys-net? With Qubes in tmpfs, it appears that Dom0 is loaded into RAM like this before Xen/Qubes Dom0 boots, so maybe it wouldn’t need any connection or resources tied back to the PXE server once it has loaded and booted entirely from the client computer’s RAM?

Yes! I am game with you for achieving this setup! :smiley: I have several duplicate machines ready to go for pursuing this setup.

Yes, I will be looking to encrypt by default. Thanks for the reminder. I also use physical network segmentation in addition, because I don’t like mixing cross-purpose or cross-security-domain stuff into the same physical network.

Yes, I have several different independently segmented networks setup here, and multiple physical NICs on separate sys-nets per Qubes machine.

Wow, 6 weeks. That’s pain. :astonished:

Presently, with Qubes in tmpfs, I have made custom scripts that restore persistent AppVM configuration and personal file access, upon startup of Dom0 and AppVMs.

If need be, I would just try duplicating the PXE software setup many times on the PXE server, running each PXE server instance in a separate qube with a different LAN IP on the network. So PXE-1 serves Client-1, PXE-2 serves Client-2, PXE-3 serves Client-3, etc.

Also, I could have the PXE server write-protect the Qubes OS install or, if need be, just overwrite it regularly.

Through a combination of tactics, it seems feasible to create a RAM-based Qubes OS PXE boot setup, for multiple client computers, that does not have to change with each reboot.

Nooooo way. I wouldn’t let that happen. Their machines can’t touch the file server shares my work VM contents are stored on. Physically isolated network. Also, their own AppVMs would have separate credentials (per PC and per VM) to access different shares on the file server, depending upon whose specific machine and which specific AppVM is accessing the file server.


@alzer89, what do you think is the best way to make a RAM-based Qubes OS ready to use in sys-pxe?

Maybe this would work…

    1. Install Qubes OS normally onto a drive or RAW image.
    2. Add Qubes in tmpfs modifications to that installation.
    3. Copy that drive or RAW image to the sys-pxe server.
    4. Point the PXE to this volume or RAW image file containing Qubes in tmpfs.

Then expect the client computer to boot Qubes in tmpfs and run Dom0 from the client computer’s RAM?
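
If it helps picture it, steps 1 and 3 might look roughly like this on the command line (the image size, paths, and hostname are all placeholders, and the install itself would happen by attaching the image to a VM or a spare machine):

qemu-img create -f raw /srv/images/qubes-tmpfs.img 64G           # blank RAW image to install Qubes OS into
# ...install Qubes OS into the image, then apply the Qubes in tmpfs modifications...
rsync -avP /srv/images/qubes-tmpfs.img pxe-server:/srv/images/   # copy the finished image to the sys-pxe server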

Hahaha. No no no no no. My explanations are becoming more and more ambiguous. My bad :stuck_out_tongue_closed_eyes:

I meant the initial loading of the xen.gz, vmlinuz and initrd.img files. The PXE server seems unable to serve those files to multiple machines in parallel.

Once you serve the ISO over another protocol (I used NFS), there’s no bottleneck whatsoever :slight_smile:

At least, that’s what I encountered in my testing of getting 6 legacy boot laptops and 6 UEFI boot laptops to PXE boot the Qubes ISO simultaneously…

Maybe there’s a config I haven’t set up that allows this…maybe…:thinking:
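
For anyone wanting to replicate the NFS part, serving the installer ISO over NFS is essentially just a loop mount plus an export (the ISO name, paths, and subnet below are placeholders):

mkdir -p /srv/nfs/qubes-iso
mount -o loop,ro /srv/isos/Qubes-installer.iso /srv/nfs/qubes-iso
echo '/srv/nfs/qubes-iso 192.168.69.0/24(ro,no_subtree_check)' >> /etc/exports
exportfs -ra    # re-export everything in /etc/exports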

I would respectfully argue that these attacks are just moved to a different place… :face_with_raised_eyebrow:

Again, you’d still have the same drive requirements, but they would be in a different place. Instead of being locally connected to the target machine, they’d be inside another machine.

This means you’d have to accommodate TWO attack vectors:

  • Firmware attacks on TWO devices: the drive that is hosting your Qubes OS boot image, as well as the actual machine that drive is connected to
  • Your network stack on both your client and server, as well as any intermediary network infrastructure

Many would argue that this is more cumbersome than connecting a physical drive to a SATA/NVMe/PCI slot on a motherboard, but it appears that this approach better suits your use case.

Just be sure to factor all of this in when you use it :slight_smile:

Ok, I’ll give you that one :stuck_out_tongue:

Another good point :stuck_out_tongue:

Can be a blessing and a curse simultaneously, but I’ll give you that one too :wink:

Assuming the “known good state” Qubes OS install that you’re booting from stays in “mint condition”, but yes, I’ll give you that one too :sunglasses:

Was about to say this, but you beat me to it :laughing:

There is potential for VLANs to alleviate some of these issues, allowing you to use a single NIC, but I’d need to investigate a lot deeper into this than I already have…

I don’t want to give you any wrong information :laughing:
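
(For what it’s worth, carving a tagged VLAN off a single NIC on Linux is usually just a couple of ip commands like the ones below; the interface name, VLAN ID, and addressing are placeholders, and whether this plays nicely with PXE firmware and sys-net is exactly the part that would need testing.)

ip link add link eth0 name eth0.42 type vlan id 42    # create VLAN 42 on top of eth0
ip addr add 192.168.42.10/24 dev eth0.42              # give the tagged interface an address
ip link set dev eth0.42 up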

That would definitely be possible, but if you were to do that to a current Qubes OS install without any modifications, do you realise how much RAM you’d actually need…? :astonished:

The dom0 partition in the current standard Qubes OS install is at least 24GB.
That means:

  • You’d lose at least 24GB of the client’s RAM to a dom0 ramdisk
    • This would only really be viable on server motherboards with a billion and one RAM slots (mad respect if you have such a motherboard, I’m super jealous!)
    • Your options for doing this are basically including a complete dom0 in the initrd.img, essentially making the initramfs 25GB, which would likely take 30-90+ minutes to serve via PXE
  • Copying dom0 into RAM via NFS would be quicker, but would require a fair amount of “hacky shenanigans” to accomplish
    • Most of the work has been done with what you’ve already tinkered with regarding running Qubes OS entirely in RAM
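
Those “hacky shenanigans” might boil down to something like this from inside the initramfs (a very rough sketch only; the server IP, image name, and tmpfs size are made up, and wiring the loop device into the real root-mount step is the fiddly part):

mkdir -p /run/nfs /run/ramroot
mount -t nfs -o ro 192.168.69.69:/srv/images /run/nfs    # needs NFS client support in the initramfs
mount -t tmpfs -o size=30G tmpfs /run/ramroot
cp /run/nfs/qubes-dom0.img /run/ramroot/                 # pull the whole dom0 image into RAM
losetup /dev/loop0 /run/ramroot/qubes-dom0.img           # then hand /dev/loop0 to the normal root-mount logic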

Excellent.

Ok, so then that’s a potential methodology for PXE.

You’re starting to get the hang of PXE. Nice :slight_smile:

So then I guess you’ve decided that each machine gets their own Qubes OS install on the server :slight_smile:

Off the top of my head, I would say:

  1. PXE serve xen.gz, vmlinuz, and initrd.img
  • Have the initrd.img contain a complete read-only dom0

OR

  • Have a dom0 served via NFS as an image file once the initrd.img was served via PXE, allowing faster boot times of client machines, particularly if multiple clients are booting simultaneously
  2. Have Templates served via NFS (see the /etc/exports sketch after this list)
  • Allows on-demand loading/unloading of templates and AppVMs
  • Templates can be loaded by multiple clients simultaneously, provided they are read-only, avoiding clients “fighting over control of the templates”
  3. Serve AppVMs (or at least configuration files for AppVMs) via NFS
  • Allows each client to have their own persistent configuration
  4. Serve some AppVMs via VNC
  • Allows some AppVMs to run on the server
  • Can be useful when running Qubes OS on a very underpowered client, and you need those extra system resources
  5. Serve a user database via LDAP or PAM
  • Would allow all clients to boot from the same Qubes install, log in with their unique username and password, and have all their configs and custom AppVMs load on any client machine
  • Optional (and likely not as straightforward as I am describing it, so don’t get your hopes up just yet :stuck_out_tongue:)
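
To make steps 2 and 3 a little more concrete, the NFS side could be as simple as an /etc/exports along these lines (the subnet, client IPs, and paths are all made up):

/srv/qubes/templates        192.168.69.0/24(ro,no_subtree_check)     # templates: read-only, shared by every client
/srv/qubes/appvms/client-1  192.168.69.11(rw,sync,no_subtree_check)  # per-client read-write AppVM storage
/srv/qubes/appvms/client-2  192.168.69.12(rw,sync,no_subtree_check)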

But in any case, you are definitely onto something, and I propose forking this thread to better facilitate this endeavour :slight_smile:

@alzer89 this has been forked to a sub-topic thread here:

RAM-based Qubes OS over PXE Network Boot

Thanks!

@alzer89

How’s resolving the development issues with sys-pxe coming?

Would you like to point out the specific parts of the code that aren’t working, so that specific coding solutions could potentially be identified by others?

Maybe a quick How To tutorial on running your salt files could encourage more of us to jump in with using, testing, & contributing?

The qvm.clone call that clones the fedora-36-minimal template and renames it is the only part that isn’t working. All other parts seem to work.

Any help would be massively appreciated.

@alzer89 Ok, so is the following salt code within the sys-pxe-template.sls file what you are specifically using to attempt cloning and renaming the fedora-36-minimal template (but it’s not working for some unknown reason)?

I noticed that code is commented out…

sys-pxe-template.sls

Yes, because it failed (most likely because it’s not the appropriate way to issue a clone command).

In the testing scripts of Salt installed on every Qubes OS machine, there is a qvm.clone command, and this function would work perfectly…if I could get it to work.

That is the trouble I am having…

@alzer89

How about something like this?

qvm-clone-id:
  qvm.clone:
    - name: sys-pxe-template
    - source: fedora-36-minimal

Reference:


I tried that, but it threw an error. My guess is that I hadn’t configured something properly.

I can see references to those functions in /srv/formulas/test, but not in base.

But yes, that should work…

Where are the salt files to be found?
How are you calling salt?

I’m still learning Salt, but I’d try this

include:
  - sys-pxe-template

qvm-clone-id:
  qvm.clone:
    - require:
      - sls: sys-pxe-template
    - name: sys-pxe
    - source: fedora-36-minimal

As I understand it, sys-pxe-template.sls should be in the parent directory of this file, meaning in /srv/salt.


They’re all in /srv/salt. Should they be somewhere else?

qubesctl top.enable sys-pxe
qubesctl --all state.highstate

Thank you @unman!

I meant, where is the source?
Are they on GitHub?

@unman: to answer your question, @alzer89’s source salt files for sys-pxe are not on GitHub, but actually inside of this web forum thread at post #8.

There are 4 source salt files, which are apparently placed in dom0’s /srv/salt

  • sys-pxe.top
  • sys-pxe-template.sls
  • sys-pxe-vm.sls
  • sys-pxe.sls

Executed with:

qubesctl top.enable sys-pxe
qubesctl --all state.highstate

I believe the non-working qvm.clone command is supposed to be in the sys-pxe-template.sls file, with an alternative tested version of the command presently commented out in that provided salt file.

Thank you @unman for your assistance in solving this salt puzzle for this very interesting sys-pxe community project.

Here is a link to and quote of @alzer89’s post with the 4 original sys-pxe salt files:

https://forum.qubes-os.org/t/automated-qubes-os-installation-using-kickstart-and-or-pxe-network-boot/12963/8

Full quote of source post:

Thank you for this.
I’ll try to take a look this evening.