1.0 Wizards/GUI
1.1 GUI for an internal network of qubes with virtual switches, routing and so on.
Open vSwitch is your friend, and one needs to look for a soft-switch GUI in Python.
1.1.1 DHCP & TFTP server configurable for installing guest systems via PXE.
Cool: PXE-boot a disposable VM like Tails, or an installer like OpenBSD.
Support for PXE menu art and the like.
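The 1.1.1 idea maps fairly directly onto dnsmasq, which can do DHCP, TFTP and PXE boot in one daemon. A hypothetical snippet such a wizard could generate — the interface name, address range and boot-file names here are made-up examples, not anything Qubes ships:

```
# dnsmasq sketch for a PXE install service on the internal virtual network
interface=vif-internal                      # hypothetical internal bridge/vif
dhcp-range=10.137.100.50,10.137.100.150,12h # example address pool
enable-tftp
tftp-root=/srv/tftp                         # holds pxelinux.0, menus, images
dhcp-boot=pxelinux.0                        # BIOS clients
dhcp-match=set:efi,option:client-arch,7     # tag x86-64 UEFI clients
dhcp-boot=tag:efi,bootx64.efi               # UEFI clients get an EFI loader
```

A PXE menu (the "menu art" part) would then live in the TFTP root, e.g. as a pxelinux configuration.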
1.1.2 VPN wizard embedded with openvswitch/dhcp/tftpd.
1.1.x executive summary: have a virtual Raspberry Pi to control a virtual VLAN switch.
2.0 Grub and Uncle Xen
Have Xen configurable via Grub: select different Xen hypervisors with different features (nested VT-d/hypervisors for Proxmox or Genymotion guests; less strict levels 1 to n, so old CPUs can be used).
Boot-parameter wizard in a submenu.
2.1 Grub: ship memtest86, a PCI bus scanner, and a Kali RAM-disk edition to clean up smoking debris.
2.2 Airport mode: the default if you don't enter a password or hit a special configurable key at boot time (the BIOS reads track 0, head 0, sector 0).
Grub boots Qubes with airport-mode arguments by default, so you see a Windows you provide while sys-usb and sys-net are "debugging" it the clandestine way.
3.0 Boot-up: disk mantra: ship an on-screen keyboard that shuffles its buttons every time, so one can mix physical-keyboard and touch/mouse input for mantra entry and a keyboard sniffer does not capture it all.
3.1 Airport mode: boot a native VM with Windows XP or Windows 8 and hide Qubes; sys-usb sniffs everything, and sys-net is promiscuous and nosy. Maybe we find interesting blobs.
2&3 Executive summary: make bootstrapping great again.
4.0 Forensics mode for Qubes Manager, or a special "forensics console":
4.1 Freeze a running qube at a button press.
4.2 Dump the RAM and storage of the frozen VM into a tarball.
4.3 Sniff traffic at the VM and dump it into a pcap file named qubename_traffic_starttime_stoptime_interfacename.pcap. Sniffing is started and stopped by pressing a button in Qubes Manager or the "forensics console".
All forensics "evidence" is stored in dom0, in a forensics folder.
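Under the hood, the buttons above could plausibly wrap standard Xen tools available in dom0. A minimal sketch, assuming `xl` and `tcpdump` are present there; the qube name, interface and evidence folder are placeholders, and the privileged commands are shown as comments rather than executed:

```shell
#!/bin/sh
# Sketch of the proposed forensics-console actions (placeholders throughout).

QUBE=untrusted
IFACE=vif7.0                  # hypothetical backend interface of the qube
EVIDENCE="$HOME/forensics"    # hypothetical dom0 evidence folder

# 4.3: build the proposed qubename_traffic_start_stop_interface.pcap name
pcap_name() {
    printf '%s_traffic_%s_%s_%s.pcap' "$1" "$2" "$3" "$4"
}

# 4.1: freeze on button press      -> xl pause "$QUBE"
# 4.2: dump RAM of the frozen VM   -> xl dump-core "$QUBE" "$EVIDENCE/$QUBE.ram"
# 4.3: capture between two presses -> tcpdump -i "$IFACE" -w "$EVIDENCE/<name>"

pcap_name "$QUBE" 20240101T0900 20240101T0930 "$IFACE"
# -> untrusted_traffic_20240101T0900_20240101T0930_vif7.0.pcap
```

Dumping a qube's *storage* (4.2) would additionally mean copying its volumes from the dom0 storage pool, which is more involved than a single command.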
I agree. I went through the process of building a custom kernel recently in a Debian VM, and while getting it to be available in the Qubes Manager dialog box wasn't traumatizing, it wasn't streamlined either.
If there were interest in adding this, I think the most reasonable way to do it initially might be as an experimental feature that accepts the .deb files containing the kernel and modules as inputs, then automatically handles unpacking them to the right places and generating the initramfs.
There is some documentation on how to do this manually, but it's not in step-by-step format.
I think the best way to do this would be to use pvgrub2-pvh, but have dom0 be able to select a boot menu entry for the guest to use. This avoids dom0 needing to parse the guest-provided kernel image in order to load it. Guests would also be able to advertise boot menu options to appear in the UI.
I defer to your judgement there. Now that I've done it, I can see a few reasons why it's probably not ideal for most users to do it the way I did, and it's probably more complex than it needs to be to implement.
There are some benefits to doing it at the dom0 level, though, if I understand how the alternative works:
The kernel, LKMs and initramfs are immutable to the VM, I think, since they're blobs in dom0 (I may be incorrect).
It can be quicker and more natural* to use for certain use-cases, like if you have a large number of VMs and want to be able to switch back and forth without modifying the VM (subjective, depends on your competence with either method)
I can’t define “more natural”, so don’t ask
I liked not having to install a package or any files in the VMs, just clicky clicky and now it boots grsecurity
It also helped with my irrational fear of installing/reinstalling grub
Hey y'all! Maybe I'm just craving a cigarette, but I think it would be cool to have a more... stylish/sleek Qubes Manager... kind of like the sidebar on Ubuntu... or maybe across the bottom like Debian. Just a thought. Thoughts/Concerns/Complaints?
Another: currently, if a template is updated, the menu showing running qubes (the one most people use to shut them down or restart them) does not show that a disposable based on an AppVM, which is in turn based on the template, should be restarted, but it does when it's an AppVM based directly on the template. The Qubes Manager GUI will show this, but apparently being at two removes from the TemplateVM is too much for the menu.
I was just working on the Qube Manager code right now (such a coincidence). Actually, it appears to be intentional, and it kind of makes sense: even the restart icon is disabled for disposables, the reason being that if you restart a disposable, you naturally lose everything inside it. So they decided not to show the "The qube must be restarted..." icon for disposables.
I can try, but yes, will need to accept your help with cleaning it up and filling in steps.
For example, I didn’t document how to “patch kernel source, configure and build kernel deb files in Debian VM”, because it’s something I’m already familiar with
I'm a little hesitant, though, because I didn't do it the "right" way, and it's possible to break your system if you're not careful. One side effect of the way I did it is that the kernel ends up in the system grub as the default kernel, by default. So you need to be careful to adjust grub and/or pay attention after a reboot.
Basically, I did something like this - again, not recommending anyone do it this way:
Patched, configured and built a kernel in a Debian VM, into .deb packages
Manually extracted the .debs
Copied the extracted files (modules and kernel, plus config and System.map for good measure) into dom0
Everything except the modules goes into /boot
The modules go (IIRC) in the usual installed-module location under /lib/modules; use the existing structure as a reference
Used dracut to generate an initramfs for the new kernel in dom0; it will be in /boot with the other dom0 kernels
Maybe copied the initramfs to a special Qubes path, need to check
Used the qubes-prepare-vm-kernel command, which builds a blob with the initramfs, modules and image in it
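The steps above can be sketched roughly as the following dom0 commands. This is a reconstruction under stated assumptions, not a tested recipe: the version string is an example, the `dpkg-deb`/`dracut` invocations are commented out so nothing destructive runs, and the exact arguments of `qubes-prepare-vm-kernel` should be checked against its own help before use.

```shell
#!/bin/sh
# Rough replay of the steps above (sketch only; commands left as comments).

KVER=6.1.0-custom   # example version string of the custom-built kernel

# In the Debian VM: build the .debs, copy them to dom0 (e.g. via qvm-run
# --pass-io), then unpack instead of installing:
#   dpkg-deb -x linux-image-${KVER}_*.deb extracted/

# Everything except modules -> /boot; modules -> /lib/modules/$KVER:
#   cp extracted/boot/vmlinuz-$KVER extracted/boot/config-$KVER /boot/
#   cp -r extracted/lib/modules/$KVER /lib/modules/

# Generate the initramfs next to the other dom0 kernels:
#   dracut /boot/initramfs-$KVER.img $KVER

# Let Qubes bundle image + initramfs + modules into a VM kernel blob:
#   qubes-prepare-vm-kernel $KVER

echo "/boot/initramfs-$KVER.img"   # where dracut would place the initramfs
```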
There’s some documentation here that talks about a more correct way to do it, but it seemed to me more geared towards using kernels based on Fedora patched kernels rather than those from upstream.
Those docs also used DKMS, which is black magic to me, so I stuck to my clunky old dinosaur method, knowing that I could build modules, the image and initramfs on my own, put them in the right place, and let qubes-prepare-vm-kernel do the rest
I used this as a quick reference, to see how the existing ones were set up:
$ find /var/lib/qubes/vm-kernels -ls
Are you sure you would like to document this method?
No, I'm asking you to document it as a community guide if you think it's worth it for others. Just add a warning that it's risky for the Qubes OS installation; it's up to the reader's judgment to use it or not.
I don't have a need to inject a custom kernel myself, but I think it can be useful, as it's really not easy to figure out how it works and you already went through this.
There are two changes that I would like to see with Qubes OS:
OpenBSD used at critical points in the system, particularly things like the network and the firewall. I understand that this can be done manually, but for people like me it would have to come shrink-wrapped, so that it was automatically set up during the install process.
When Qubes OS is installed, that the user is provided with three threat levels. Highest threat level means that the default install locks down the system really tightly. Lowest threat level (where I am) means that by default USB Wi-Fi is enabled. So far, it’s the only version of Linux where you have to manually start the USB Wi-Fi, and restart it following an update.