Context
I wanted to learn Proxmox, and since I like to do everything with QubesOS, the purpose of this post is to get Proxmox running VMs inside a QubesOS qube.
I couldn't find much information on the Internet about virtualization inside a VM of QubesOS: most of it is outdated and no longer works, and information about Proxmox in QubesOS is almost nonexistent. So in this post you will see every step I did and tried, what worked for me so far and what didn't, so you can do it too.
(Yes, doing this is not stable, and it compromises security.)
This post is about:
- Create a new Qubes/VM for Proxmox
- Enable virtualization for the Proxmox Qubes/VM in QubesOS/Xen
- Install Proxmox on that Qubes/VM
- Access the web interface of Proxmox from another VM/Qubes with a browser
What I can't make work for now:
- Run a VM in Proxmox that does not crash
Versions as of now (25-03-2022)
Qubes release 4.1.2 (R4.1)
Xen: 4.14.5
Kernel: 5.15.94-1
Proxmox: 7.4-1
How does everything work?
QubesOS is based on Xen.
Xen allows enabling virtualization inside a VM/Qubes.
To be able to use virtualization in a VM/Qubes, everything revolves around libvirt and its config files.
In order of increasing precedence: the main template, from which the config is generated, is /usr/share/qubes/templates/libvirt/xen.xml. The distributor may put a file at /usr/share/qubes/templates/libvirt/xen-dist.xml to override this file. The user may put a file at either /etc/qubes/templates/libvirt/xen-user.xml or /etc/qubes/templates/libvirt/xen/by-name/<name>.xml, where <name> is the full name of the domain. Wildcards are not supported but symlinks are.
Jinja has a concept of template names, which basically is the path below some load point, which in Qubes' case is /etc/qubes/templates and /usr/share/qubes/templates. Thus the names of those templates are respectively 'libvirt/xen.xml', 'libvirt/xen-dist.xml', 'libvirt/xen-user.xml' and 'libvirt/xen/by-name/<name>.xml'. This will be important later.
From here
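As a quick sanity check of that precedence chain, the sketch below lists the four template locations in increasing precedence and reports which ones exist. It is meant to be run in dom0; the qube name ProxmoxOnQubesOS stands in for `<name>` and is only an example.

```shell
# List the libvirt Jinja template locations in increasing precedence and
# report which ones exist on this system. Run in dom0; the name is illustrative.
name="ProxmoxOnQubesOS"
files="/usr/share/qubes/templates/libvirt/xen.xml
/usr/share/qubes/templates/libvirt/xen-dist.xml
/etc/qubes/templates/libvirt/xen-user.xml
/etc/qubes/templates/libvirt/xen/by-name/${name}.xml"
report=""
for f in $files; do
  if [ -e "$f" ]; then
    report="${report}present: $f\n"
  else
    report="${report}absent:  $f\n"
  fi
done
printf "%b" "$report"
```

On a stock Qubes system, only the first file exists until you create the user overrides below.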
Create a new Qubes/VM for Proxmox
To install Proxmox, you need to create a new template.
The settings of the template should be the following:
Kernel: (provided by qube)
Mode: HVM
Disable 'Include in memory balancing'
And add more storage, RAM and VCPUs, as Proxmox will not work otherwise.
And let's name this template/VM/Qubes ProxmoxOnQubesOS.
Enable virtualization for Proxmox Qubes/VM in QubesOS/Xen
The libvirt config files are in dom0, so open the dom0 terminal.
First, make a backup of /usr/share/qubes/templates/libvirt/xen.xml with:
cp /usr/share/qubes/templates/libvirt/xen.xml /usr/share/qubes/templates/libvirt/xen.xml.save
By default, the relevant part of the file /usr/share/qubes/templates/libvirt/xen.xml
will be the following:
...
<cpu mode='host-passthrough'>
<!-- disable nested HVM -->
<feature name='vmx' policy='disable'/>
<feature name='svm' policy='disable'/>
{% if vm.app.host.cpu_family_model in [(6, 58), (6, 62)] -%}
<feature name='rdrand' policy='disable'/>
{% endif -%}
<!-- let the guest know the TSC is safe to use (no migration) -->
<feature name='invtsc' policy='require'/>
</cpu>
...
You need to comment out the vmx and svm lines; afterwards it will look like this:
...
<cpu mode='host-passthrough'>
<!-- disable nested HVM -->
<!-- <feature name='vmx' policy='disable'/>
<feature name='svm' policy='disable'/>-->
{% if vm.app.host.cpu_family_model in [(6, 58), (6, 62)] -%}
<feature name='rdrand' policy='disable'/>
{% endif -%}
<!-- let the guest know the TSC is safe to use (no migration) -->
<feature name='invtsc' policy='require'/>
</cpu>
...
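If you prefer to script that edit, here is a hedged sed sketch. It is demonstrated on a throwaway copy so you can see the effect; in dom0 the target would be /usr/share/qubes/templates/libvirt/xen.xml (after the backup above). It comments each line individually, which is equivalent to the block comment shown above.

```shell
# Demonstrated on a temporary file; in dom0, point sed at
# /usr/share/qubes/templates/libvirt/xen.xml instead (back it up first).
f=$(mktemp)
cat > "$f" <<'EOF'
            <feature name='vmx' policy='disable'/>
            <feature name='svm' policy='disable'/>
EOF
# Wrap each matching feature line in an XML comment ('&' is the whole match).
sed -i \
  -e "s|<feature name='vmx' policy='disable'/>|<!-- & -->|" \
  -e "s|<feature name='svm' policy='disable'/>|<!-- & -->|" \
  "$f"
cat "$f"
```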
By default, the directory is not created ('by-name' is NOT a variable, do NOT change it):
mkdir -p /etc/qubes/templates/libvirt/xen/by-name/
Create the following file :
touch /etc/qubes/templates/libvirt/xen/by-name/NameVM.xml
If the name of your template/VM/Qubes on QubesOS that will run Proxmox is ProxmoxOnQubesOS, then:
touch /etc/qubes/templates/libvirt/xen/by-name/ProxmoxOnQubesOS.xml
Edit the file and give it the following content, for example with:
nano /etc/qubes/templates/libvirt/xen/by-name/ProxmoxOnQubesOS.xml
{% extends 'libvirt/xen.xml' %}
{% block cpu %}
<cpu mode='host-passthrough'>
<feature name='vmx' policy='require'/>
<feature name='svm' policy='require'/>
<!-- disable SMAP inside VM, because of Linux bug -->
<feature name='smap' policy='disable'/>
</cpu>
{% endblock %}
{% block features %}
<pae/>
<acpi/>
<apic/>
<viridian/>
<hap/> <!-- enable Hardware Assisted Paging -->
<!-- <nestedvm/> -->
{% endblock %}
How to check if it worked? You can see the result with:
virsh dumpxml ProxmoxOnQubesOS
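If it worked, the `<cpu>` section of that dump should now require vmx and svm. A small sketch of the check follows; the grep pattern assumes the attribute order that `virsh dumpxml` prints (policy first, as in the reference dump below), and the stand-in input line is only there so the filter can be demonstrated outside dom0.

```shell
# In dom0 you would run:  virsh dumpxml ProxmoxOnQubesOS | filter
filter() { grep -E "policy='require' name='(vmx|svm)'"; }
# Stand-in input line for demonstration purposes:
printf "<feature policy='require' name='vmx'/>\n" | filter && echo "nested virt requested"
```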
For reference, you can see mine if necessary:
<domain type='xen' id='39'>
<name>ProxmoxOnQubesOS</name>
<uuid>.........</uuid>
<memory unit='KiB'>10240000</memory>
<currentMemory unit='KiB'>10240000</currentMemory>
<vcpu placement='static'>10</vcpu>
<os>
<type arch='x86_64' machine='xenfv'>hvm</type>
<loader type='rom'>hvmloader</loader>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
<hap state='on'/>
<viridian/>
</features>
<cpu mode='host-passthrough'>
<feature policy='require' name='vmx'/>
<feature policy='require' name='svm'/>
<feature policy='disable' name='smap'/>
</cpu>
<clock offset='variable' adjustment='0' basis='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>destroy</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator type='stubdom-linux' cmdline='-qubes-net:client_ip=10.XXX.XX.XX,dns_0=10.XXX.XX.XX,dns_1=10.XXX.XX.XX,gw=10.XXX.XX.XX,netmask=255.255.255.255'/>
<disk type='block' device='disk'>
<driver name='phy' type='raw'/>
<source dev='/dev/qubes_dom0/vm-proxmoxo-root-snap'/>
<target dev='xvda' bus='xen'/>
</disk>
<disk type='block' device='disk'>
<driver name='phy' type='raw'/>
<source dev='/dev/qubes_dom0/vm-proxmoxo-private-snap'/>
<target dev='xvdb' bus='xen'/>
</disk>
<disk type='block' device='disk'>
<driver name='phy' type='raw'/>
<source dev='/dev/qubes_dom0/vm-proxmoxo-volatile'/>
<target dev='xvdc' bus='xen'/>
</disk>
<controller type='xenbus' index='0'/>
<interface type='ethernet'>
<mac address='00:16:3e:5e:6c:00'/>
<ip address='10.XXX.XX.XX' family='ipv4'/>
<script path='vif-route-qubes'/>
<backenddomain name='sys-firewall'/>
<target dev='vif39.0-emu'/>
</interface>
<console type='pty' tty='/dev/pts/13'>
<source path='/dev/pts/13'/>
<target type='xen' port='0'/>
</console>
<input type='tablet' bus='usb'/>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='qubes' log_level='0'/>
<video>
<model type='vga' vram='16384' heads='1' primary='yes'/>
</video>
<memballoon model='xen'/>
</devices>
</domain>
What are vmx and svm?
svm: AMD-V (AMD virtualization) support flag
vmx: Intel VT-x (Intel virtualization) support flag
Enable or require?
I saw in some posts that they used enable instead of require, but using enable did not work for me:
<feature policy='enable' name='vmx'/>
<feature policy='enable' name='svm'/>
To copy/paste this file to dom0, do the following in the dom0 terminal (more here):
qvm-run --pass-io VMwithInternet 'cat /home/user/ProxmoxOnQubesOS.xml' > /etc/qubes/templates/libvirt/xen/by-name/ProxmoxOnQubesOS.xml
What I saw on the Internet but didn't work for me with libvirt
I saw different ways to do it on the Internet; one is to edit the following file directly to change the values:
nano /etc/libvirt/libxl/ProxmoxOnQubesOS.xml
But you will see the following warning at the top of the file:
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh edit ProxmoxOnQubesOS
or other application using the libvirt API.
-->
So it's better to use the following command:
virsh edit ProxmoxOnQubesOS
But using this method will either:
- show this error:
error: XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Error validating value
Extra element os in interleave
Element domain failed to validate content
Failed. Try again? [y,n,i,f,?]:
- Or show no error and apply the modification to the file, but as soon as I start ProxmoxOnQubesOS, the modifications are not there anymore,
as I can see with virsh dumpxml ProxmoxOnQubesOS
or virsh edit ProxmoxOnQubesOS
Installing Proxmox
Download the latest ISO of Proxmox.
In the settings of your ProxmoxOnQubesOS VM, click on 'Boot from CD-ROM' and choose the ISO.
When ProxmoxOnQubesOS starts, you should NOT see this:
No Support for KVM virtualization detected.
Check BIOS settings for Intel VT / AMD-V / SVM.
If you do see it, then something is wrong with libvirt and its config file.
If you don't, you can continue the installation. It will ask several things, including some network information (IP address, netmask, ...).
You have to enter the values that were generated by QubesOS; you can find them in the settings of your qube.
Once it’s installed, you can start it.
Network configuration
You need another qube in which you will access the Proxmox web interface with a browser. That other qube must be connected to the same net qube as the Proxmox one, for example sys-firewall here.
If it's sys-firewall, then in a sys-firewall terminal:
sudo iptables -I FORWARD 2 -s <IP address of A> -d <IP address of B> -j ACCEPT
where A is the browser qube and B is ProxmoxOnQubesOS, i.e.:
sudo iptables -I FORWARD 2 -s IP_VM_browser -d IP_of_ProxmoxOnQubesOS -j ACCEPT
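Note that a rule added this way disappears when sys-firewall restarts. One way to persist it, as far as I know, is Qubes' /rw/config/qubes-firewall-user-script hook in sys-firewall (append the rule there and make the script executable). The sketch below only builds and prints the rule; the IPs are placeholders, not real addresses from my setup.

```shell
# Placeholder IPs; in Qubes both qubes get 10.137.x.x addresses assigned by Qubes.
SRC="10.137.0.21"   # placeholder: IP of the qube running the browser
DST="10.137.0.20"   # placeholder: IP of ProxmoxOnQubesOS
# The rule to append to /rw/config/qubes-firewall-user-script in sys-firewall
# (then: chmod +x /rw/config/qubes-firewall-user-script).
RULE="iptables -I FORWARD 2 -s $SRC -d $DST -j ACCEPT"
echo "$RULE"
```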
More here
Once it's done, open the browser and go to your Proxmox web access:
https://ipOfYourProxmoxOnQubesOS:8006/
Check if virtualization is enabled in Proxmox
Method 1: in the ProxmoxOnQubesOS terminal you can do (more here):
lscpu | grep Virtualization
grep -E "svm|vmx" /proc/cpuinfo
Method 2: or you can do this (more here):
-- for Intel --
$ cat /proc/cpuinfo | grep -c vmx
64
-- or for AMD --
$ cat /proc/cpuinfo | grep -c svm
16
Method 3 : Or you can do that (from here) :
sudo apt update
sudo apt-get install cpu-checker
sudo kvm-ok
You should see:
INFO: /dev/kvm exists
KVM acceleration can be used
Method 4: saw it in this post
In the Proxmox terminal I checked and nested was already on for me:
cat /sys/module/kvm_intel/parameters/nested
Y
Method 5 : Or you can do that (from here)
sudo apt update
sudo apt install libvirt-clients
virt-host-validate
But I get the following warnings and failure:
QEMU: Checking for device assignment IOMMU : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
QEMU: Checking for secure guest : WARN (Unknown if this platform has Secure Guest support)
LXC: Checking for cgroup 'freezer' controller : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
Full output
root@ProxmoxOnQubesOS:~# virt-host-validate
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'cpu' controller support : PASS
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuset' controller support : PASS
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'devices' controller support : PASS
LXC: Checking for cgroup 'freezer' controller support : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'blkio' controller support : PASS
LXC: Checking if device /sys/fs/fuse/connections exists : PASS
Method 6 (from here) :
modinfo kvm_intel | grep -i nested
you should have :
parm: nested_early_check:bool
parm: nested:bool
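For completeness: if Method 4 had returned N instead of Y, the usual way to turn nesting on, as far as I know, is a modprobe option. This is a sketch; the file name is illustrative:

```
# contents for /etc/modprobe.d/kvm-nested.conf (Intel; illustrative file name)
options kvm-intel nested=Y
```

Then reload the module (modprobe -r kvm_intel && modprobe kvm_intel) or reboot. On AMD the module is kvm_amd and the option is `options kvm-amd nested=1`.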
Links that can help?
Proxmox VE inside VirtualBox - Proxmox VE
Proxmox Nested Virtualization Tutorial- Harvester/ESXI · GitHub
Create and use a VM in Proxmox (CAN'T MAKE IT WORK FOR NOW, NEED YOUR HELP)
As you can see, I created two VMs in Proxmox: one for Debian 11 and another one for Fedora 37.
Both of them start, and I get the install screen of the Debian 11 and Fedora 37 ISOs.
But if I press Enter in GRUB to install Debian 11 or Fedora 37, the ProxmoxOnQubesOS qube itself, not just the Debian or Fedora VM, stops suddenly several seconds later.
I even tried to run Proxmox 7.4-1 inside ProxmoxOnQubesOS, but got the same crash.
I created the Debian and Fedora VMs with the following settings, and tried different ones, but nothing has worked so far.
For the memory settings you need to click on Advanced and disable 'Ballooning Device'.
ISOs I tried so far:
Debian 11
Debian 11 minimal
Fedora 34
Proxmox 7.4-1
Ubuntu 22.10 (desktop-amd64)
Ubuntu 22.10 (core-amd64) (will try)
Alpine (virt-3.17.2-x86)
Alpine (standard-3.17.2-x86)
(Alpine was tried and worked, with both CPU type host and kvm64)
Could it be because of the GRUB settings of Debian 11 and Fedora 34?
Could it be because of the VM hardware configuration in Proxmox for Debian 11 and Fedora 34?
If you have ideas or information to share, please do.
Logs and information about the crash so far
(But the logs don't help at all to understand what happens.)
The CPU usage doesn't go up; it stays around 0 to 5%.
On dom0, in:
cat /var/log/xen/console/guest-proxmox-dm.log
nothing appears.
/var/log/qubes/qrexec.proxmox.log
domain dead
2023-03-17 11:43:02.855 qrexec-daemon[57789]: qrexec-daemon.c:340:init: cannot connect to qrexec agent: No such file or directory
/var/log/qubes/guid.proxmox.log
Icon size: 128x128
domain dead
Failed to connect to gui-agent
On ProxmoxOnQubesOs
Mar 17 11:42:02 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/102: mmaping file '/var/lib/rrdcached/db/pve2-vm/102': Invalid argument
Mar 17 11:42:12 ProxmoxOnQubesOS rrdcached[819]: handle_request_update: Could not read RRD file.
Mar 17 11:42:12 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Mar 17 11:42:12 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/102: mmaping file '/var/lib/rrdcached/db/pve2-vm/102': Invalid argument
Mar 17 11:42:22 ProxmoxOnQubesOS rrdcached[819]: handle_request_update: Could not read RRD file.
Mar 17 11:42:22 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Mar 17 11:42:22 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/102: mmaping file '/var/lib/rrdcached/db/pve2-vm/102': Invalid argument
Mar 17 11:42:32 ProxmoxOnQubesOS rrdcached[819]: handle_request_update: Could not read RRD file.
Mar 17 11:42:32 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Mar 17 11:42:32 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/102: mmaping file '/var/lib/rrdcached/db/pve2-vm/102': Invalid argument
Mar 17 11:42:42 ProxmoxOnQubesOS rrdcached[819]: handle_request_update: Could not read RRD file.
Mar 17 11:42:42 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Mar 17 11:42:42 ProxmoxOnQubesOS pmxcfs[838]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/102: mmaping file '/var/lib/rrdcached/db/pve2-vm/102': Invalid argument
Mar 17 11:42:52 ProxmoxOnQubesOS rrdcached[819]: handle_request_update: Could not read RRD file.
-- Reboot --