[solved] VMDK on-the-fly as root for an HVM - no bootable device

For my offensive security certified procrastinator exam I had to install Ubuntu with VMware on my machine. They don’t even let you use KVM/qemu/libvirt, and in my view the whole thing is just an attempt to scrape outrageous amounts of money from their students (try harder, not smarter).

Anyway, wouldn’t it be nice to start your Kali Linux VM from Qubes? So I thought, and wrote these two nifty little scripts to create a block device to boot from.

[user@disp678 ~]$ cat mount.sh
#! /usr/bin/bash

file="$1"   # path to the .vmdk image, passed as first argument

sudo mkdir /mnt/cryptroot \
&& sudo cryptsetup open /dev/xvdi cryptroot \
&& sudo mount /dev/mapper/cryptroot /mnt/cryptroot \
&& sudo modprobe nbd max_part=8 \
&& sudo qemu-nbd -c /dev/nbd0 "$file" \
&& sudo kpartx -av /dev/nbd0

[user@disp678 ~]$ cat umount.sh 
#! /usr/bin/bash

sudo kpartx -d /dev/nbd0 \
&& sudo qemu-nbd -d /dev/nbd0 \
&& sudo umount /mnt/cryptroot \
&& sudo cryptsetup close cryptroot \
&& sudo rmdir /mnt/cryptroot \
&& sleep 3 \
&& sudo modprobe -r nbd

So, I attached the Ubuntu cryptroot to the dispVM as /dev/xvdi, ran mount.sh, and fired up my offsec HVM from dom0 with

qvm-start offsec --drive=hd:disp678:/dev/nbd0p1

and SeaBIOS said no: no bootable device. I also tried

qvm-start offsec --drive=hd:disp678:/dev/mapper/nbd0p1

with the same result:

Booting from Hard Disk...
Boot failed: not a bootable disk
No bootable device.

In the dispVM I can successfully sudo mount (-r) /dev/mapper/nbd0p1 /mnt, sudo ls -al /mnt/boot (for example), and sudo umount /mnt.

Is there an emergency console in SeaBIOS?
If so, how can it be invoked?
Does anyone have an idea how to debug this further?

This “no bootable device” error has been reported before and was worked around by giving the VM a .raw image of exactly 10 GiB. However, block devices of any size should work, or else the underlying problem needs to be fixed.
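For reference, an exactly sized raw image can be created with truncate; the filename here is just an example, and 10 GiB is 10 * 1024^3 = 10737418240 bytes:

```shell
# create a sparse raw image of exactly 10 GiB (no disk space is
# actually allocated until blocks are written)
truncate -s 10737418240 root.raw

# verify the apparent size in bytes
stat -c %s root.raw
```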

This is rather unpleasant and I hope someone with more profound knowledge of QubesOS’ internals could help debug the problem.


So I took a look at:

and at the end of this python2 script there is a hint to read


which I did. In dom0 I also inspected the python3 variant actually used in Qubes OS 4.1 with

less /usr/bin/qvm-start
less /usr/lib/python3.8/site-packages/qubesadmin/tools/qvm_start.py

which hints that one can start the HVM like this:

qvm-start offsec --drive=hd:disp678:nbd0p1
qvm-start offsec --drive=hd:disp678:loop0
qvm-start offsec --drive=hd:disp678:dm-1

The first two throw errors in the journal (AttributeError: 'UnknownDevice' object has no attribute 'mode'); the last one gives me a

Booting from Hard Disk...
Boot failed: not a bootable disk
No bootable device.


I’m still suspecting Qubes internals of messing up here, i.e. the

<emulator type='stubdom-linux' [and stuff]

in /etc/libvirt/libxl/offsec.xml is some kind of special qemu emulator.

Did you try --hddisk instead of --drive?

Yes, with the same outcome. Judging by the python3 source code (/usr/lib/python3.8/site-packages/qubesadmin/tools/qvm_start.py), that option is equivalent. The

Booting from Hard Disk...
Boot failed: not a bootable disk
No bootable device.

error is a known issue and has been addressed in this forum a couple of times. Obviously, if you copy or move an image to VM:root, the two have to have exactly the same size. However, it is pretty counterintuitive that one is limited to 10 GiB (10737418240 bytes). And if you move or copy something in bash, the target automatically ends up the same size as its source.

Now, if you start the VM from a different device, the size of the root-volume shouldn’t matter at all.

I suspect the --hddisk / --drive=hd: option is broken. I’m going to dive deeper into the code base to figure it out.

Took a look at /etc/libvirt/libxl/offsec.xml

    <disk type="block" device="disk">
      <driver name="phy" type="raw"/>
      <source dev="/dev/dm-1"/>
      <backenddomain name="disp678"/>
      <target dev="xvdi" bus="xen"/>

That looks good.

    <type arch="x86_64" machine="xenfv">hvm</type>
    <loader type="rom">hvmloader</loader>
    <boot dev="cdrom"/>
    <boot dev="hd"/>

That looks less good. There is no hint that the machine is supposed to boot from the fourth hard disk. hvmloader (Xen) sounds like it basically hands the emulated hardware, and hence the block devices, over to SeaBIOS.

At the moment I assume one can attach a hard disk to the HVM but only boot from an attached CD/DVD (a block device or an ISO set up as a loop device).
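Under that assumption, pointing the HVM at an attached ISO would look something like this (a sketch: the cdrom prefix by analogy with the hd: syntax above, with mounter and loop0 as set up later in this thread):

```shell
qvm-start offsec --drive=cdrom:mounter:loop0
```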

Though it might be possible to sneak a

<bootmenu enable="yes"/>

in there.

This thread belongs in a category yet to be established:

Hang on. Let me overthink this.

I started a couple of my HVMs from an .iso that uses an isolinux bootloader. That bootloader offers a

Hardware Information (HDT)

option. And it looks like all working HVMs have Grub2 in /dev/xvda’s master boot record, but the offsec VM does not. Why does VMware manage to start my Kali box, then? I assume the structure of the vmdk looks like this:

   nbd0            <--- MBR with Grub2
   └─nbd0p1        <--- filesystem (no MBR)

As far as I know it is not possible to add an MBR to a partition.
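One way to check that hunch: a bootable MBR sector carries the two-byte signature 0x55 0xAA at offset 510. The sketch below builds a tiny demo image with that signature; against the real setup you would point dd at /dev/nbd0 (whole disk, signature present) versus /dev/nbd0p1 (partition, usually absent):

```shell
# build a 512-byte demo "MBR" and stamp the boot signature at offset 510
dd if=/dev/zero of=demo.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=demo.img bs=1 seek=510 conv=notrunc 2>/dev/null

# read back the last two bytes of the sector and compare
sig=$(dd if=demo.img bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
if [ "$sig" = "55aa" ]; then
    echo "boot signature present"
else
    echo "no boot signature"
fi
```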

It works… create a mounter VM (an appVM based on the fedora template) and the aforementioned offsec HVM. For the HVM, make sure to allow it 4 GB of RAM.

Use the mount and umount scripts from the first post, but without

sudo kpartx -av /dev/nbd0 and
sudo kpartx -d /dev/nbd0

After running the offsec HVM once, make its tweaked qemu settings persistent:

[user@dom0 ~]$ sudo su -l
[root@dom0 ~]# cp /etc/libvirt/libxl/offsec.xml /etc/qubes/templates/libvirt/xen/by-name/
[root@dom0 ~]# cd /etc/qubes/templates/libvirt/xen/by-name/
[root@dom0 ~]# nano offsec.xml

Tweak the offsec.xml at the appropriate section like this:

    <disk type='block' device='disk'>
      <driver name='phy' type='raw'/>
      <source dev='/dev/nbd0'/>
      <backenddomain name='mounter'/>
      <target dev='xvda' bus='xen'/>
    <!-- remove the xvdb and xvdc disk entries here -->
    <disk type='block' device='cdrom'>
      <driver name='phy' type='raw'/>
      <source dev='/dev/loop0'/>
      <backenddomain name='mounter'/>
      <target dev='xvdd' bus='xen'/>

In case your .vmdk is convoluted (.vmdks are a kind of sparse image), starting the offsec VM can take a long time. That is why I initially thought the HVM was crashing, and why I thought I had to mount the partitions.
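To get an idea of how convoluted a given .vmdk is before attaching it, qemu-img can report its format, virtual size, and actually allocated size (filename hypothetical; run wherever qemu-img is installed):

```shell
# "virtual size" is what the guest sees; "disk size" is what is
# actually allocated in the sparse image
qemu-img info kali.vmdk
```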

A word of warning: make a backup of your .vmdk, and in the beginning you might want to attach the image read-only by adding the -r option:

&& sudo qemu-nbd -r -c /dev/nbd0 $file \

For me the HVM has been running without damaging the .vmdk so far. And it runs a lot faster under Xen/qemu than under VMware.

For troubleshooting you can set up an ISO as a loop device like this:

[user@mounter ~]$ sudo losetup -r /dev/loop0 gparted.iso

When it is not needed anymore, the loop device can be detached like this (note that -D detaches all loop devices; losetup -d /dev/loop0 detaches just this one):

[user@mounter ~]$ sudo losetup -D