Windows 10 libvirt config syntax help to hide hypervisor info

Hi everyone,

I’m trying to install SolidWorks on a Windows 10 VM. This particular version, however, does not support installation in various virtual environments, including Qubes’s Xen hypervisor. Apparently this is not a unique problem, judging by this post on Server Fault from a few years ago,

which includes the exact same error message I received on my VM. Being aware that Qubes supports custom libvirt configs for VMs,

https://dev.qubes-os.org/projects/core-admin/en/stable/libvirt.html

I modified the selected answer accordingly. I was not able to follow the template-inheritance method from the docs, since according to the SF answer the CPU block needs to be omitted, so I instead copied the whole template config file to the directory reserved for per-VM configs. In dom0:

sudo cp /usr/share/qubes/templates/libvirt/xen.xml /etc/qubes/templates/libvirt/xen/by-name

and then renamed it after my win10 VM. This is the /etc/qubes/templates/libvirt/xen/by-name/win10.xml file I ended up with:

<domain type="xen" xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    {% block basic %}
        <name>{{ vm.name }}</name>
        <uuid>{{ vm.uuid }}</uuid>
        {% if ((vm.virt_mode == 'hvm' and vm.devices['pci'].persistent() | list)
            or vm.maxmem == 0) -%}
            <memory unit="MiB">{{ vm.memory }}</memory>
        {% else -%}
            <memory unit="MiB">{{ vm.maxmem }}</memory>
        {% endif -%}
        <currentMemory unit="MiB">{{ vm.memory }}</currentMemory>
        <vcpu placement="static">{{ vm.vcpus }}</vcpu>
    {% endblock %}
    {# cpu block definition omitted #}
    <os>
        {% block os %}
            {% if vm.virt_mode == 'hvm' %}
                <type arch="x86_64" machine="xenfv">hvm</type>
                <!--
                     For the libxl backend libvirt switches between OVMF (UEFI)
                     and SeaBIOS based on the loader type. This has nothing to
                     do with the hvmloader binary.
                -->
                <loader type="{{ "pflash" if vm.features.check_with_template('uefi', False) else "rom" }}">hvmloader</loader>
                <boot dev="cdrom" />
                <boot dev="hd" />
            {% else %}
                {% if vm.virt_mode == 'pvh' %}
                    <type arch="x86_64" machine="xenfv">pvh</type>
                {% else %}
                    <type arch="x86_64" machine="xenpv">linux</type>
                {% endif %}
                <kernel>{{ vm.storage.kernels_dir }}/vmlinuz</kernel>
                <initrd>{{ vm.storage.kernels_dir }}/initramfs</initrd>
            {% endif %}
            {% if vm.kernel %}
                {% if vm.features.check_with_template('no-default-kernelopts', False) -%}
                <cmdline>{{ vm.kernelopts }}</cmdline>
                {% else -%}
                <cmdline>{{ vm.kernelopts_common }}{{ vm.kernelopts }}</cmdline>
                {% endif -%}
            {% endif %}
        {% endblock %}
    </os>

    <features>
        {% block features %}
            {% if vm.virt_mode != 'pv' %}
                <pae/>
                <acpi/>
                <apic/>
                <viridian/>
            {% endif %}

            {% if vm.devices['pci'].persistent() | list
                    and vm.features.get('pci-e820-host', True) %}
                <xen>
                    <e820_host state="on"/>
                </xen>
            {% endif %}
        {% endblock %}
    </features>

    {% block clock %}
        {% if vm.virt_mode == 'hvm' %}
            {% set timezone = vm.features.check_with_template('timezone', 'localtime').lower() %}
            {% if timezone == 'localtime' %}
                <clock offset="variable" adjustment="0" basis="localtime" />
            {% elif timezone.isdigit() %}
                <clock offset="variable" adjustment="{{ timezone }}" basis="UTC" />
            {% else %}
                <clock offset="variable" adjustment="0" basis="UTC" />
            {% endif %}
        {% else %}
            <clock offset='utc' adjustment='reset'>
                <timer name="tsc" mode="native"/>
            </clock>
        {% endif %}
    {% endblock %}

    {% block on %}
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>destroy</on_reboot>
        <on_crash>destroy</on_crash>
    {% endblock %}

    <devices>
        {% block devices %}
            {#
                HACK: The letter counter is implemented in this way because
                Jinja does not allow you to increment variables in a loop
                anymore. As of Jinja 2.10, we will be able to replace this
                with:
                {% set counter = namespace(i=0) %}
                {% set counter.i = counter.i + 1 %}
            #}
            {% set counter = {'i': 0} %}
            {# TODO Allow more volumes out of the box #}
            {% set dd = ['e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',
                'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y']
            %}
            {% for device in vm.block_devices %}
                <disk type="block" device="{{ device.devtype }}">
                    <driver name="phy" />
                    <source dev="{{ device.path }}" />
                    {% if device.name == 'root' %}
                        <target dev="xvda" />
                    {% elif device.name == 'private' %}
                        <target dev="xvdb" />
                    {% elif device.name == 'volatile' %}
                        <target dev="xvdc" />
                    {% elif device.name == 'kernel' %}
                        <target dev="xvdd" />
                    {% else %}
                        <target dev="xvd{{dd[counter.i]}}" />
                        {% if counter.update({'i': counter.i + 1}) %}{% endif %}
                    {% endif %}

                    {% if not device.rw %}
                        <readonly />
                    {% endif %}

                    {% if device.domain %}
                        <backenddomain name="{{ device.domain }}" />
                    {% endif %}

                    {% if device.script %}
                        <script path="{{ device.script }}" />
                    {% endif %}
                </disk>
            {% endfor %}

            {# start external devices from xvdi #}
            {% set counter = {'i': 4} %}
            {% for assignment in vm.devices.block.assignments(True) %}
                {% set device = assignment.device %}
                {% set options = assignment.options %}
                {% include 'libvirt/devices/block.xml' %}
            {% endfor %}

            {% if vm.netvm %}
                {% include 'libvirt/devices/net.xml' with context %}
            {% endif %}

            {% for assignment in vm.devices.pci.assignments(True) %}
                {% set device = assignment.device %}
                {% set options = assignment.options %}
                {% include 'libvirt/devices/pci.xml' %}
            {% endfor %}

            {% if vm.virt_mode == 'hvm' %}
                <!-- server_ip is the address of the stubdomain. It hosts its own DNS server. -->
                <emulator
                    {% if vm.features.check_with_template('linux-stubdom', True) %}
                        type="stubdom-linux"
                    {% else %}
                        type="stubdom"
                    {% endif %}
                    {% if vm.netvm and not
                        vm.features.check_with_template('linux-stubdom', True) %}
                        cmdline="-net lwip,client_ip={{ vm.ip -}}
                            ,server_ip={{ vm.dns[1] -}}
                            ,dns={{ vm.dns[0] -}}
                            ,gw={{ vm.netvm.gateway -}}
                            ,netmask={{ vm.netmask }}"
                    {% endif %}
                    {% if vm.stubdom_mem %}
                        memory="{{ vm.stubdom_mem * 1024 -}}"
                    {% endif %}
                    />
                <input type="tablet" bus="usb"/>
                <video>
                    <model type="{{ vm.features.check_with_template('video-model', 'vga') }}"/>
                </video>
                {% if vm.features.check_with_template('linux-stubdom', True) %}
                    {# TODO only add qubes gui if gui-agent is not installed in HVM #}
                    <graphics type="qubes"/>
                {% endif %}
            {% endif %}
                <console type="pty">
                    <target type="xen" port="0"/>
                </console>
        {% endblock %}
    </devices>
    {# added the following according to the answer #}
    <qemu:commandline>
        <qemu:arg value='-cpu'/>
        {# the Xen analogue of kvm=off? #}
        <qemu:arg value='host,xen=off'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=0,vendor=LENOVO,version=FBKTB4AUS,date=07/01/2015,release=1.180'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=1,manufacturer=LENOVO,product=30AH001GPB,version=ThinkStation P300,serial=S4M88119,uuid=cecf333d-e511-97d5-6c0b843f98ba,sku=LENOVO_MT_30AH,family=P3'/>
    </qemu:commandline>
</domain>

<!-- vim: set ft=jinja ts=4 sts=4 sw=4 et tw=80 : -->

However, when I restarted the VM, SolidWorks had not been fooled: I got the same error message as before (activation license mode is not supported in this virtual environment). The hypervisor was also plainly visible in the systeminfo output from cmd:

Microsoft Windows [Version 10.0.17763.379]
(c) 2018 Microsoft Corporation. All rights reserved.
[...]
System Manufacturer:       Xen
System Model:              HVM domU
System Type:               x64-based PC
Processor(s):              1 Processor(s) Installed.
                           [01]: Intel64 Family 6 Model 58 Stepping 9 GenuineIntel ~2494 Mhz
BIOS Version:              Xen 4.8.5-29.fc25, 1/4/2021
[...]
Hyper-V Requirements:      A hypervisor has been detected. Features required for Hyper-V will not be displayed.

Since most docs I’ve seen on this are focused on KVM, and Xen’s documentation is a bit scarce, does anyone have any idea how to circumvent this type of hypervisor detection from Windows?

Thanks

EDIT: Taking a closer look at the libvirt XML config docs, I modified the config file again as follows (removed the qemu:commandline section, added sysinfo), unfortunately without any success (the systeminfo output is the same as above):

<domain type="xen">
    {% block basic %}
        <name>{{ vm.name }}</name>
        <uuid>{{ vm.uuid }}</uuid>
        {% if ((vm.virt_mode == 'hvm' and vm.devices['pci'].persistent() | list)
            or vm.maxmem == 0) -%}
            <memory unit="MiB">{{ vm.memory }}</memory>
        {% else -%}
            <memory unit="MiB">{{ vm.maxmem }}</memory>
        {% endif -%}
        <currentMemory unit="MiB">{{ vm.memory }}</currentMemory>
        <vcpu placement="static">{{ vm.vcpus }}</vcpu>
    {% endblock %}
    <os>
        {% block os %}
            {% if vm.virt_mode == 'hvm' %}
                <type arch="x86_64" machine="xenfv">hvm</type>
                <!--
                     For the libxl backend libvirt switches between OVMF (UEFI)
                     and SeaBIOS based on the loader type. This has nothing to
                     do with the hvmloader binary.
                -->
                <loader type="{{ "pflash" if vm.features.check_with_template('uefi', False) else "rom" }}">hvmloader</loader>
                <boot dev="cdrom" />
                <boot dev="hd" />
                <smbios mode="sysinfo" />
            {% else %}
                {% if vm.virt_mode == 'pvh' %}
                    <type arch="x86_64" machine="xenfv">pvh</type>
                {% else %}
                    <type arch="x86_64" machine="xenpv">linux</type>
                {% endif %}
                <kernel>{{ vm.storage.kernels_dir }}/vmlinuz</kernel>
                <initrd>{{ vm.storage.kernels_dir }}/initramfs</initrd>
            {% endif %}
            {% if vm.kernel %}
                {% if vm.features.check_with_template('no-default-kernelopts', False) -%}
                <cmdline>{{ vm.kernelopts }}</cmdline>
                {% else -%}
                <cmdline>{{ vm.kernelopts_common }}{{ vm.kernelopts }}</cmdline>
                {% endif -%}
            {% endif %}
        {% endblock %}
    </os>
    <sysinfo type="smbios">
        <bios>
            <entry name="vendor">LENOVO</entry>
            <entry name="version">FBKTB4AUS</entry>
            <entry name="date">07/01/2015</entry>
            <entry name="release">1.180</entry>
        </bios>
        <system>
            <entry name="manufacturer">LENOVO</entry>
            <entry name="product">30AH001GPB</entry>
            <entry name="version">ThinkStation P300</entry>
            <entry name="serial">S4M88119</entry>
            <entry name="uuid">{{ vm.uuid }}</entry>
            <entry name="sku">LENOVO_MT_30AH</entry>
            <entry name="family">P3</entry>
        </system>
    </sysinfo>
    <features>
        {% block features %}
            {% if vm.virt_mode != 'pv' %}
                <pae/>
                <acpi/>
                <apic/>
                <viridian/>
            {% endif %}

            {% if vm.devices['pci'].persistent() | list
                    and vm.features.get('pci-e820-host', True) %}
                <xen>
                    <e820_host state="on"/>
                </xen>
            {% endif %}
        {% endblock %}
    </features>

    {% block clock %}
        {% if vm.virt_mode == 'hvm' %}
            {% set timezone = vm.features.check_with_template('timezone', 'localtime').lower() %}
            {% if timezone == 'localtime' %}
                <clock offset="variable" adjustment="0" basis="localtime" />
            {% elif timezone.isdigit() %}
                <clock offset="variable" adjustment="{{ timezone }}" basis="UTC" />
            {% else %}
                <clock offset="variable" adjustment="0" basis="UTC" />
            {% endif %}
        {% else %}
            <clock offset='utc' adjustment='reset'>
                <timer name="tsc" mode="native"/>
            </clock>
        {% endif %}
    {% endblock %}

    {% block on %}
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>destroy</on_reboot>
        <on_crash>destroy</on_crash>
    {% endblock %}

    <devices>
        {% block devices %}
            {#
                HACK: The letter counter is implemented in this way because
                Jinja does not allow you to increment variables in a loop
                anymore. As of Jinja 2.10, we will be able to replace this
                with:
                {% set counter = namespace(i=0) %}
                {% set counter.i = counter.i + 1 %}
            #}
            {% set counter = {'i': 0} %}
            {# TODO Allow more volumes out of the box #}
            {% set dd = ['e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',
                'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y']
            %}
            {% for device in vm.block_devices %}
                <disk type="block" device="{{ device.devtype }}">
                    <driver name="phy" />
                    <source dev="{{ device.path }}" />
                    {% if device.name == 'root' %}
                        <target dev="xvda" />
                    {% elif device.name == 'private' %}
                        <target dev="xvdb" />
                    {% elif device.name == 'volatile' %}
                        <target dev="xvdc" />
                    {% elif device.name == 'kernel' %}
                        <target dev="xvdd" />
                    {% else %}
                        <target dev="xvd{{dd[counter.i]}}" />
                        {% if counter.update({'i': counter.i + 1}) %}{% endif %}
                    {% endif %}

                    {% if not device.rw %}
                        <readonly />
                    {% endif %}

                    {% if device.domain %}
                        <backenddomain name="{{ device.domain }}" />
                    {% endif %}

                    {% if device.script %}
                        <script path="{{ device.script }}" />
                    {% endif %}
                </disk>
            {% endfor %}

            {# start external devices from xvdi #}
            {% set counter = {'i': 4} %}
            {% for assignment in vm.devices.block.assignments(True) %}
                {% set device = assignment.device %}
                {% set options = assignment.options %}
                {% include 'libvirt/devices/block.xml' %}
            {% endfor %}

            {% if vm.netvm %}
                {% include 'libvirt/devices/net.xml' with context %}
            {% endif %}

            {% for assignment in vm.devices.pci.assignments(True) %}
                {% set device = assignment.device %}
                {% set options = assignment.options %}
                {% include 'libvirt/devices/pci.xml' %}
            {% endfor %}

            {% if vm.virt_mode == 'hvm' %}
                <!-- server_ip is the address of the stubdomain. It hosts its own DNS server. -->
                <emulator
                    {% if vm.features.check_with_template('linux-stubdom', True) %}
                        type="stubdom-linux"
                    {% else %}
                        type="stubdom"
                    {% endif %}
                    {% if vm.netvm and not
                        vm.features.check_with_template('linux-stubdom', True) %}
                        cmdline="-net lwip,client_ip={{ vm.ip -}}
                            ,server_ip={{ vm.dns[1] -}}
                            ,dns={{ vm.dns[0] -}}
                            ,gw={{ vm.netvm.gateway -}}
                            ,netmask={{ vm.netmask }}"
                    {% endif %}
                    {% if vm.stubdom_mem %}
                        memory="{{ vm.stubdom_mem * 1024 -}}"
                    {% endif %}
                    />
                <input type="tablet" bus="usb"/>
                <video>
                    <model type="{{ vm.features.check_with_template('video-model', 'vga') }}"/>
                </video>
                {% if vm.features.check_with_template('linux-stubdom', True) %}
                    {# TODO only add qubes gui if gui-agent is not installed in HVM #}
                    <graphics type="qubes"/>
                {% endif %}
            {% endif %}
                <console type="pty">
                    <target type="xen" port="0"/>
                </console>
        {% endblock %}
    </devices>
</domain>

<!-- vim: set ft=jinja ts=4 sts=4 sw=4 et tw=80 : -->

Xen does not support changing this part of CPUID out of the box. Whatever you set in libvirt, Xen will always announce itself in the CPUID 0x40xxxxxx leaves. Some people have attempted to patch Xen for that: Spoofing CPUID Results in the Linux-Stubdom Crashing · Issue #4980 · QubesOS/qubes-issues · GitHub, but as you can see, it isn’t fully successful.

Marek, thank you for your answer; I guess I’m out of luck.

Though I don’t see why Xen would restrict altering that particular CPUID leaf. Is this something that could be resolved in the 4.1 release, or is it an ‘innate’ problem of the Xen implementation?

Xen uses those CPUID leaves to communicate useful information to the VM, not “just” to let the VM know it’s running on top of Xen. PV drivers use them to bootstrap communication channels. I guess very few people need a knob to break this, and the crashes observed in the linked issue may well be related to breaking it.


Would it be possible to hide Xen from the guest OS in an HVM without the Xen PV drivers, or would that not make any difference?
I have the same problem as @breyer and was looking for potential solutions.
Do you have any direction as to what would be needed to hide Xen’s info?
Please forgive my limited knowledge of Xen virtualization.

From testing on KVM, it seems that SolidWorks does not like any QEMU strings; those get replaced when the virtio drivers are installed. On Xen, the Windows PV drivers do a similar thing, and I have little reason to believe SolidWorks would like them any better, unless Citrix signs theirs differently, since SolidWorks officially supports those. This should mean the Xen CPUID should theoretically not be the problem. However, even with PV drivers, SolidWorks will not install on a Xen HVM, while it will on KVM + virtio.

Perhaps they also check for some BIOS strings and do not like the hardcoded values; KVM supports rewriting those, but Xen does not until version 4.18. This is also why @breyer’s SMBIOS edits did not work: Xen under Qubes doesn’t support them yet, and they are unrelated to the Xen CPUID passed to the guest.
Supposedly people have found a workaround in regular Xen: create the VM but don’t boot it, manually edit the strings, then boot it. I am leaving the links here in case anyone is interested, though it seems like a lot of work to test.

On a tangent, an interesting thing I found is that Xen might(?) support passing custom CPUID values for HVM guests, per this documentation under cpuid=[“leaf:reg=bitstring”], though note this applies to an xl .cfg VM config, not the libvirt XML. I tried testing it quickly but was not able to get the VM to boot; I will look at the errors in more depth later. Frankly, just waiting for Qubes to ship a Xen that supports custom SMBIOS arguments is probably easier.
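For reference, the xl.cfg knob mentioned above looks roughly like this (an untested sketch, based on my reading of xl.cfg(5): each register takes a 32-character bitstring where '0'/'1' force a bit and 'x' keeps the default). Note that the issue Marek linked suggests zeroing this leaf is exactly what crashes the stubdom, so this may well be the source of my boot failure:

```
# xl .cfg fragment (not libvirt XML): override CPUID leaf 0x40000000,
# the hypervisor vendor leaf, forcing every bit of all four registers to 0
cpuid = [
    '0x40000000:eax=00000000000000000000000000000000,ebx=00000000000000000000000000000000,ecx=00000000000000000000000000000000,edx=00000000000000000000000000000000'
]
```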