Windows 10 Questions

Greetings all.

Setting about to get a Win 10 Pro qube running, I’ve been doing some digging, but I wanted to ask a question about what I want to attempt, as it seems most of the documentation is about first-time installs.

So, here's my situation. Before Qubes, I was attempting to get Xen going from scratch on CentOS 7. I had so many issues with it that I just had to give up. However, I was able to get a clean Windows 10 Pro VM working on it; I just never could get PCI passthrough to work (due to OVMF issues). So, I have an LVM volume group with three LVs (boot, data, swap) on a dedicated SSD, holding that working Windows install. It used the qemu-supplied VNC video driver, and it already has all the Xen PV drivers (v9.0) installed.

I want to see if I can reuse that install, but I am not sure how to get a standalone qube adapted to use an existing LVM VG and LVs, rather than stuffing it into the vm-pool allocation. I expect I will have to install the QWT tools as well (is there a way to do that against the dedicated LVs in the Win10 VG?).

I guess the first order of business is to determine if and how I can substitute the existing VG/LVs in the qube config. Looking at the qvm-prefs doc, I don't see anything that can set or change the volume arrangements. Is there any doc on the underlying configs? Like, does Qubes generate and use xl vm .cfg files under the hood? If so, where might they be stored and can I modify them, or do I have to modify a higher-level meta-config file? (for reasons like "don't edit this file directly, it is regenerated by xxx automatically, so any changes won't be persistent")
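(For reference, here's the kind of poking around I've been doing in dom0 while trying to answer this myself; the qube/pool/VG names are placeholders from my setup, and I'm not even sure the lvm_thin driver will accept plain, non-thin LVs like mine:)

# what storage pools Qubes already knows about
qvm-pool list    # I believe older releases use 'qvm-pool -l'
# what volumes Qubes created for an existing qube
qvm-volume list some-vm
# my existing Windows VG/LVs on the dedicated SSD
sudo lvs win10vg
# guessing at the lvm_thin driver options from the docs -- and this
# probably wants a *thin* pool, which my plain LVs are not:
qvm-pool add win10pool lvm_thin -o volume_group=win10vg,thin_pool=win10thin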

Also, looking at the Community QWT doc, it looks like the Qubes Video Driver isn’t working/available yet for Win10. Is there any VNC/SPICE support available? Ultimately, if GPU Passthrough will work, I will probably disable the emulated adapters, but until I can get it to work, I think emulated VNC/SPICE or similar will be required.

Again, thank you in advance for any advice.

After investigating some more, it appears that Qubes interfaces directly with libvirt, building its own domain definition, and doing what I want would seem to require some modifications to qubes-core-admin, as I don't see a way to "turn off" the vm-pool storage allocation or substitute it with dedicated local volumes in a different VG/pool. It also looks like qvm-start is the only way to add a single dedicated volume (and only temporarily).
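(The temporary attach I mean would be something like this; untested on my end, and the --drive syntax is just my reading of the qvm-start help, so treat it as approximate:)

# attach one existing LV to a qube for a single run (not persistent)
qvm-start win10-test --drive=hd:dom0:/dev/win10vg/boot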

I've found the qubes xml data for the VMs, and I see that there's a copy of the libvirt xml generated for each started qube in /etc/libvirt/libxl. I don't think changes there would be persistent, however, as it looks like those files get regenerated every time the qube is started.

At this point, for testing purposes, would it be more useful to build/start a domain manually using virsh or xl? I realize that it would exist outside of Qubes management, but I just want to see if the PCI VGA passthrough will work (and I've a bit more experience with it from recent weeks). I just want to make sure I won't explode Qubes if I manage it that way. :slight_smile:
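(If I go that route, I'm picturing something roughly like this from my CentOS/Xen days; the file name, LV paths, and PCI BDF are placeholders from my box, and I haven't tried any of it under Qubes yet:)

# write a minimal HVM config for xl -- sketch only, adjust everything
sudo tee /root/win10-test.cfg >/dev/null <<'EOF'
name   = "win10-test"
type   = "hvm"
memory = 8192
vcpus  = 4
disk   = [ '/dev/win10vg/boot,raw,xvda,w', '/dev/win10vg/data,raw,xvdb,w' ]
pci    = [ '04:00.0' ]
vnc    = 1
EOF
# start it outside of Qubes management and see what happens
sudo xl create /root/win10-test.cfg
sudo xl list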

Another quick question:

In the qvm-create-windows-qube repo readme:

“A more streamlined and secure installation process with packaging will be shipping with Qubes R4.1.”

Has this been done? I haven’t found any documentation about it being the case.

Here’s what I decided to do: I just created a test win10 qube using this command, which worked beautifully. After everything was done, I decided to add the PCI passthrough to it to see what would happen. Unfortunately, when I started it, I got a blank window, and eventually qrexec timed out and killed the qube.

From examining the logs, it appears I am having the same problem outlined in this thread:

Basically, Xen’s “BIOS-provided physical RAM map” has a reserved area:
Xen: [0x00000000000a0000-0x00000000000fffff] reserved

and the GPU’s BAR 6 attempts to allocate space in that area:
pci 0000:04:00.0 reg 0x30: [mem 0x000c0000-0x000dffff pref]

pcifront pci-0: claiming resource 0000:04:00.0/6
pci 0000:04:00.0: can’t claim BAR 6 [mem 0x000c0000-0x000dffff pref]: address conflict with Reserved [mem 0x000a0000-0x000fffff]
pcifront pci-0: Could not claim resource 0000:04:00.0/6! Device offline. Try using e820_host=1 in the guest config.

[00:09.0] xen_pt_realize: Assigning real physical device 04:00.0 to devfn 0x48

[00:09.0] xen_pt_register_regions: Expansion ROM registered (size=0x00020000 base_addr=0x000c0000)

[00:09.0] xen_pt_config_reg_init: Offset 0x0030 mismatch! Emulated=0x0000, host=0xc0002, syncing to 0x0002.
(there are a bunch of these, but this one seems to be syncing differently than the others, which sync to the correct host value)

Am I running afoul of some AMD incompatibility nonsense?

PS: qvm-create-windows-qube is pretty amazing! :slight_smile:

PPS: I noticed that that reserved memory space is the legacy 640KB-1MB upper memory area (old EMS/UMB territory). The 80s called, they want their memory limits back. XD I mean, really, is that still a thing now?

About the iGPU PT, I'm sure the problem goes way beyond that, but there's one thing I think @yann didn't mention: the "init primary VGA adapter" setting in the BIOS/UEFI.
You can select IGD or PCIe there; maybe that helps get a better VGA/GPU initialization?

Nope, it uses libvirt XMLs, unfortunately ^^
I've not tested it yet, but I think you can create domUs the Xen way, since the xl commands are available.
For per-domain customizations, you can use this doc: Custom libvirt config — core-admin mm_203ee458-0-g203ee45-dirty documentation
In short, you create per-domain XML files which extend the Qubes one.
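A quick example from memory of that doc (the vm name and the serial console bit are just examples):

# create the override dir and a per-VM template that extends the stock one
sudo mkdir -p /etc/qubes/templates/libvirt/xen/by-name
sudo tee /etc/qubes/templates/libvirt/xen/by-name/win10.xml >/dev/null <<'EOF'
{% extends 'libvirt/xen.xml' %}
{% block devices %}
    {{ super() }}
    <serial type="pty"/>
{% endblock %}
EOF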

Ahah, emm386.exe anyone ? ^^

PS: I also come from a vanilla Xen background, on Debian w/o libvirt, and it's working flawlessly.
What do you think about a "from Xen to Qubes" guide? It could later be extended to "from hypervisor X to Qubes" (by others ^^).
It could cover stuff like "how you did it → how to do it on Qubes".
As most hypervisors share the same "config principles", I was thinking it could be useful for people wanting to quit KVM, VMware, etc., and catch the Qubes train.
It's just that it would take me forever to do it alone, plus as I run Qubes nested I can't test/use everything.

Yeah, that’s been my challenge ever since I started this journey with Xen. Trying first to get VGA/PCI Passthrough working on a single adapter, and now trying to get it working with a secondary adapter. Unfortunately, my BIOS setup doesn’t have an option to select the primary/boot adapter. I’ve dug around a bit looking for a way to potentially patch it, but have not come up with anything solid yet.

Someone made a ReBAR BIOS patch which sorta does what I think I would have to do, but I haven't dug into the guts of UEFI BIOS (and specifically AMI UEFI BIOS) enough to know where/how to hook in to select a boot adapter. There's also a patch for vgaarb that someone created some time back, which would solve it at the kernel level, but I'd need to build a custom kernel after figuring out how to apply it to current-day vgaarb. There's a way to solve this, I know, but it's going to take me a lot of time to tease it out. In the meantime, I need a working system to be able to research it.

Thanks for the Custom libvirt info! I think that maybe can solve my specific volume issues with a windows qube… will read into it. Unfortunately, I still don’t think I will be able to get a GPU passthrough working with this funky adapter setup. I wish I could just swap the video cards, but the physical and thermal issues make that a no-go (the RX6950XT would be exhausting right on top of my NVME boot SSD, and that would bake it to a crisp).

Yeah boy, HIMEM.SYS for the win! :stuck_out_tongue: It’s been a LONG time since I’ve had to monkey with that stuff.

I was able to get things running on Xen/CentOS 7, but passthrough just caused OVMF to crash in so many ways that I couldn't fix or get assistance with, so Qubes is my last shot in the dark at getting an HV setup working.

As far as docs go, after two months of googling and reading documentation, wikis, reddit posts, forum posts, random blog posts, and other sites, I think virtualization documentation, specifically for Xen, needs a serious overhaul. Most of the info isn't well dated, so there's no way to know if it is even valid anymore, and the new stuff is very sparse. Virtualization has come quite a ways in the last decade, but it still has a ways to go and needs much better documentation so more people can get involved with it.

It's more than just introductory documentation that is needed, too. More detailed technical information on what's going on under the hood would be nice, so tech-competent users have some idea of what they are looking at.

I’ve written technical info for wikis before, so I wouldn’t mind helping efforts to improve the documentation situation, but obviously, it’s gotta be for something I’m engaged with (read: gotten to work / use regularly).

Anyway, as I posted in the linked thread, I am starting to think that maybe what is happening is some confusion over configuration due to the primary adapter getting partially configured before it is consigned to pciback, and then when it is passed through, the re-configuration is causing conflicts. I suppose I could test this theory by just using the 6950 as the Qubes GUI card and pass through the 550 and see what happens. That still won’t be conclusive, as the 6950 may still have passthrough issues by itself even if I somehow manage to get BIOS/vgaarb to swap them around, but if it “works” on the 550 being passed through, maybe there’s a chance. If it doesn’t, then the whole passthrough thing is going to be a bust, and I can probably stop wasting time on it and just go back to WOBM and do Hyper-V (please god no).

Thank you for the response, though! I do appreciate it. :slight_smile:

I'll answer quickly and "messily" as I've no time now, but I'll elaborate more tomorrow if you need/want.

My working Xen setup is:

  • BIOS boot on a MSI B350 PC-Mate, Ryzen 1700x 8c/16t
  • AMD RX580 on the primary/x16 PCIe slot → PT to my gaming win7 HVM domU. I modprobe-blacklist the amdgpu driver and tell xorg.conf to only use the nvidia card (sketch after this list). The GPU does not support FLR, but I found ways to make it work, i.e. I can reboot w7 w/o rebooting dom0
  • nvidia GT710 on a PCIe x1 slot (the small ones) → for dom0 GUI
  • the 2nd “GPU/PCIe x8 slot” (the one meant for SLI/crossfire) is occupied by a 10GbE NIC to connect to my backup dom0
  • both GPUs are connected to my 3 screens
  • Xen on debian stable, no libvirt. Using XFCE as dom0 GUI
  • working HVM domUs with SeaBIOS, I never used OVMF/UEFI: Debians, pfSense (as a global dom0/domU fw, with MoBo NIC PT), TrueNAS Core (FreeBSD-based, with an NVMe Optane PT), w7 (GPU+audio+USB controller: all PT), OpenBSD, Solaris 11.3, a nested Xen-on-Debian, and even … a nested Qubes \o/
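For the amdgpu-blacklist / xorg part, it's roughly this on my Debian dom0 (the driver name and BusID are from my box, so adapt them; you may also need to regenerate the initramfs so the blacklist applies early):

# keep dom0 from grabbing the AMD card
echo "blacklist amdgpu" | sudo tee /etc/modprobe.d/blacklist-amdgpu.conf
# pin X to the dom0 GPU only (xorg.conf or a conf.d snippet, whichever you use)
sudo tee /etc/X11/xorg.conf.d/10-dom0-gpu.conf >/dev/null <<'EOF'
Section "Device"
    Identifier "dom0-gpu"
    Driver     "nouveau"       # or "nvidia" with the proprietary driver
    BusID      "PCI:7:0:0"     # the GT710's address on my system
EndSection
EOF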

Everything is flawless; maybe I'm just lucky with HW choices, or with the HW+SW combo.
On boot, the AMD GPU displays the BIOS, then GRUB, then part of the VT1 logs; then it automagically swaps the display to the nVidia GPU for the rest of the VT1 logs (hence the display gets frozen on the AMD-connected screens from then on, till I PT it to w7 or another domU), then it displays X.

Considering the “from Xen to Qubes” doc, I was talking about the Qubes documentation only.
For the Xen docs: I recently joined the Debian-Xen team because I wanted to help them - and thank them - and from there I learnt there's a new guy responsible for the rework of the Xen docs (he works at Vates, creators of XCP-ng/XenOrchestra). I contacted him and I'm heavily pushing for the docs to be brought up to date really quickly. But whether I'll succeed is another story!

But I feel you, my first experiences with Xen were exactly as you said, painful: "googling and reading documentation, wikis, reddit posts, forum posts, random blog posts, and other sites". There's no easy way.
For Xen, I find the best resource is the man pages on "Xen Documentation". You read the wiki pages for general pointers/guidance, then check the last edit date at the bottom of the page, then read the man pages to adapt to the recent way of doing stuff.
Also, I found the documentation from XCP-ng may be of help sometimes, for anything not too related to XAPI stuff. Also the virt guide from SUSE.
You can also mimic KVM/libvirt config stuff on Xen.
More later ^^ Don’t give up !

Nice! Thanks for the details on your setup!

I just tried what I mentioned above, moving Qubes back to the primary 6950XT GPU (it was a pain, because it requires nomodeset=0 so Qubes doesn't hang, and no xorg.conf in /etc/X11 so the GUI will come up).

I pciback'd the RX550 PCI ID, assigned it to the Win10 qube, then started it. Same thing: the window comes up, it tries to start, black window. 30 seconds pass, qrexec_daemon complains about not being able to connect, and it kills the qube. Checking the dm log, I see no errors whatsoever. None on the PCI setup for the video card (unlike the 6950XT, which has the errors mentioned above), and no errors from anything else, but it still hangs and dies.
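(For reference, this is roughly how I set that part up; 04:00.0 is my RX550's BDF, and I'm not certain both -o options are strictly needed:)

# hide the card from dom0: add rd.qubes.hide_pci=04:00.0 to GRUB_CMDLINE_LINUX
# in /etc/default/grub, regenerate the grub config, and reboot
qvm-pci    # list PCI devices, find the ident, e.g. dom0:04_00.0
qvm-pci attach --persistent win10 dom0:04_00.0 -o permissive=True -o no-strict-reset=True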

sigh I seem to be chasing an untamed ornithoid without cause. >.<

How much RAM did you assign to this VM? Check this:
Contents/gaming-hvm.md at master · Qubes-Community/Contents · GitHub

Regarding the problem with passthrough of your primary GPU check this issue:
rd.qubes.hide_pci doesn't work anymore after upgrade to Qubes 4.1 · Issue #7976 · QubesOS/qubes-issues · GitHub

It's currently set at 8GB, but I've tried everything from 16GB up to 64GB (the system has 128GB of memory).

I did not do the TOLUD mod, because further reading into the github issue seemed to indicate that R4.1 versions were already patched (both xen-hvm-stubdom-linux and xen-hvm-stubdom-linux-full are at v1.2.5-1).
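(For reference, this is how I checked the versions and went looking for the rootfs images; I believe the images live under /usr/libexec/xen/boot, but double-check on your install:)

rpm -q xen-hvm-stubdom-linux xen-hvm-stubdom-linux-full
ls -l /usr/libexec/xen/boot/qemu-stubdom-linux*rootfs*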

That said, I checked the rootfs images and they were indeed NOT patched, so I went ahead and patched them. I started the VM, and it actually booted into Windows. It also loaded the Microsoft AMD driver for the RX550, and the display is working! Woo! Progress!

I have a desktop displayed on the RX550, but of course I can’t move the mouse into it because it’s just using the Qubes Display/mouse. I have to figure out how to get the mouse/keyboard passed through directly into Windows, but before I do that, I need to swap GPUs, and I think the RX6950 is going to be a bit more of a problem child.

Thanks for the reminder to try that. :slight_smile:

You may be interested in this:

Quick update, I swapped back to the RX550 as the Qubes dedicated GPU and the RX6950 as the passthrough for win10. With the 3.5G patch, it also works!

Next tasks I have are:
1. Mouse/KB passthrough (going through that post above; sounds like what I want to do, thanks!). I would like Windows to recognize the devices as the actual devices (the Razer Naga mouse and the Logitech G910 keyboard), so the Razer/Logitech drivers can be installed and the devices managed with their utility programs.
2. Custom Volume settings.
3. Getting sound to work fully (I’d like to have the full support for my audio codecs and channels, if possible).
4. Possibly dropping the Qubes GUI for it, just in case things get weird with desktop/graphics settings.
5. Hotplug PCI passthrough – not high on the priority list, but would be nice.

One other thing about the GPU passthrough: when the VM boots, the Windows logo and spinning dots appear only in the Qubes window. Only once it gets to or past the login screen does the GPU activate and display anything. If I disable the Qubes VGA adapter, will it display the boot-up logo on the PT GPU?

1: I dunno if that changed, but I found USB passthrough kinda sluggish for games (your HW suggests gaming). My solution was to PT one of the USB controllers to the Win domU and plug the dedicated KB/mouse into it, but that's a bit cumbersome …
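On Qubes, I guess the equivalents would look something like this (the device idents are examples; check the list output first):

# PT a whole USB controller to the Windows qube
qvm-pci    # spot the USB controller, e.g. dom0:00_12.0
qvm-pci attach --persistent win10 dom0:00_12.0
# or per-device through the USB qube (the possibly sluggish way)
qvm-usb
qvm-usb attach win10 sys-usb:2-1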

2+3: As you're on a desktop, there are some HW tricks to make audio work w/o touching software, but it depends on your setup AND requirements!

Your audio card has Audio Out and Line In jacks.
Your GPUs only have Audio Out via HDMI/DP, but sound is passed to your monitor(s).
On your monitors, you have an Audio Out jack.
Plug one of the monitors' Audio Out into the Line In of the sound card with a jack-jack cable, and configure the audio card to play input from the Line In.
As you may game and use vocal comms, note this won’t allow the mic to work.
So like in (1), I PT the audio card to the gaming machine, and use the same method but reversed (sound for dom0 and other qubes goes through HDMI to the monitor to Line In).
Another solution would be to use a cheap USB audio device.

5: I think you’re saying the Qubes manager won’t allow that, but it works using Xen commands.
xl pci-attach, xl block-attach, etc. (xl help | grep attach).
I don't know if it still works (my last try was last year), if it's recommended, or what the implications are for Qubes.
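Back then it looked roughly like this (from memory, so double-check against xl(1) and the disk-spec docs):

xl list    # get the domain name / ID
xl pci-attach win10 04:00.0
xl block-attach win10 'target=/dev/win10vg/data,format=raw,vdev=xvde,access=rw'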

Just try it ^^ But for info, it works like that on my system too. Except it seems you also get a desktop from the VGA adapter? I don't: the VGA adapter gets deactivated, but I'm on w7 so YMMV (IIRC the Xen wiki talks about that).
I've never tried to remove this "feature", because at least on my setup, if there's a PT problem, the VGA adapter still allows me to log in to the VM and check what the problem is.

Thanks for the feedback and info!

I’m currently making an override libvirt xml template for my win10 vm using the Customization doc from a few posts above and had a few questions.

First, the doc mentions putting user templates and overrides in /etc/qubes/templates/libvirt (or the xen/by-name dir inside it). However, that dir (from templates/ onward) doesn't exist. I presume it is the correct folder but just has to be created first?
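(I'm assuming it's simply a matter of:)

sudo mkdir -p /etc/qubes/templates/libvirt/xen/by-name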

Next, the default templates in /usr/share/qubes/templates/libvirt (and its devices/ subfolder) look like they are missing something. For example, the disk entries generated by xen.xml and devices/block.xml have a driver line that is missing the type='raw' attribute and a target line that is missing the bus='xen' attribute. Are these the actual templates used, or are there others somewhere else?

(What I mean is that when the actual final libvirt vm xml files are generated, they have these attributes, but they are nowhere to be found in the templates, which would seem to point to these templates not being the ones used to generate the final vm xml files)

In fact, it is even weirder that the template xml files use double-quotes (") as delimiters, but the generated <vm_name>.xml files use single quotes (') instead.

I think I can get the changes to work, but I am concerned that I am not editing the correct template files because of these weird little quirks.

Yes.

These are the correct templates that are used to generate the configs.

OK. Then I guess I am misunderstanding something about how it goes from the xen.xml template to the final generated version. As an example:

xen.xml:

        {% for device in vm.block_devices %}
            <disk type="block" device="{{ device.devtype }}">
                <driver name="phy" />
                <source dev="{{ device.path }}" />
                {% if device.name == 'root' %}
                    <target dev="xvda" />
                {% elif device.name == 'private' %}
                    <target dev="xvdb" />
                {% elif device.name == 'volatile' %}
                    <target dev="xvdc" />
                {% elif device.name == 'kernel' %}
                    <target dev="xvdd" />
                {% else %}
                    <target dev="xvd{{dd[counter.i]}}" />
                    {% if counter.update({'i': counter.i + 1}) %}{% endif %}
                {% endif %}

                {% if not device.rw %}
                    <readonly />
                {% endif %}

                {% if device.domain %}
                    <backenddomain name="{{ device.domain }}" />
                {% endif %}
            </disk>
        {% endfor %}

gpu_base-win10.xml (final):

<disk type='block' device='disk'>
  <driver name='phy' **type='raw'**/>
  <source dev='/dev/qubes_dom0/vm-gpu_base-win10-root-snap'/>
  <target dev='xvda' **bus='xen'**/>
</disk>

Where do those two specified attributes come from? Is there another config override xml that I am missing?

The reason I am concerned is that if I make an override xml and put it in the /etc/qubes/templates/… folder, I am basing my edits/overrides on the existing templates I know about. If there's one I am missing, it will likely muff something (with my luck, anyway :P). Is there any way to test-generate the final vm xml file without launching the vm?
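(What I'm imagining for a dry run is something like the following, purely a guess from skimming qubes-core-admin; I'm assuming the render method is called create_config_file(), so treat that name as hypothetical:)

sudo python3 - <<'EOF'
import qubes                       # dom0's qubes-core-admin module
app = qubes.Qubes()                # load the current qubes.xml
vm = app.domains["gpu_base-win10"]
# hypothetical: render the libvirt XML for the VM without starting it
print(vm.create_config_file())
EOF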

Thanks! :slight_smile:

They are templates that the Qubes tools use to generate the final config, not parts of a config that are put together as-is.

Yeah, I was just using the devices section as an example to point out the discrepancy I am seeing.

I think I understand the basics of the templating system; I'm just having a bit of difficulty tracing the generation process from the various templates to the end product. The way I understand it, it passes the vm object to the Jinja parser and starts with the xen.xml template in /usr/share/qubes/templates/libvirt, which includes the three templates in the devices folder (block.xml, pci.xml, net.xml) if they are required, then goes to the /etc/qubes/templates/libvirt folders to pick up the xen-user.xml and the by-name/<vm_name>.xml templates as overrides.

None of the templates seem to add the type='raw' and bus='xen' attributes, so where do they come from? Are they added by another post-processing function after the templates have been processed, or by another template I am unaware of? That's what makes me worry I am missing something.

Also, to do what I want to do, it would seem I need to define a “disks” block as a replaceable sub-block nested inside of the “devices” block. I don’t want to have to regen the entire “devices” block just to change out the disk sections. It would seem to be easier to just add some extra block definitions in the devices section of the base xen.xml template (as opposed to copy/pasting most of the devices block in an override template). The downside to doing that would be that the changes would get clobbered in a qubes update which updated the xen.xml template. That’s not horrible, though; I could always re-add them.
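Something like this is what I have in mind; completely hypothetical, since it assumes I first patch the stock xen.xml to wrap its disk loop in a {% block disks %} … {% endblock %} pair (the block name is my own invention):

sudo tee /etc/qubes/templates/libvirt/xen/by-name/gpu_base-win10.xml >/dev/null <<'EOF'
{% extends 'libvirt/xen.xml' %}
{# override only the (hypothetical) disks sub-block; the rest of devices stays stock #}
{% block disks %}
    <disk type="block" device="disk">
        <driver name="phy" />
        <source dev="/dev/win10vg/boot" />
        <target dev="xvda" />
    </disk>
{% endblock %}
EOF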

Adding them to a xen-user.xml override template is an alternative, but it would be a bit of copypasta that would still need to be examined after a qubes update which touched xen.xml.

I apologize if I seem a bit dense. I've been looking at this the entire evening, and it's 3AM, so I'm likely missing the bleeding obvious from fatigue. :slight_smile:

It’s better to check the source code to see what’s happening.

Did you read this?
Custom libvirt config — core-admin mm_203ee458-0-g203ee45-dirty documentation