So these are based on templates not Standalone systems based on isos.
You can create a standalone qube from an existing template or an ISO.
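For reference, both variants can be sketched from dom0 roughly like this (the qube names, label, and ISO path are hypothetical placeholders):

```shell
# Sketch only -- qube names and the ISO path are placeholders.
if command -v qvm-create >/dev/null 2>&1; then
    # Standalone qube cloned from an existing template:
    qvm-create --standalone --template debian-12 --label red gpu-qube

    # Standalone HVM with an empty disk, installed from an ISO on first boot:
    qvm-create --class StandaloneVM --label red --property virt_mode=hvm iso-qube
    qvm-volume extend iso-qube:root 30g
    qvm-start --cdrom=some-qube:/home/user/installer.iso iso-qube
else
    echo "qvm-create not found: run this in dom0"
fi
```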
Arch is a harder OS than Debian and Debian is more compatible out of the box with many applications. Would a Debian-12 template work just as well?
Yes, it would work with a Debian template, a Fedora template, etc. It is just my personal preference to use Arch Linux for this kind of thing, since it has all the latest bleeding-edge drivers.
So for this to work, there needs to be two virtual machines, one template that gets graphics processing and one template that provides graphics processing, and both can be based off templates.
No.
Are you saying I have run a live linux system to get system information to share that with dom0 because I can’t get that information in dom0? I don’t understand the IOMMU Group and why it matters. Are you saying a live linux sysem like burn live Parrot or live Debian or Live Fedora and just boot it on my system to get the information or do you mean a live template? I do not understand this part.
You have to run a live Linux system to check for yourself whether it should theoretically work without specific issues. You don’t “share” the result with dom0 afterwards; it is just for you, for your understanding of how your hardware is wired. For what an IOMMU group is, you can check here and contribute: IOMMU groups – security impact
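On the live system, the grouping can be read straight from sysfs; a device can only be passed through on its own if everything else in its IOMMU group can also be detached. A minimal sketch (requires booting with the IOMMU enabled, e.g. `intel_iommu=on` or `amd_iommu=on`):

```shell
# Print each IOMMU group with the PCI devices it contains.
if ! command -v lspci >/dev/null 2>&1; then
    echo "pciutils (lspci) is required" >&2
else
    for group in /sys/kernel/iommu_groups/*; do
        [ -d "$group" ] || continue
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            # lspci -nns prints the device with its vendor:device IDs
            echo "  $(lspci -nns "${dev##*/}")"
        done
    done
fi
```

If your GPU shares a group with, say, its own audio function only, passthrough is usually fine; if it shares a group with the chipset or other controllers, expect trouble.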
This is really hard to do. I’ve barely done any of this and already want to cry.
I don’t understand what is going on with xorg or what it means or why it’s even needed. I also don’t understand why I need a 2nd monitor. Can’t I just patch this directly to a VM?
You’ve given a lot of hints for Arch. I have no idea what to put in Debian. I’m on the verge of trying to just do this with Arch even though I don’t understand pacman or arch at all.
If I made a standalone VM with PopOS and NVIDIA drivers, or a standalone Windows machine with NVIDIA drivers, and just attached the unused GPU to the VM in dom0, will it not work without Xorg?
I ran nvidia-settings in PopOS and it showed the drivers weren’t loading. It said the X server isn’t available.
Back to Xorg hell.
So I went back to just using a standalone qube based on the Debian template. I changed it to HVM since PVH won’t allow me to attach anything.
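That switch is expected: PVH qubes can’t take PCI devices, so the mode has to be HVM before a GPU can be attached. The dom0 commands look roughly like this (“gpu-qube” and the `01_00.0` address are placeholders; list real addresses with `qvm-pci`):

```shell
# Sketch only -- qube name and PCI address are placeholders.
if command -v qvm-pci >/dev/null 2>&1; then
    qvm-prefs gpu-qube virt_mode hvm   # PVH cannot take PCI devices
    qvm-pci                            # list assignable devices and their addresses
    # permissive / no-strict-reset are often needed for GPUs, but they
    # weaken isolation; add them only if a plain attach fails.
    qvm-pci attach --persistent gpu-qube dom0:01_00.0 \
        -o permissive=True -o no-strict-reset=True
else
    echo "qvm-pci not found: run this in dom0"
fi
```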
I installed nvidia-detect and it indicates I have an NVIDIA graphics card. I installed the NVIDIA drivers. I am still not sure how the second monitor, Xorg, and another keyboard factor into all of this.
It looks like I do need a different monitor and Xorg to pass through the GPU with HVM.
Does anyone know why I can’t just do this on the same screen? I don’t understand why there needs to be separate hardware. Is a second monitor a technical requirement of Qubes, or has no one figured out how to make it work?
I tried GPU passthrough on a different laptop than last time (T470 vs NV41), but when I plug in my external GPU over Thunderbolt, the device names don’t appear in lspci or qvm-pci; they are still listed as “thunderbolt controller”.
In case someone has an idea.
edit²: Well, on a Ubuntu live CD, the Thunderbolt card doesn’t appear either. I wonder if this is because I have a Thunderbolt 3 GPU case and it’s a Thunderbolt 4 port.
edit³: Tried again on Ubuntu; it turns out I hadn’t plugged the Thunderbolt cable in correctly. It now shows up on Ubuntu, but not on Qubes OS.
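One thing worth checking on the live system is whether the enclosure is actually authorized: unauthorized Thunderbolt devices can stay hidden behind the bare controller. Assuming the bolt tooling is installed:

```shell
# List Thunderbolt devices and their authorization status.
if command -v boltctl >/dev/null 2>&1; then
    boltctl list
    # If the enclosure shows "authorized: no", authorize it by its UUID
    # (placeholder below):
    # boltctl authorize <device-uuid>
else
    echo "boltctl (bolt) not installed"
fi
```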
Here is lspci -vnn output for the thunderbolt related devices (GPU is connected )
00:0d.0 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 USB Controller [8086:461e] (rev 02) (prog-if 30 [XHCI])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: medium devsel
	Memory at 80940000 (64-bit, non-prefetchable) [size=64K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: xhci_pci

00:0d.2 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 NHI #0 [8086:463e] (rev 02) (prog-if 40 [USB4 Host Interface])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: fast devsel, IRQ 17
	Memory at 80900000 (64-bit, non-prefetchable) [size=256K]
	Memory at 80970000 (64-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: thunderbolt
In the qube with passthrough, the thunderbolt controller is working.
Just updated to 4.2 myself and am now unable to boot any HVM with over ~3GB of memory (Linux and Windows), getting the “No bootable device” error. I tried reapplying your stubdom patch but reverted to the original since I couldn’t get it working with that.
It looks like I’m on 4.17.2-8, though stubdom has a different version number; does this match your system?
EDIT: nevermind, I didn’t see it was about the legacy one
On an install started with 4.2-RC3, I also have these packages installed, but I never tried GPU passthrough because this desktop can’t take an external GPU.
In my case, the BIOS (Dasharo) doesn’t provide options to prioritize one GPU over another. So what I did was fix the issue from the failed Qubes startup by switching to another TTY (Ctrl+Alt+F2). There was an annoying issue that kept switching me back to TTY1, but that was fortunately solvable.
It turned out that X11 crashing was the reason the login screen never appeared (I saw that in the lightdm logs).
Making dom0 Xorg work with an Nvidia Graphics Card
I don’t understand why this is even an issue if the GPU is supposedly hidden, but for some reason Xorg tries to accommodate the NVIDIA graphics card instead of the Intel graphics. I fixed this with a config in /etc/X11/xorg.conf.d/00-user.conf:
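The config itself wasn’t quoted; below is a minimal sketch of the kind of snippet that pins Xorg to the Intel iGPU. The driver name and BusID are assumptions — verify the iGPU’s address with `lspci`:

```conf
Section "Device"
    Identifier "IntelGPU"
    Driver     "intel"        # "modesetting" is an alternative on recent stacks
    BusID      "PCI:0:2:0"    # assumed iGPU address; verify with lspci
EndSection
```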
Now I’m stuck with an issue where the HVM can’t boot if any PCI-e device is attached. I see some others are also encountering this. I documented this issue in this other thread:
Can you check if there is any remaining custom code in xen.xml or stubdom (anything related to max-ram-below-4g)?
Can you provide the log files and your configuration, and possibly the Xen logs (sudo xl dmesg) from when you start the HVM (add the grub parameters ‘loglvl=all’ and ‘guest_loglvl=all’ to the host first)?
Can you provide the result of sudo dnf list | grep -E ^xen in dom0, and the result of sudo lspci -vvv -s YOUR_GPU in dom0?
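The requests above can be sketched as one dom0 session (the GPU address is a placeholder, and the grub config path differs between BIOS and EFI installs):

```shell
# Sketch only -- run in dom0; 01:00.0 is a placeholder GPU address.
if command -v xl >/dev/null 2>&1; then
    sudo dnf list installed | grep -E '^xen'
    sudo lspci -vvv -s 01:00.0
    # Raise the Xen log levels, then regenerate the grub config:
    sudo sed -i 's/^GRUB_CMDLINE_XEN_DEFAULT="/&loglvl=all guest_loglvl=all /' /etc/default/grub
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # EFI installs may use /boot/efi/EFI/qubes/grub.cfg
    # After rebooting and starting the HVM, capture the hypervisor log:
    sudo xl dmesg > xen-dmesg.txt
else
    echo "xl not found: run this in dom0"
fi
```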
I upgraded from 4.1 with the dist upgrade (first with --all-pre-reboot, then rebooted, then --all-post-reboot).
xen-hvm-stubdom-legacy is also present on my T480 which was upgraded from 4.1 to 4.2.
I’ll play around with it a little and see if I can get it to work. Worst case, I think a clean install is a safe bet; thankfully I have that down to a pretty good process at this point.