Create a Gaming HVM

your setup:

[2024-01-12 14:55:14] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
...
[2024-01-12 14:55:14] pci 0000:00:00.0: reg 0x30: [mem 0x000c0000-0x000dffff pref]
...
pci 0000:00:00.0: can't claim BAR 6 [mem 0x000c0000-0x000dffff pref]: address conflict with Reserved [mem 0x000a0000-0x000fffff]

my setup:

[2024-01-11 00:09:23] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved 
...
[2024-01-11 00:09:23] pci 0000:00:00.0: reg 0x30: [mem 0xfb000000-0xfb07ffff pref] 

It seems that, for some reason, on your setup Xen is assigning BAR 6 of your GPU to a memory range it does not allow to be used.
I'm nearly 100% sure that is the issue.
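To spell the conflict out with the numbers from the two logs above (on the working setup, BAR 6 lands at 0xfb000000, well clear of the reserved area), here is the range check as plain arithmetic:

```shell
# The guest log shows BAR 6 (the GPU's expansion ROM) placed at
# 0xc0000-0xdffff, entirely inside the reserved legacy region
# 0xa0000-0xfffff, so pcifront cannot claim it.
bar_start=$((0x000c0000)); bar_end=$((0x000dffff))
res_start=$((0x000a0000)); res_end=$((0x000fffff))
if [ "$bar_start" -ge "$res_start" ] && [ "$bar_end" -le "$res_end" ]; then
  echo "BAR 6 lies inside the reserved region: conflict"
fi
```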


Thanks a lot for going the extra mile. I’ll give it a go probably Sunday.


try to compile the stubdom without this line https://github.com/QubesOS/qubes-vmm-xen-stubdom-linux/blob/main/qemu/patches/series#L21
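For anyone attempting this, a sketch of what disabling that line could look like (the patch names below are placeholders, and the actual rebuild of the stubdom goes through the Qubes build system, which I won't sketch here):

```shell
# Sketch: disable one patch in the stubdom's QEMU patch series. The real file
# is qemu/patches/series in a clone of qubes-vmm-xen-stubdom-linux, with the
# patch in question on line 21; quilt-style series files treat lines starting
# with '#' as comments, so commenting it out is easy to revert. Demonstrated
# here on a scratch copy (line 2 instead of line 21).
mkdir -p /tmp/stubdom-demo/qemu/patches
printf '%s\n' 0001-first.patch 0002-second.patch > /tmp/stubdom-demo/qemu/patches/series
sed -i '2s/^/#/' /tmp/stubdom-demo/qemu/patches/series
cat /tmp/stubdom-demo/qemu/patches/series
```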


I did a clean 4.2 install and I'm still getting “No Bootable Device” on both Linux and Windows HVMs unfortunately, so I'm going to roll back to 4.1 for now!

@Cameron I’ve found that happens when I don’t include the “gpu_” prefix (after having patched the stubdom). Do you have this issue even when trying to start up the qube with 2GB of RAM or less?


This is on 4.2 without the stubdom patch, so no prefix should be required, but I did have it there nonetheless.

Applying the stubdom patch on a clean 4.2 install does get it to boot with 32GB of memory, but I’m getting the same blue screen shortly after boot as here HVM no bootable device after gpu passthrough - #12 by Cameron and no PCI devices except the NVMe are functional.

Removing the prefix and setting the memory to 2GB allows it to boot part way but it throws the same bluescreen before it boots fully (nvlddmkm.sys). Removing the GPU allows this one to boot fully but again none of the PCI devices except for the NVMe drive appear to work.

To elaborate, none of the PCI devices appear to function: no network adapter, no USB, and no GPU apart from the 4 sensors it briefly displays before BSODing.

I have tried with and without Above 4G Decoding (I had this on previously; it doesn’t seem to have any effect) and Resizable BAR (which prevents Qubes from displaying video at any point past GRUB for me).

Going to install 4.1 and see if that still works properly.

EDIT: Fresh 4.1 install with just the stubdom patch applied works perfectly so it’s definitely an issue with 4.2.


I have now tested this with similar results. I tested it on Qubes 4.2 with testing updates (and thus xen 4.17.2-8). Here are the logs:

dnf-list.log (721 Bytes)
guest-manjaro.log (38 Bytes)
guest-manjaro-dm.log (40.9 KB)
lspci.log (1.4 KB)
xl-dmesg.log (114.0 KB)

In particular, I still see the following messages:

[2024-01-14 15:16:15] pci 0000:00:00.0: can't claim BAR 6 [mem 0x000c0000-0x000dffff pref]: address conflict with Reserved [mem 0x000a0000-0x000fffff]
[2024-01-14 15:16:15] pcifront pci-0: Could not claim resource 0000:00:00.0/6! Device offline. Try using e820_host=1 in the guest config.

This seems to be similar to before. However, this VM didn’t seem to have a display output on my main screen (the one dom0 is on), so I couldn’t see the same “no bootable device” text I saw in the previous setup. But I did see something similar in xl dmesg:

(d7) Booting from Hard Disk...
(d7) Boot failed: could not read the boot disk
(d7) 
(d7) enter handle_18:
(d7)   NULL
(d7) Booting from Floppy...
(d7) Boot failed: could not read the boot disk
(d7) 
(d7) enter handle_18:
(d7)   NULL
(d7) No bootable device.

So I guess we’re back to Xen debugging. Unfortunately, my time for this project is kind of running out, so I don’t know when I’ll be able to continue these tests. I’m hoping someone manages to figure it out.


Thank you! :slight_smile:

If other people have the same issue, can dedicate a significant amount of time to this project, and have the skills to read and write basic C code, I can help them try to debug Xen/QEMU.
But since I don’t have the hardware required to reproduce this issue and test it properly, I won’t try to solve it alone.


I was able to edit the grub file to contain the exclusion. After I did that, instead of getting the usual Qubes box for the LUKS password, this comes up. But if I hit Enter, it prompts me for the password through the terminal and I am able to boot Qubes up.


I was also able to create an HVM qube, install Windows Tiny11, add 4GB of RAM, and install all the C++ and .NET 3.5 runtimes plus QWT, all good. Until I attempt to pass my GPU through the Devices tab in the Windows qube settings.

Then I get the “guest has not initialized the display (yet)” error. But if I lower the RAM to under 2GB then it boots with the GPU passed through. However, although I installed the original Nvidia drivers for that specific GPU, this is what it looks like in Device Manager, and nothing is displayed through the HDMI output connected to a separate monitor.
[screenshot: signal-2024-01-15-011834_002]

Any direction pointers?

Thank you for the help. I feel that I am getting closer and closer.


Hi neowutran! I would love to offer my hardware virtually, be a guinea pig, and help find a solution for myself and others, as it seems that 4.2 has messed up a lot of HVMs.

I will even pay $500 for a successful outcome, or $200 for trying and failing. To get things going, I can pay the $200 upfront and the difference if successful.

This offer stands for any active members on qubes-os forum that have experience with GPU passthrough and max ram issues that came with the 4.2/xen updates.

A few conditions: we must get Win10 or Win11 to work with a minimum of 16GB RAM and be able to pass my Nvidia GPU through successfully. While we attempt this, try to keep Qubes OS security and integrity as high as possible (avoiding downgrades and cleaning up as we attempt things that fail). I am also not the most adept Unix user, so patience is key. I can follow most instructions if they are written in detail, and I will provide feedback.

What I have tried so far and failed:

  • excluding the GPU and its audio function from dom0 using rd.qubes.hide_pci and regenerating the grub config
  • passing the GPU and its audio function with permissive and no-strict-reset set to True
  • attempting to install the video drivers provided by both Dell and Nvidia on virgin clones of the fresh Windows install
  • used this XML snippet in /etc/qubes/templates/libvirt/xen/by-name/ with both 2G and 3.5G, and confirmed in virsh that the changes are reflected; in both cases, it did not manage to boot with RAM > 2GB
      {% if vm.virt_mode == 'hvm' %}
            <!-- server_ip is the address of stubdomain. It hosts its own DNS server. -->
            <emulator
                {% if vm.features.check_with_template('linux-stubdom', True) %}
                    type="stubdom-linux"
                {% else %}
                    type="stubdom"
                {% endif %}
                {% if vm.netvm %}
                    {% if vm.features.check_with_template('linux-stubdom', True) %}
                    {% if (vm.devices['pci'].persistent() | list) %}
                    cmdline="-qubes-net:client_ip={{ vm.ip -}}
                        ,dns_0={{ vm.dns[0] -}}
                        ,dns_1={{ vm.dns[1] -}}
                        ,gw={{ vm.netvm.gateway -}}
                    ,netmask={{ vm.netmask }} -machine xenfv,max-ram-below-4g=3.5G"
                    {% else %}
                    cmdline="-qubes-net:client_ip={{ vm.ip -}}
                        ,dns_0={{ vm.dns[0] -}}
                        ,dns_1={{ vm.dns[1] -}}
                        ,gw={{ vm.netvm.gateway -}}
                    ,netmask={{ vm.netmask }}"
                    {% endif %}
                    {% else %}
                    {% if (vm.devices['pci'].persistent() | list) %}
                    cmdline="-net lwip,client_ip={{ vm.ip -}}
                        ,server_ip={{ vm.dns[1] -}}
                        ,dns={{ vm.dns[0] -}}
                        ,gw={{ vm.netvm.gateway -}}
                    ,netmask={{ vm.netmask }} -machine xenfv,max-ram-below-4g=3.5G"
                    {% else %}
                    cmdline="-net lwip,client_ip={{ vm.ip -}}
                        ,server_ip={{ vm.dns[1] -}}
                        ,dns={{ vm.dns[0] -}}
                        ,gw={{ vm.netvm.gateway -}}
                    ,netmask={{ vm.netmask }}"
                    {% endif %}
                  {% endif %}
                {% endif %}
                {% if vm.stubdom_mem %}
                    memory="{{ vm.stubdom_mem * 1024 -}}"
                {% endif %}
                {% if vm.features.check_with_template('audio-model', False)
                or vm.features.check_with_template('stubdom-qrexec', False) %}
                    kernel="/usr/libexec/xen/boot/qemu-stubdom-linux-full-kernel"
                    ramdisk="/usr/libexec/xen/boot/qemu-stubdom-linux-full-rootfs"
                    {% endif %}
                    {% if not vm.netvm %}
                {% if (vm.devices['pci'].persistent() | list) %}
                    cmdline="-machine xenfv,max-ram-below-4g=3.5G"
                    {% endif %}
                    {% endif %}
                    />
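The override above pins QEMU's xenfv machine property max-ram-below-4g to 3.5G. A quick way to check that the value actually reached the emulator command line (the XML below is a hand-made sample just for the grep; on a real dom0 you would inspect the output of `virsh -c xen:/// dumpxml <qube-name>` instead):

```shell
# Sample of what the rendered emulator element might look like; the IPs are
# placeholders, only the -machine argument matters here.
cat > /tmp/domain-demo.xml <<'EOF'
<emulator type="stubdom-linux"
    cmdline="-qubes-net:client_ip=10.137.0.10,dns_0=10.139.1.1,dns_1=10.139.1.2,gw=10.137.0.1,netmask=255.255.255.255 -machine xenfv,max-ram-below-4g=3.5G"/>
EOF
# Extract the argument to confirm the template change took effect:
grep -o 'max-ram-below-4g=[^" ]*' /tmp/domain-demo.xml
```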

I had someone help me with these tips and tricks, but they ran out of ideas. I am willing to give it one more go and see if there is any way I can avoid using two laptops and compress all my work into one.

I tried attaching some logs you requested from another user, but as a new user I am not allowed, so I will paste them here. These logs were captured when the Windows qube was started with > 2GB of RAM and it was stuck at the “could not read the boot disk” black screen. I'm not sure where I have to add the grub parameters ‘loglvl=all’ and ‘guest_loglvl=all’; this is without them.
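For what it's worth, `loglvl=all` and `guest_loglvl=all` are options for the Xen hypervisor itself, so they go on the Xen command line (GRUB_CMDLINE_XEN_DEFAULT in /etc/default/grub), not on the Linux one. A sketch, demonstrated on a scratch copy of the file with made-up existing contents:

```shell
# Prepend the Xen logging options to GRUB_CMDLINE_XEN_DEFAULT.
# /tmp/grub-demo stands in for /etc/default/grub here.
cat > /tmp/grub-demo <<'EOF'
GRUB_CMDLINE_XEN_DEFAULT="console=none"
EOF
sed -i 's/^GRUB_CMDLINE_XEN_DEFAULT="/&loglvl=all guest_loglvl=all /' /tmp/grub-demo
cat /tmp/grub-demo
# Then regenerate the grub config, e.g. (the path differs between BIOS
# and EFI installs):
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```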

dnflist


Errors during downloading metadata for repository 'qubes-dom0-cached':
  - Curl error (37): Couldn't read a file:// file for file:///var/lib/qubes/updates/repodata/repomd.xml [Couldn't open file /var/lib/qubes/updates/repodata/repomd.xml]
Error: Failed to download metadata for repo 'qubes-dom0-cached': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Ignoring repositories: qubes-dom0-cached
xen.x86_64                                        2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-hvm-stubdom-linux.x86_64                      4.2.8-1.fc37                      @anaconda         
xen-hvm-stubdom-linux-full.x86_64                 4.2.8-1.fc37                      @anaconda         
xen-hypervisor.x86_64                             2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-libs.x86_64                                   2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-licenses.x86_64                               2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-runtime.x86_64                                2001:4.17.2-8.fc37                @qubes-dom0-cached
lspci

sudo lspci -vvv -s 01:00.0
01:00.0 VGA compatible controller: NVIDIA Corporation GA104GLM [RTX A4500 Laptop GPU] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: Dell Device 0b2b
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 16
	Region 0: Memory at 93000000 (32-bit, non-prefetchable) [size=16M]
	Region 1: Memory at 6000000000 (64-bit, prefetchable) [size=16G]
	Region 3: Memory at 6400000000 (64-bit, prefetchable) [size=32M]
	Region 5: I/O ports at 3000 [size=128]
	Expansion ROM at 94080000 [disabled] [size=512K]
	Capabilities: [60] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [78] Express (v2) Legacy Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s, Width x8 (downgraded)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range AB, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [b4] Vendor Specific Information: Len=14 <?>
	Capabilities: [100 v1] Virtual Channel
		Caps:	LPEVC=0 RefClk=100ns PATEntryBits=1
		Arb:	Fixed- WRR32- WRR64- WRR128-
		Ctrl:	ArbSelect=Fixed
		Status:	InProgress-
		VC0:	Caps:	PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
			Arb:	Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
			Ctrl:	Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
			Status:	NegoPending- InProgress-
	Capabilities: [250 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [258 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=255us PortTPowerOnTime=10us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=10us
	Capabilities: [128 v1] Power Budgeting <?>
	Capabilities: [420 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
	Capabilities: [900 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [bb0 v1] Physical Resizable BAR
		BAR 0: current size: 16MB, supported: 16MB
		BAR 1: current size: 16GB, supported: 64MB 128MB 256MB 512MB 1GB 2GB 4GB 8GB 16GB
		BAR 3: current size: 32MB, supported: 32MB
	Capabilities: [c1c v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [d00 v1] Lane Margining at the Receiver <?>
	Capabilities: [e00 v1] Data Link Feature <?>
	Kernel driver in use: pciback
	Kernel modules: nouveau

sudo lspci -vvv -s 01:00.1
01:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)
	Subsystem: NVIDIA Corporation Device 0000
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin B routed to IRQ 17
	Region 0: Memory at 94000000 (32-bit, non-prefetchable) [size=16K]
	Capabilities: [60] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [78] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 75W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s, Width x8 (downgraded)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range AB, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [160 v1] Data Link Feature <?>
	Kernel driver in use: pciback
	Kernel modules: snd_hda_intel

Thank you, and please contact me privately if you or anyone else is open to dedicating some time and patience to getting to the bottom of this.


Let’s try.
I do believe your issue is not the same as the one @deeplow is having.
I think it is an Nvidia driver bug; let’s try to debug it.


I finally got my eGPU device visible in dom0. For some reason it seems my Thunderbolt cable is wacky: when it's not properly connected, the case fans start spinning but no devices show up on the computer. The eGPU itself is working fine, though; I tried in a liveCD and I was able to use it :sweat:

Now I face an issue: it seems that dom0 still attaches drivers to my eGPU devices, which prevents the HVM from using it.

I have this in /etc/default/grub; should it be enough? dom0 is still loading the nouveau module :frowning:

GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX usbcore.authorized_default=0 rd.qubes.hide_pci=04:00.0,04:00.1"

Output of lspci -vnn for these devices:

04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3283]
	Flags: fast devsel, IRQ 20
	Memory at 82000000 (32-bit, non-prefetchable) [virtual] [size=16M]
	Memory at a0000000 (64-bit, prefetchable) [disabled] [size=256M]
	Memory at b0000000 (64-bit, prefetchable) [disabled] [size=32M]
	I/O ports at 2000 [disabled] [size=128]
	Expansion ROM at 83000000 [virtual] [disabled] [size=512K]
	Capabilities: [60] Power Management version 3
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [78] Express Legacy Endpoint, MSI 00
	Capabilities: [100] Virtual Channel
	Capabilities: [250] Latency Tolerance Reporting
	Capabilities: [128] Power Budgeting <?>
	Capabilities: [420] Advanced Error Reporting
	Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
	Capabilities: [900] Secondary PCI Express
	Kernel driver in use: pciback
	Kernel modules: nouveau

04:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3283]
	Flags: fast devsel, IRQ 21
	Memory at 83080000 (32-bit, non-prefetchable) [virtual] [size=16K]
	Capabilities: [60] Power Management version 3
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [78] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Kernel driver in use: pciback
	Kernel modules: snd_hda_intel

It should be hidden from dom0; it says the kernel driver in use is pciback.


That was my guess too, but for some reason it still attaches to it :frowning:

I don’t understand what you mean by “it still attaches to it”?

I mean that dom0 is loading drivers for it, like nouveau or snd_hda_intel.

From what I see in the logs, the pciback driver is indeed the one in use. That is the expected behavior. The indication that the “nouveau” driver could be used for these devices is not a problem.
Even if the nouveau driver were used by the system, in the case where you have several Nvidia GPUs, it shouldn't cause a problem for this particular GPU.
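To make that distinction concrete: lspci's “Kernel modules:” line only lists candidate drivers, while the one actually bound is the “driver” symlink under sysfs (on a real dom0, /sys/bus/pci/devices/0000:04:00.0/driver for this GPU). A sketch using a scratch mock of that layout:

```shell
# "Kernel modules:" in lspci = drivers that *could* claim the device;
# the sysfs "driver" symlink = the driver that actually did.
# Mock of the sysfs layout, since we can't touch real hardware here:
mkdir -p /tmp/sysfs-demo/drivers/pciback /tmp/sysfs-demo/devices/0000:04:00.0
ln -sfn ../../drivers/pciback /tmp/sysfs-demo/devices/0000:04:00.0/driver
# On a real system: basename "$(readlink /sys/bus/pci/devices/0000:04:00.0/driver)"
basename "$(readlink /tmp/sysfs-demo/devices/0000:04:00.0/driver)"
```

If this prints pciback, the device is being held for passthrough and dom0's nouveau is not in play.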

ok thanks! (you used French, was it on purpose? :D)

So my issue when loading the GPU driver in a VM is maybe not related to this :sweat:

Inside the VM, check the result of “sudo dmesg”. What driver are you trying to use?

(Yes, I wanted to make sure we understood each other.)