Workaround to Attach Block and USB Devices to Win Qube(s) Under 4.2

In a clean Win qube, manually uninstall any Renesas devices/drivers (if present) in Device Manager, and then install

RENESAS_Chipset_USB3-D720200_A03-X2NF0_SETUP_ZPE.exe

They might be the same drivers, but after this procedure, USB attaching works, at least for me.

Be aware of an old bug that was never fixed: once a (storage, of course) device is attached as a USB device, it disappears from the widget's list of block devices until the next Qubes restart. Terminating qui-devices doesn't bring it back; this has been annoying for a decade.

Now I can confirm that I successfully installed the Xen PV v9.x drivers in the qube mentioned in the previous post (testsigning on), but again wasn't able to start the Win qube. It just crashes, as do all the others with signed Xen PV drivers.

@apparatus Here’s the log:

guest-W10-22H2-dm.log.gz (37.0 KB)

It contains several restarts of the VM, the last one including the RENESAS driver that @tempmail mentioned above. The log contains the long text mentioned above, both with and without the RENESAS driver, but the Windows VM still does not see the USB drive. Checking the xen-hvm-stubdom-linux package showed no errors, as before.

Can you shut down your Windows qube and run this command in dom0:

journalctl -f -n0

Then start the Windows qube, try to attach the USB device, and post the resulting log?
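
Putting those steps together in dom0 (a sketch; the qube name is a placeholder):

qvm-shutdown --wait <windows-qube>
journalctl -f -n0            # leave this running in one terminal
qvm-start <windows-qube>     # from a second terminal
# then attach the USB device and post what the journal prints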

Did you look for it in Device Manager, under Universal Serial Bus Controllers, as a USB storage device that failed to load?

And as usual, I'm getting these errors, about which I couldn't find much online:

Aug 18 08:52:55 dom0 kernel: btrfs_print_data_csum_error: 3315 callbacks suppressed
Aug 18 08:52:55 dom0 kernel: _btrfs_printk: 575 callbacks suppressed
Aug 18 08:52:55 dom0 kernel: BTRFS warning (device dm-0): csum failed root 5 ino 3630376 off 816930816 csum 0x926297f2 expected csum 0x3a28bfb9 mirror 1
Aug 18 08:52:55 dom0 kernel: btrfs_dev_stat_inc_and_print: 3315 callbacks suppressed
Aug 18 08:52:55 dom0 kernel: BTRFS error (device dm-0): bdev /dev/mapper/luks-e63d6d00-a9e7-48a3-a560-c68400dd6fb9 errs: wr 0, rd 0, flush 0, corrupt 35448910, gen 0
Aug 18 08:52:55 dom0 kernel: BTRFS warning (device dm-0): csum failed root 5 ino 3630376 off 816934912 csum 0x8941f998 expected csum 0xf1212412 mirror 1
Aug 18 08:52:55 dom0 kernel: BTRFS error (device dm-0): bdev /dev/mapper/luks-e63d6d00-a9e7-48a3-a560-c68400dd6fb9 errs: wr 0, rd 0, flush 0, corrupt 35448911, gen 0
Aug 18 08:52:55 dom0 kernel: BTRFS warning (device dm-0): direct IO failed ino 3630376 op 0x0 offset 0x30b15000 len 16384 err no 10
Aug 18 08:52:55 dom0 kernel: blk_print_req_error: 655 callbacks suppressed
Aug 18 08:52:55 dom0 kernel: I/O error, dev loop78, sector 1595560 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
Aug 18 08:52:55 dom0 kernel: BTRFS warning (device dm-0): csum failed root 5 ino 3630376 off 816930816 csum 0x926297f2 expected csum 0x3a28bfb9 mirror 1
Aug 18 08:52:55 dom0 kernel: BTRFS error (device dm-0): bdev /dev/mapper/luks-e63d6d00-a9e7-48a3-a560-c68400dd6fb9 errs: wr 0, rd 0, flush 0, corrupt 35448912, gen 0
Aug 18 08:52:55 dom0 kernel: BTRFS warning (device dm-0): csum failed root 5 ino 3630376 off 816934912 csum 0x8941f998 expected csum 0xf1212412 mirror 1
Aug 18 08:52:55 dom0 kernel: BTRFS error (device dm-0): bdev /dev/mapper/luks-e63d6d00-a9e7-48a3-a560-c68400dd6fb9 errs: wr 0, rd 0, flush 0, corrupt 35448913, gen 0
Aug 18 08:52:55 dom0 kernel: BTRFS warning (device dm-0): direct IO failed ino 3630376 op 0x0 offset 0x30b15000 len 16384 err no 10
Aug 18 08:52:55 dom0 kernel: I/O error, dev loop78, sector 1595560 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
[… the same csum failed / BTRFS error lines repeat, with the corrupt counter incrementing to 35448919, followed by five more repetitions of the direct IO failed / I/O error pair for the same inode and sector …]

Is loop78 your Windows qube volume?
Check it with this command:

sudo losetup -l | grep loop78

I guess this could be related to the crashes, but I don’t know what is causing these BTRFS errors.

Returns nothing

It works if the qube is running.
I guess there is no way to check which qube it was after you shut down the qube. Or maybe you can check journalctl and search for loop78 there.
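
Something like this in dom0 should find it after the fact (a sketch; -b limits the search to the current boot):

sudo journalctl -b | grep loop78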

Sorry, I’m not able to follow you on this.

Aug 18 08:57:04 dom0 kernel: loop78: detected capacity change from 0 to 146800640
Aug 18 08:57:11 dom0 kernel: I/O error, dev loop78, sector 1313840 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0

From this, I can conclude that "they are not coming from the disk itself but rather from the connection to it", which indicates the problem lies somewhere between dom0 and the qube, not in the qube (content/"disk") itself.

You can use this command to check which running qube a loop device belongs to, e.g.:

[root@dom0 user]# losetup -l | grep loop112
/dev/loop112         0      0         0  0 /var/lib/qubes/appvms/disp1008/root-dirty.img                     1     512

But once you shut down the qube (e.g. disp1008), you won't be able to use this command to check it anymore.

If you have shut down the qube, you can still use journalctl to check which qube used this loop device, e.g.:

dom0 qubesd[881783]: vm.disp1008: Starting disp1008
dom0 qubesd[881783]: Created sparse file: '/var/lib/qubes/appvms/disp1008/volatile-dirty.img~3zp3el_e'
dom0 qubesd[881783]: Hardlinked file: '/var/lib/qubes/vm-templates/whonix-workstation-17/root.img' -> '/var/lib/qubes/appvms/disp1008/root.img'
dom0 qubesd[881783]: Hardlinked file: '/var/lib/qubes/appvms/whonix-ws-17-dvm/private.img' -> '/var/lib/qubes/appvms/disp1008/private.img'
dom0 qubesd[881783]: Renamed file: '/var/lib/qubes/appvms/disp1008/volatile-dirty.img~3zp3el_e' -> '/var/lib/qubes/appvms/disp1008/volatile-dirty.img'
dom0 qubesd[881783]: Reflinked file: '/var/lib/qubes/appvms/disp1008/private.img' -> '/var/lib/qubes/appvms/disp1008/private-dirty.img~wghp2h22'
dom0 qubesd[881783]: Renamed file: '/var/lib/qubes/appvms/disp1008/private-dirty.img~wghp2h22' -> '/var/lib/qubes/appvms/disp1008/private-dirty.img'
dom0 qubesd[881783]: Reflinked file: '/var/lib/qubes/appvms/disp1008/root.img' -> '/var/lib/qubes/appvms/disp1008/root-dirty.img~0p4jcq0_'
dom0 qubesd[881783]: Renamed file: '/var/lib/qubes/appvms/disp1008/root-dirty.img~0p4jcq0_' -> '/var/lib/qubes/appvms/disp1008/root-dirty.img'
dom0 kernel: loop112: detected capacity change from 0 to 41943040
dom0 kernel: loop113: detected capacity change from 0 to 4194304
dom0 kernel: loop114: detected capacity change from 0 to 25165824
dom0 kernel: loop115: detected capacity change from 0 to 1360768
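
A rough way to pull such lines from the dom0 journal, using loop112 from the example above (a sketch; the 20-line context window is an arbitrary choice):

sudo journalctl -b | grep -B20 'loop112: detected capacity change'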

This issue is not related to Windows; it seems you have some issue with HVM qubes in general. So the issue is somewhere in dom0, or maybe in sys-usb.
When I create an empty HVM qube, set stubdom-qrexec to 1 for it, and attach the USB device to it, I see the same USB-attachment log in guest-QUBENAME-dm.log.
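
For reference, a minimal version of that test in dom0 might look like this (the qube name and device ID are placeholders):

qvm-features <empty-hvm> stubdom-qrexec 1
qvm-usb attach <empty-hvm> sys-usb:<device-id>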

The issue cannot be in sys-usb, as my two R4.2 test systems run from USB SSDs and thus have no sys-usb.

Furthermore, I don’t think it is a general problem with dom0 or HVMs. USB drives attach without problems to Linux HVMs, and there is no need to set stubdom-qrexec for that. So, if anything at all, the problem might be confined to that component, but then I wonder how it could be identical on R4.1 and R4.2.

The behavior seems to point to the Windows VMs: dom0 rightly attaches the USB drive to the Windows qube, but, without the Xen PV disk driver, there is no software in Windows that recognizes it. (Someone is knocking at the door, but no one is home, or so…)

Are you sure that you’re attaching your USB drive as a USB device and not as a block device? I.e., using the qvm-usb attach CLI tool and not qvm-block attach, or, in the Qubes devices widget, attaching the device from the “USB Devices” list and not from the “Data (Block) Devices” list.
I think you can’t attach USB devices to other qubes without sys-usb.
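
To double-check which path is in use, the two listings can be compared in dom0; a device offered for USB passthrough appears in the first, one offered as a block device in the second:

qvm-usb list
qvm-block list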

You can attach block devices only if you have the Xen PV drivers installed in the qube.

Are these HVMs based on a template or not?
I was talking about HVMs not based on a template.

This confuses me. If there’s no sys-usb, how then can devices be attached to any qube with qvm-usb?
Also, as a reminder, I am the second one to confirm that qvm-usb works without QWT (and I have sys-usb, if that is relevant).


I have to attach the devices as block devices, via qvm-block attach or the device list, because dom0 does not expose them as USB devices, probably because of the missing sys-usb. But this works for non-Windows qubes (tested with a standalone HVM qube based on Debian 12).
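
For illustration, without sys-usb the backend for the block attach is dom0 itself, so the attachment would look something like this (sda is a placeholder device name):

qvm-block list                          # the device should appear with a dom0: prefix
qvm-block attach <hvm-qube> dom0:sda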

That explains a lot!

Unfortunately, my R4.2 systems are both running from USB drives, so I cannot check the behavior with sys-usb present.

On an R4.1.2 system with sys-usb, I have the following behavior with a new Windows 10 Pro system without QWT and without the Xen PV disk driver, but with stubdom-qrexec set:

  • qvm-usb works, using the local device ID of sys-usb.
  • qvm-block complains that sys-usb does not expose this device ID, and accordingly, does not work.
  • The device widget shows the device with the sys-usb device ID and works; obviously, it invokes the qvm-usb command and not the qvm-block command.
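
In command form, the first two points would look roughly like this (the device ID 2-1 is a placeholder):

qvm-usb attach win10 sys-usb:2-1      # works, using sys-usb's local USB device ID
qvm-block attach win10 sys-usb:2-1    # fails: sys-usb does not expose this device ID as a block device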

So, under Qubes R4.1.2, there is no need to have QWT or the Xen PV disk driver installed in the Windows qube in order to access a USB drive. Could someone check this with an R4.2 installation running sys-usb?

Regarding the many confusing details of our discussion above, I have filed an issue, Provide a unified device handling function #9422, to simplify device handling. In my opinion, it is currently much too complicated and, consequently, error-prone.

@marmarek has already closed this issue as more or less a duplicate of #8537. Let’s hope that there’s some progress in this area.

I have already confirmed this earlier in this topic.

Out of nowhere, today I wasn’t able to uninstall the drivers prior to shutting down the VM; it became unresponsive upon clicking “Uninstall”, so I was forced to kill it.

I then tried to start it, which succeeded, only to find out that the drivers were still installed! And so, it has now become fully functional with the v8.2.2.1 drivers installed. Win11 is not in testing mode, though!

It probably has something to do with recent dom0 updates, but I am not able to track it down. Maybe you should check it now, too.

Testing mode is not required for the v8.2.2.1 drivers, only for the QWT graphics driver, which is needed for seamless mode under Windows 7 and so cannot be installed under Windows 10 or 11.

Uninstalling drivers under Windows is some sort of roulette, anyhow.
