Thunar slow when accessing SMB shares in Fedora 42 / Qubes 4.3

I recently upgraded my 4.2 installation to 4.3 and at the same time migrated my TemplateVM from Fedora 41 to F42.

In /rw/config/rc.local I have a few network shares that I automount by calling `mount <sharename> <localpath> -t cifs -o credentials=<credfile>,x-systemd.automount,x-systemd.device-timeout=10,x-systemd.idle-timeout=60,uid=1000,gid=1000`.

I noticed a significant performance drop when accessing the share location via Thunar and I think that this is related to the upgrade (more likely F42 than Qubes 4.3). When browsing the mounted local path, Thunar takes up to several seconds until it shows the folder content and seems to reload everything whenever I enter or exit a folder.

Note that the connection goes over WiFi and through a VPN, and I know that SMB/CIFS is notoriously bad in higher-latency scenarios. However, I have used this setup for years, and the big performance decrease appeared only after the upgrade. Therefore I think it is likely related to the upgrade.

There is no gvfs-mount involved (which is another thing that’s known for bad performance).

Has anyone experienced similar issues after upgrading? Any ideas what I could change or test further to boost performance?

Edit: It seems that this isn’t just Thunar. An `ls -al` in the same folder is slow as well.

I think the mount.cifs caching strategy was reduced a while ago. You should check the cifs mount options to force caching; I’m quite sure it will solve your issue.

Thanks for the hint. According to the man page of mount.cifs, the default cache mode changed from “loose” to “strict” in kernel 3.7. That is quite some time before F42 ;).

I validated (by listing the existing shares with `mount`) that, indeed, `cache=strict` was used by all shares. I remounted with `cache=loose` but do not notice any difference. Strangely, the behaviour (as described, long loading times every time the folder is accessed) feels as if there is no caching at all.

Listing of a share with 120 folders takes about six (!) seconds.
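A quick back-of-envelope check makes that number plausible, assuming one SMB metadata round trip per folder and a guessed ~50 ms round-trip time over WiFi + VPN (both the per-entry-round-trip model and the RTT value are assumptions, not measurements):

```shell
# 120 folders, one metadata round trip each, at an assumed ~50 ms RTT:
entries=120
rtt_ms=50
echo "$((entries * rtt_ms)) ms total"   # prints "6000 ms total", i.e. ~6 s
```

If the real RTT is anywhere in that ballpark, one uncached round trip per entry is enough to explain the timing on its own.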

Switching back to Fedora 41 (I just realized I still have a clone of that template) shows about the same timing on the command line, but navigating the shares in Thunar feels much faster. Could there be some caching in Thunar itself that masks the latency, and that was changed/deactivated in Fedora 42?

Do you see the same speed change when using ls in command line?

Erf, I skipped a line while reading.

Did you try using Thunar to access the remote share without mount.cifs? That would use gvfs, whose implementation is quite different.

The short version is that I migrated away from gvfs a while ago due to various performance and usability issues. At the time, I still used the (then-default) GNOME-based Fedora templates, and GNOME’s GIO backend has some unresolved issues of its own (I think I still have an open bug report with the GNOME project regarding this).

To test this, I need to install gvfs-smb, which isn’t installed by default (at least in the XFCE-based templates). Will report back once I’ve done that.

Edit: No idea if you are notified of post edits, @solene, so I’ll ping you directly. Installed gvfs-smb and re-tested. No idea how I can cd to a gvfs mount nowadays; they used to be available in /run/user/<uid>/gvfs/. In Thunar, however, the gvfs mount is even slower than the cifs mount. Manual stopwatch timing indicates about two to three times worse performance (I measured up to 24(!) seconds of loading time).
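For reference, the FUSE view of gvfs mounts should still live under /run/user/<uid>/gvfs, as long as the gvfs-fuse package is installed; a sketch, where the server and share names are placeholders:

```shell
# Mount the share via gvfs from the CLI (smb://server/share is a placeholder):
gio mount smb://server/share

# The FUSE bridge (provided by the gvfs-fuse package) exposes all gvfs
# mounts under this per-user directory:
ls "/run/user/$(id -u)/gvfs/"
```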


I’m using SMB shares in r4.2 with FC42 and don’t see “several seconds to show folder”.
Maybe because my folders aren’t big: just 40–100 files/folders.
And I never use thumbnails in any of my systems.

@KitsuneNoBaka which Kernel are you using for the VM? (Qube Settings → Advanced → Kernel)

Another thing that I just realised:

  • Using Thunar or the CLI is still incredibly slow in the mounted share (most recent measurement: 13 seconds in Thunar to list ~120 folders).
  • The “Open/Save file” dialog (e.g., from inside a text editor) lists directories almost instantly. I can only feel a marginal difference between local and network folders.

6.12.63-1.qubes.fc37.x86_64

After I realised the difference between Thunar and File Open/Save I whipped out the trusty Wireshark and had a look at the network traffic.

Turns out what happens is basically:

File Open/Save:

  • does a “FILE_INFO/SMB2_FILE_ALL_INFO”
  • followed by “SMB2_FIND_ID_FULL_DIRECTORY_INFO” (server responds with the list of objects in the folder)
  • for the folder I am actually trying to access.

All in, browsing to the folder, accessing it and showing the subfolders within resulted in 207 packets visible in Wireshark. All requests/responses more or less target only the folder that I am actually trying to interact with.

At the same time, Thunar does about the same interaction, but for every single subfolder inside of the one I want to access.
This results in 1610 packets in Wireshark.
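Those packet counts can be reproduced without clicking through the Wireshark GUI; a sketch using tshark, assuming the capture was saved as capture.pcap (the filename is a placeholder):

```shell
# Total SMB2 traffic in the capture:
tshark -r capture.pcap -Y smb2 | wc -l

# Only the directory-listing traffic (SMB2 QUERY_DIRECTORY, opcode 14,
# which Wireshark labels "Find"):
tshark -r capture.pcap -Y 'smb2.cmd == 14' | wc -l
```

Comparing the second number between the File Open/Save dialog and Thunar should make the per-subfolder listing pattern obvious at a glance.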

I could go into more detail but don’t think it would be helpful. Essentially, Thunar (and, I suppose, the terminal as well, given their similar timings) already looks one step ahead and does some kind of recursive directory listing. However, it doesn’t cache that information but repeats it every single time I interact with an SMB object.

To me, that looks awfully broken.

For completeness sake: It doesn’t seem to be kernel related.
I reverted from the r4.3 default (6.17.9-1.fc41) back to 6.12.59-1.fc41 and the behaviour doesn’t change.


For me, Wireshark shows only SMB2_FIND_ID_FULL_DIRECTORY_INFO for the files and folders inside a folder I’ve entered, returning the number of elements inside each, but it doesn’t do the same for sub-folders (it doesn’t traverse them).

Together with a colleague of mine (big shout-out to him!) I did some more testing and we found:

  • There is a new F43 template in the Qubes stable repo that isn’t even publicly announced yet
  • F43 shows the same behaviour.
  • Initially I thought it doesn’t and this is because… drumroll

The issue only appears in Thunar’s “List view” (analogous to ls -l output), and the new template uses Icon View by default.

  • Switching to either “Icon view” or “Compact view” shows a drastic speed improvement in both F42 and F43
  • ls without the long option is equally fast
  • Thunar in F41 is fast even in List View
  • as stated before, ls -l in F41 is slow
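That split between the plain and long listings fits the classic readdir-vs-stat pattern: a plain ls needs only one pass over the directory, while ls -l (and Thunar’s List View, which shows the same per-entry metadata) additionally stat()s every entry, and on a high-latency CIFS mount each uncached stat can be a separate network round trip. A minimal local illustration of the two access patterns (the directory is just an example):

```shell
# One pass over the directory; no per-entry metadata is needed:
ls /tmp > /dev/null

# Same pass, plus a stat() per entry (size, owner, mtime for each line);
# on CIFS each of those can become an SMB round trip if nothing is cached:
ls -l /tmp > /dev/null

# With strace installed, the syscall difference is visible directly:
#   strace -c -e trace=newfstatat,statx ls    /tmp 2>&1 >/dev/null
#   strace -c -e trace=newfstatat,statx ls -l /tmp 2>&1 >/dev/null
```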

It seems that somehow, the more recent Thunar version (or, more likely, a library it builds upon) changed how it interacts with the filesystem.

I suppose this is not a Qubes bug then(?). If I have some spare time I’ll test with a bare metal XFCE installation (Fedora or potentially Xubuntu). If these confirm the behaviour I’ll see where to address this bug (the XFCE team, most likely).

More interestingly: how can Thunar in F41 be faster than ls -l while showing similar (timestamp) information for all folders? Potentially there is a deeper-lying inefficiency, maybe even in cifs-utils itself?

oof

This suggests you control the SMB/CIFS server or may know someone who does.

SMB/CIFS was originally intended to be used on a LAN.

This experience you are having may not be in the purview of Qubes specifically. Maybe you will have better luck with the debian-13-xfce template. If I had some need to make SMB/CIFS work best I would reach for Gentoo (for me personally, not necessarily the community Gentoo template).

For your use case, your easiest option is probably Syncthing, perhaps with smaller folders behind the SMB/CIFS share. Windows has a somewhat “native” optional SSH package that you could use Git over. If you want to really nomad it up, you could use Git over NNCP.

Syncthing can be manually configured to run only over your VPN.

What you are experiencing could be a local GUI issue and not a network issue.

@de_dust2 Thanks for your input. Yes, I do know the administrator of the SMB server personally. To give more context: This is a company setup.

Everyone on our small team (around 10 people) uses Qubes for office work and Intranet access. For specific use cases (working with the sometimes inevitable Microsoft Office documents, usage of specific application software, …) we have dedicated systems depending on the use case.

As usual in company environments, we have a central file share for data storage. This is what I am trying to access.
Even though I see your point on LAN-usage of SMB shares, every “standard” company that uses Windows Active Directory with Microsoft clients and servers can realize the “remote worker” scenario without major hiccups. I think Linux (in general, not specifically Qubes) must be able to follow in some way.

Regarding the possibility of a GUI vs a network issue: the local folder structure can be browsed without any noticeable lag in any view setting of Thunar. Additionally, there’s the Wireshark output indicating lots of network traffic.

We also did some testing regarding Debian templates yesterday. The situation is… complicated I’d say. On debian-13-xfce:

  • ls -al – is slow (>5 seconds) initially but every subsequent call “only” takes about 0.9 seconds. It seems that some kind of caching actually happens.
  • Thunar “List View” – is slow (10+ seconds) every single time. No visible caching.
  • Thunar “Icon View” – is fast (<1 second).
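One way to check whether the fast subsequent ls -al calls really come from the kernel’s dentry/inode cache: drop those caches between runs and re-time (the share path is a placeholder; dropping caches needs root):

```shell
# Warm cache: should be the fast (~0.9 s) case described above:
time ls -al /path/to/share > /dev/null

# Drop cached dentries and inodes, then re-time; if kernel caching explains
# the speedup, this call should be slow again:
sudo sh -c 'sync; echo 2 > /proc/sys/vm/drop_caches'
time ls -al /path/to/share > /dev/null
```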

One thing to note is that Debian 13 uses a very recent cifs-utils version (7.4-1), even newer than what F43 provides.

Another thing to note (just for completeness sake really and in case anyone else stumbles upon this): searching (Ctrl+F or pressing the magnifying glass) automatically switches to “List View” and therefore is very slow, even before one types in the search parameter (of course the actual search is always slower on a network share, but that is not what I mean).

How the GUI app is contending with the SMB part is probably where the major hiccup (the one visible to you) is occurring.

Your team as described sounds ripe for a much better user experience with Syncthing.

Fifteen years ago this was the case. Since around 2012 I have seen most places use Dropbox or other SaaS.

The larger companies that are steeply rooted in legacy ways are still using Active Directory and SMB/CIFS, but they have dedicated low-quality staff (the Windows admin types) who spend a lot of time jumping through hoops to make it all work. In such places secure boot is never enabled, the IT teams have no idea what a TPM chip is, CPU cores hover at 100% so the “endpoint protection solution” can do its thing, and all users are Administrator on the machines they use. Some of these companies, whose internals I know, do handle a LOT of sensitive personal data of the citizenry, but they don’t know what the fuck they’re doing.

Syncthing is written in a memory-safe language (GoLang) and runs on trash OS Windows. Syncthing most likely does not step on SMB/CIFS server’s toes. You can manually configure Syncthing to not use any of the volunteer-run Syncthing discovery services/relays and be accessible over only the VPN.