I have mounted a Wi-Fi 7 access point to improve the connection speed of my laptop, which has a Wi-Fi 7 compatible adapter. Everything seems to work fine, except that the speed is not what I would expect. Maybe somebody can shed some light on my doubts.
First, an iperf test from the access point to the iperf server on the same network:
I do not understand why Wi-Fi is not reaching the same speed as the connection from the AP to the iperf server. Is there anything I am missing?
I already tried assigning the performance cores to sys-net and also increasing the sched-credit2 values. Nothing helps. Sometimes the speed reaches almost 600 Mbit/s, but this is still far less than I would expect.
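For reference, this is roughly what I mean by increasing the sched-credit2 values (run in dom0; the weight of 512 is just an example value, the default is 256):

xl sched-credit2 -d sys-net -w 512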
By the way, this is the Wi-Fi card in my laptop (a Novacustom v54).
sys-net is based on Fedora 42 with all recent updates and the newest kernel.
The AP I am using is a U7 Pro Max from Ubiquiti.
Any hints on what could be going wrong here? Is anybody using Wi-Fi 7 with Qubes and getting better speeds?
I don’t see a mention of this in your OP.
Because of the CPU pinning, each virtual CPU always executes on the same physical CPU and thus does not get its cache invalidated as often.
Here is what I meant for you to try, assuming for example a CPU with 16 cores:
Set 4 cores (0-3) to dom0 and pin them:
Add the following to GRUB_CMDLINE_XEN_DEFAULT:
dom0_max_vcpus=4 dom0_vcpus_pin
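For example, the resulting line in /etc/default/grub could look something like this (keep whatever Xen options are already there and just append the two new ones; the placeholder below stands for your existing options):

GRUB_CMDLINE_XEN_DEFAULT="<your existing Xen options> dom0_max_vcpus=4 dom0_vcpus_pin"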
Regenerate the GRUB config and reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
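After the reboot you can check that dom0 really has only 4 VCPUs pinned to cores 0-3, for example with:

xl vcpu-list Domain-0

The affinity column should show each dom0 VCPU fixed to one of the cores 0-3.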
Configure CPU pinning for qubes:
Create the /usr/share/qubes/templates/libvirt/xen-user.xml file with this content:
{% extends 'libvirt/xen.xml' %}
{% block basic %}
{% if vm.features.get('vcpu_pin_core', '0') == '1' -%}
    <vcpu placement='static' cpuset='4-{{ 4 + vm.vcpus - 1 }}'>{{ vm.vcpus }}</vcpu>
    <cputune>
    {% for i in range(vm.vcpus) %}
        <vcpupin vcpu='{{ i }}' cpuset='{{ 4 + i }}'/>
    {% endfor %}
    </cputune>
{% else -%}
    <vcpu placement='static' cpuset='8-15'>{{ vm.vcpus }}</vcpu>
    <cputune>
    {% for i in range(vm.vcpus) %}
        <vcpupin vcpu='{{ i }}' cpuset='8-15'/>
    {% endfor %}
    </cputune>
{% endif -%}
{{ super() }}
{% endblock %}
For qubes with the vcpu_pin_core feature set to 1, this will configure them to use CPU cores 4-7 with 1:1 pinning.
All other qubes will use CPU cores 8-15 without pinning.
Set sys-net to use 4 VCPUs and set its vcpu_pin_core feature to 1:
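For example, something like this in dom0 should do it (followed by a sys-net restart so the new libvirt config takes effect):

qvm-prefs sys-net vcpus 4
qvm-features sys-net vcpu_pin_core 1

You can then verify the pinning with xl vcpu-list sys-net.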
Just assigning pcores to sys-net is not enough.
You need to configure things so that dom0 and the other qubes won't use the CPU cores assigned to sys-net, and you need to pin sys-net's VCPUs to specific CPU cores, i.e. VCPU 0 to CPU core 0, VCPU 1 to CPU core 1, and so on.
Pinning all VCPUs to a cpuset range like this is not enough:
<vcpupin vcpu='{{ i }}' cpuset='0-15'/>
You need to pin each VCPU to its own core, like this:
{% for i in range(vm.vcpus) %}
    <vcpupin vcpu='{{ i }}' cpuset='{{ 4 + i }}'/>
{% endfor %}
Have you tested this on other Linux distros to see whether this is a Qubes-specific issue?
Also, Wi-Fi 6 and 7 are quite fickle at higher modulations, and it's difficult to get above 64-QAM without a clear line of sight. See https://www.wiisfi.com for more info.
I have not tried other distros, as I do not have any other Linux laptop available. On my Qubes laptop I tried Fedora 42 and Debian 13 for sys-net; both delivered the same result.
About CPU pinning: I am using the “xl vcpu-…” commands “manually” in custom scripts. What I did for sys-net was a “loose pinning” of the performance cores to sys-net. I am not sure about the exact commands to create the 1:1 pins between VCPU and CPU, but I also wonder whether this would really help? Looking at the CPU load of sys-net during a Wi-Fi transfer, it is hardly above 20%.
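To illustrate what I mean by “loose pinning”, my scripts do something along these lines (the 0-7 range is just a placeholder for the P-cores on my CPU), and I check the result with xl vcpu-list sys-net:

xl vcpu-pin sys-net all 0-7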
What I was asking in my OP was whether anyone here in this forum already uses Wi-Fi 7 hardware and gets transfer speeds more in line with what the bandwidth should provide. Theoretically the higher speeds are also possible with Wi-Fi 6; it simply depends on the channel width.
Thanks for the help, guys. Regardless of whether the problem gets solved or not, I appreciate that you took the time to read and understand.
You can try a Fedora Live USB for a test.
Otherwise there is no way to tell whether it's a problem specific to Qubes OS or a problem with the hardware/firmware/driver.
I don’t know what you mean by “loose pinning”, but you can pin the cores with xl vcpu-pin like this:
xl vcpu-pin sys-net 0 0
xl vcpu-pin sys-net 1 1
And pin all VCPUs of the other qubes, their stubdomains (-dm), and dom0 to other cores, for example:
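A sketch of what that could look like (“personal” here is just a placeholder qube name; the stubdomain of an HVM qube such as sys-net shows up as sys-net-dm in xl list; this assumes sys-net only uses cores 0-1 as above):

xl vcpu-pin Domain-0 all 2-15
xl vcpu-pin sys-net-dm all 2-15
xl vcpu-pin personal all 2-15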
Hi guys,
Yesterday I transferred files via scp and was hitting speeds of 100 MB/s. This reflects what I was expecting. I'm not sure why iperf is not showing those speeds, but from my point of view the “problem” is solved.
Thanks again for reading and commenting here!