Bad torrenting performance when using mirage-fw

My setup for my torrenting qube looks like the following:
sys-net ← mirage-fw ← sys-vpn ← mirage-fw-inner ← torrent-vm

I noticed that I get horrible performance when torrenting. With the default mirage-fw setting of 1 vCPU, I get mostly ~1 MiB/s on average, 1.5 MiB/s max. What makes this problem obvious is that mirage-fw-inner's CPU utilization is at 98-99% the whole time.
To get my usual torrenting performance of 20 MiB/s or more, I had to assign 6 vCPUs (and 320 MB RAM, though if I remember correctly, memory has no real effect when using mirage-fw) to mirage-fw-inner, and even then utilization is still at ~55%.
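For reference, an assignment like the one described above would look roughly like the following dom0 commands (a sketch using the values from my setup):

```shell
# dom0: bump vCPUs and memory for the inner firewall qube
qvm-prefs mirage-fw-inner vcpus 6
qvm-prefs mirage-fw-inner memory 320
qvm-prefs mirage-fw-inner max_mem 320
```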

What I find interesting is that this problem only exists for mirage-fw-inner. mirage-fw has been configured with 32 MB of memory and 1 vCPU the whole time and works just fine.

My questions:

  • Is this a bug, or an unavoidable downside of mirage-fw because of the potentially tens or hundreds of connections created when torrenting?
  • Does adding more memory help with this kind of issue? (I don't think it does.)
  • Besides not using mirage-fw-inner at all, or assigning more vCPUs (and RAM): is there anything else I can do?

I see there is an open issue here, but it might not be up to date:

Does restarting your mirage-fw-inner have any effect on its bandwidth? Like, is there a transient period of good throughput after qvm-shutdown --force --wait mirage-fw-inner ; qvm-start mirage-fw-inner?


I also saw this issue, but my problem is that this only happens to such an extreme degree when torrenting; with normal browsing there is no noticeable difference.

There is no effect on its bandwidth just from restarting it. I have to increase the vCPUs to fix this.


Hello all,
There is currently no use for multiple vCPUs in mirage-fw, so that should not help here, but allocating more memory can help reduce packet-flow stress, as it allows the unikernel to keep more “pending packets” before it starts handling memory pressure.
You can see here that under memory pressure, the unikernel forces a garbage collection and drops incoming packets. The former eats CPU, and the latter forces reconnections; both are bad :frowning:

To me, there are two places where things can go wrong with a lot of connections, as is the case with torrents. Either the NAT table is always full and old entries are constantly evicted (the current default table size is 5000 entries, with 7/8 of them reserved for TCP flows), which forces the NAT process to be redone every time, or the total memory is too low to handle that number of connections.
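As a back-of-envelope check (my own sketch, not from the firewall source), the 7/8 split of the default 5000-entry table leaves only a few thousand TCP slots, which a busy torrent client with hundreds of peers can plausibly exhaust:

```shell
# Default NAT table split as described above: 5000 entries, 7/8 for TCP flows
total=5000
tcp=$(( total * 7 / 8 ))
other=$(( total - tcp ))
echo "TCP slots: $tcp, other slots: $other"
# prints: TCP slots: 4375, other slots: 625
```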
As the first mirage unikernel (from the torrent qube's point of view) is the bottleneck, it makes sense that the second unikernel has no CPU issue; it sees a pretty calm network :slight_smile:

Maybe you can try increasing the total memory and setting a bigger NAT table, e.g.:

qvm-prefs mirage-fw-inner memory 320
qvm-prefs mirage-fw-inner max_mem 320
qvm-prefs mirage-fw-inner -- kernelopts '--nat-table-size 500000'

If that fails, I'll have to dig into the issue; maybe the best would be to continue in an issue on GitHub (I'll be almost AFK until the 26th).


32 MB is certainly not enough to handle Tor circuits and everything, even without torrents. I increased it to 64 MB and it has been OK so far; with 32 MB I suffered huge packet loss, up to 7-10% at peak load.


Thank you, this solved my problem. Happy holidays.


Perhaps the community can work up some guidelines for NAT table sizing and memory assignment for mirage-fw qubes with different use cases. It's common to include multiple firewall qubes in a netvm topology; some of them could keep the standard 5000-entry/32 MB defaults while others are sized up for their expected load.
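As a hypothetical starting point for such per-role guidelines (the qube names and numbers here are illustrative, not tested recommendations):

```shell
# dom0 sketch: keep defaults for a light browsing firewall,
# give the torrent-path firewall more memory and a larger NAT table
qvm-prefs mirage-fw-browse memory 32                              # standard 32 MB, 5000 entries
qvm-prefs mirage-fw-torrent memory 320
qvm-prefs mirage-fw-torrent max_mem 320
qvm-prefs mirage-fw-torrent -- kernelopts '--nat-table-size 50000'
```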


FYI, I kicked this off over here:
