[PATCH 4/4] nvme-tcp: switch to 'cpu' affinity scope for unbound workqueues
Hannes Reinecke
hare at suse.de
Thu Jul 4 08:54:51 PDT 2024
On 7/4/24 11:11, Sagi Grimberg wrote:
>
>
> On 7/3/24 18:50, Hannes Reinecke wrote:
[ .. ]
>>
>> As you can see, with unbound workqueues and 'cpu' affinity scope we are
>> basically on par with the default implementation (all tests are run with
>> per-controller workqueues, mind).
>
> I'm puzzled that the seq vs. rand results vary this much when you work
> against a brd device.
> Are these results stable?
>
There is quite a bit of run-to-run jitter, but the overall picture doesn't
change.
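For reference, the 'unbound + cpu affinity scope' configuration used for
these numbers amounts to something like the following; the workqueue name,
flag combination, and 'ctrl->io_wq' field are illustrative, not the literal
patch:

	/* per-controller, unbound I/O workqueue */
	ctrl->io_wq = alloc_workqueue("nvme_tcp_io_wq_%d",
				      WQ_UNBOUND | WQ_SYSFS | WQ_MEM_RECLAIM,
				      0, ctrl->instance);

On recent kernels the affinity scope of an unbound workqueue can then be
selected at runtime (WQ_SYSFS required):

	echo cpu > /sys/devices/virtual/workqueue/<name>/affinity_scope

or system-wide via the workqueue.default_affinity_scope= boot parameter.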
>> Running the same workload with 4 subsystems and 8 paths runs into
>> I/O timeouts with the default implementation, but succeeds without issue
>> with unbound workqueues and 'cpu' affinity scope.
>> So definitely an improvement there.
>
> I tend to think that the I/O timeouts are caused by a bug, not by
> "non-optimized" code. I/O timeouts are an eternity for this test, which
> makes me think we have a different issue here.
I did some latency measurements for the send and receive loop and found
that we are in fact being starved by the receive side. The send side is
bounded reasonably well by the 'deadline' setting, but the receive side
has no such precaution, and I have seen per-queue receive latencies of
over 5 milliseconds.
The worrying thing is that only individual queues were affected; most
queues had the expected latency of around 50 microseconds, but some went
up into the thousands of microseconds. And those were exactly the queues
which were generating the I/O timeouts.
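(The kind of instrumentation I am talking about is roughly the following;
illustration only, 'recv_lat_max_us' is a made-up per-queue field:)

	/* time one receive pass of the io_work loop */
	ktime_t start = ktime_get();
	int result = nvme_tcp_try_recv(queue);
	u64 lat_us = ktime_to_us(ktime_sub(ktime_get(), start));

	/* track the worst case per queue; 'recv_lat_max_us' is made up */
	if (result >= 0 && lat_us > queue->recv_lat_max_us)
		queue->recv_lat_max_us = lat_us;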
I have now modified the deadline handling to cover both the receive and
the send side, and the results are pretty good: the timeouts are gone,
and even the overall performance for the 4-subsystem case has gone up.
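The idea is roughly the following; this is a sketch only, not the actual
patch. 'recv_deadline' is just an illustrative field name, and error
handling is elided:

static void nvme_tcp_io_work(struct work_struct *w)
{
	struct nvme_tcp_queue *queue =
		container_of(w, struct nvme_tcp_queue, io_work);
	unsigned long deadline = jiffies + msecs_to_jiffies(1);

	do {
		bool pending = false;

		if (mutex_trylock(&queue->send_mutex)) {
			if (nvme_tcp_try_send(queue) > 0)
				pending = true;
			mutex_unlock(&queue->send_mutex);
		}

		/*
		 * Hand the remaining budget to the receive side as well,
		 * so that the skb receive callback can bail out once the
		 * deadline has passed instead of draining the socket
		 * unconditionally.
		 */
		queue->recv_deadline = deadline;
		if (nvme_tcp_try_recv(queue) > 0)
			pending = true;

		if (!pending)
			return;
	} while (!time_after(jiffies, deadline));

	/* budget used up: re-arm the work item instead of hogging the worker */
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}

In this sketch the receive callback checks time_after(jiffies,
queue->recv_deadline) and stops consuming data once the budget is gone,
leaving the rest for the re-queued work item.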
Will be posting an updated patchset shortly.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare at suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich