[PATCH 2/4] nvme-tcp: align I/O cpu with blk-mq mapping
Hannes Reinecke
hare at suse.de
Wed Jul 3 23:43:13 PDT 2024
On 7/3/24 21:38, Sagi Grimberg wrote:
[ .. ]
>>>
>>> We should make the io_cpu come from blk-mq hctx mapping by default,
>>> and for every controller it should use a different cpu from the hctx
>>> mapping. That is the default behavior. In the wq_unbound case, we
>>> skip all of that and make io_cpu = WORK_CPU_UNBOUND, as it was before.
>>>
>>> I'm not sure I follow your logic.
>>>
>> Hehe. That's quite simple: there is none :-)
>> I have been tinkering with that approach over the last few weeks, but got
>> consistently _worse_ results than with the original implementation.
>> So I gave up on trying to make that the default.
>
> What is the "original implementation" ?
nvme-6.10
> What is you target? nvmet?
nvmet with brd backend
> What is the fio job file you are using?
tiobench-example.fio from the fio samples
> what is the queue count? controller count?
96 queues, 4 subsystems, 2 controllers each.
> What was the queue mapping?
>
queue 0-5 maps to cpu 6-11
queue 6-11 maps to cpu 54-59
queue 12-17 maps to cpu 18-23
queue 18-23 maps to cpu 66-71
queue 24-29 maps to cpu 24-29
queue 30-35 maps to cpu 72-77
queue 36-41 maps to cpu 30-35
queue 42-47 maps to cpu 78-83
queue 48-53 maps to cpu 36-41
queue 54-59 maps to cpu 84-89
queue 60-65 maps to cpu 42-47
queue 66-71 maps to cpu 90-95
queue 72-77 maps to cpu 12-17
queue 78-83 maps to cpu 60-65
queue 84-89 maps to cpu 0-5
queue 90-95 maps to cpu 48-53
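Just so we're talking about the same thing: by 'io_cpu from the blk-mq
mapping' I understand something along these lines. Untested sketch only,
the helper name and the ctrl_idx parameter are made up; the per-controller
rotation is meant to cover your point about every controller using a
different cpu from the hctx mapping:

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/workqueue.h>

/*
 * Untested sketch, not the actual patch: pick the io_cpu for I/O queue
 * 'qid' from the cpus that blk-mq mapped to the corresponding hw queue
 * (qid - 1, as queue 0 is the admin queue), rotated by a per-controller
 * index so that controllers sharing the same mapping spread out.
 */
static int nvme_tcp_io_cpu_from_mq_map(struct blk_mq_tag_set *set,
				       int qid, int ctrl_idx)
{
	struct blk_mq_queue_map *map = &set->map[HCTX_TYPE_DEFAULT];
	unsigned int cpu, nr = 0, skip;

	/* count the cpus blk-mq mapped to this hw queue ... */
	for_each_online_cpu(cpu)
		if (map->mq_map[cpu] == qid - 1)
			nr++;
	if (!nr)
		return WORK_CPU_UNBOUND;

	/* ... and pick the (ctrl_idx % nr)-th of them */
	skip = ctrl_idx % nr;
	for_each_online_cpu(cpu)
		if (map->mq_map[cpu] == qid - 1 && !skip--)
			return cpu;

	return WORK_CPU_UNBOUND;
}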
> Please let's NOT condition any of this on the wq_unbound option at this
> point. This modparam was introduced to address
> a specific issue. If we see I/O timeouts, we should fix them, not tell
> people to flip a modparam as a solution.
>
Thing is, there is no 'best' solution. The current implementation is
actually quite good in the single-subsystem case. Issues start to appear
when doing performance testing under a really high load.
The reason is high contention on the per-cpu workqueues, which are
simply overwhelmed by doing I/O _and_ servicing 'normal' OS workload
like writing to disk etc.
Switching to wq_unbound reduces the contention and makes the system
scale better, but that scaling leads to a performance regression in
the single-subsystem case.
(See my other mail for performance numbers)
So what is 'better'?
>>
>>>>
>>>> And it makes the 'CPU hogged' messages go away, which is a bonus in
>>>> itself...
>>>
>>> Which messages? aren't these messages saying that the work spent too
>>> much time? why are you describing the case where the work does not get
>>> cpu quota to run?
>>
>> I mean these messages:
>>
>> workqueue: nvme_tcp_io_work [nvme_tcp] hogged CPU for >10000us 32771
>> times, consider switching to WQ_UNBOUND
>
> That means that we are spending too much time in io_work. This is a
> separate bug. If you look at nvme_tcp_io_work it has
> a stop condition after 1 millisecond. However, when we call
> nvme_tcp_try_recv() it just keeps receiving from the socket until
> the socket receive buffer has no more payload. So in theory nothing
> prevents the io_work from looping there forever.
>
Oh, no. It's not the loop that is the problem. It's the actual sending
that takes too long; in my test runs I've seen about 250 requests timing
out, the majority of which were still pending on the send_list.
So the io_work function wasn't even running to fetch the requests off
the list.
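To illustrate what I mean, io_work is roughly shaped like this
(paraphrased from memory, details such as the send_mutex trylock
omitted): it is the only context that moves requests from the
req_list/send_list onto the wire, and it only ever runs on
queue->io_cpu:

static void nvme_tcp_io_work(struct work_struct *w)
{
	struct nvme_tcp_queue *queue =
		container_of(w, struct nvme_tcp_queue, io_work);
	unsigned long deadline = jiffies + msecs_to_jiffies(1);

	do {
		bool pending = false;

		/* move requests from req_list/send_list onto the socket */
		if (nvme_tcp_try_send(queue) > 0)
			pending = true;

		/* drain whatever sits in the socket receive buffer */
		if (nvme_tcp_try_recv(queue) > 0)
			pending = true;

		if (!pending)
			return;
	} while (!time_after(jiffies, deadline));

	/* quota exhausted: requeue ourselves, again on queue->io_cpu */
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}

If that cpu is busy with something else, none of this runs, and the
requests just sit on the send_list until they time out.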
> This is indeed a bug that we need to address. Probably by setting
> rd_desc.count to some limit, decrementing it for every
> skb that we consume, and if we reach that limit and there are more skbs
> pending, we break and self-requeue.
>
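Something like this, I assume (nvme_tcp_try_recv() paraphrased and
completely untested; the budget value is made up just for the sketch):

static int nvme_tcp_try_recv(struct nvme_tcp_queue *queue)
{
	struct socket *sock = queue->sock;
	struct sock *sk = sock->sk;
	read_descriptor_t rd_desc;
	int consumed;

	rd_desc.arg.data = queue;
	rd_desc.count = 16;	/* skb budget instead of 'run until empty' */
	lock_sock(sk);
	consumed = sock->ops->read_sock(sk, &rd_desc, nvme_tcp_recv_skb);
	release_sock(sk);
	/*
	 * nvme_tcp_recv_skb() would decrement rd_desc.count once per skb;
	 * tcp_read_sock() stops as soon as it hits zero, and io_work then
	 * sees 'pending' and self-requeues instead of draining the socket
	 * in one go.
	 */
	return consumed;
}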
> If we indeed spend much time processing a single queue in io_work, it is
> possible that we have a starvation problem
> that is escalating to the timeouts you are seeing.
>
See above; this is the problem. Most of the requests are still stuck on
the send_list (with some even still on the req_list) when timeouts
occur. This means the io_work function is not being scheduled fast
enough (or often enough) to fetch the requests from the list.
My theory is that this is due to us using bound workqueues:
each work item has to execute on a given cpu, and we can only run
one io_work instance per cpu. So if that cpu is busy (receiving
packets, say, or running normal OS tasks) io_work cannot execute,
and we get starvation.
With wq_unbound we are _not_ tied to a specific cpu, but rather
scheduled in a round-robin fashion. This avoids the starvation,
and hence the I/O timeouts do not occur.
But we need to set the 'cpu' affinity for wq_unbound to keep
cache locality, otherwise performance _really_ suffers as we're
bouncing threads all over the place.
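IOW, roughly this combination (fragment, sketch only, not the actual
patch):

	/*
	 * Sketch: an unbound workqueue avoids the one-io_work-per-cpu
	 * starvation, and keeping a cpu hint taken from the blk-mq
	 * mapping keeps the work within that cpu's affinity scope
	 * instead of bouncing it across the machine.
	 */
	nvme_tcp_wq = alloc_workqueue("nvme_tcp_wq",
			WQ_SYSFS | WQ_UNBOUND | WQ_MEM_RECLAIM, 0);

	/* io_cpu picked from the hctx mapping, as sketched further up */
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);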
>>
>> which I get consistently during testing with the default implementation.
>
> Hannes, let's please separate this specific issue with the performance
> enhancements.
> I do not think that we should search for performance enhancements to
> address what appears to be a logical starvation issue.
I am perfectly fine with that approach. This patchset is indeed just to
address the I/O timeout issues I've been seeing.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare at suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich