[PATCHv2] nvme-tcp: align I/O cpu with blk-mq mapping

Sagi Grimberg sagi at grimberg.me
Mon Jun 24 23:51:54 PDT 2024



On 25/06/2024 9:05, Hannes Reinecke wrote:
> On 6/24/24 12:02, Sagi Grimberg wrote:
>>
>>
>> On 19/06/2024 18:58, Sagi Grimberg wrote:
>>>
>>>
>>>>> I see how you address multiple controllers falling into the same 
>>>>> mappings case in your patch.
>>>>> You could have selected a different mq_map entry for each 
>>>>> controller (out of the entries that map to the qid).
>>>>>
>>>> Looked at it, but had no idea how to figure out the load.
>>>> The load is actually per-cpu, but we only have per-controller 
>>>> structures. So we would need to introduce a per-cpu counter
>>>> tracking the number of queues scheduled on that CPU.
>>>> But that won't help with the CPU oversubscription issue; we 
>>>> might still have a substantially higher number of overall 
>>>> queues than we have CPUs...
>>>
>>> I think that it would still be better than what you have right now:
>>>
>>> IIUC, right now you will have, for all controllers (based on your 
>>> example):
>>> queue 1: using cpu 6
>>> queue 2: using cpu 9
>>> queue 3: using cpu 18
>>>
>>> But selecting a different mq_map entry can give:
>>> ctrl1:
>>> queue 1: using cpu 6
>>> queue 2: using cpu 9
>>> queue 3: using cpu 18
>>>
>>> ctrl2:
>>> queue 1: using cpu 7
>>> queue 2: using cpu 10
>>> queue 3: using cpu 19
>>>
>>> ctrl3:
>>> queue 1: using cpu 8
>>> queue 2: using cpu 11
>>> queue 3: using cpu 20
>>>
>>> ctrl4:
>>> queue 1: using cpu 54
>>> queue 2: using cpu 57
>>> queue 3: using cpu 66
>>>
>>> and so on...
>>
>> Hey Hannes,
>>
>> Did you make progress with this one?
>
> Yeah, just trying to get some performance numbers.

Nice.

I take it that you incorporated my comment above about spreading the
queue->io_cpu assignments across different cpus from the mq_map for
different controllers?
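For reference, the kind of per-controller rotation I had in mind could
look roughly like the sketch below (userspace illustration only, not the
actual patch; pick_io_cpu() and demo_map are made-up names):

```c
#include <stddef.h>

/*
 * Sketch, not the actual kernel patch: given blk-mq's cpu -> hctx
 * map, pick the n-th cpu that maps to @qid, so that controllers
 * sharing the same mq_map spread their queue->io_cpu assignments
 * instead of all piling onto the first matching cpu.
 */
static int pick_io_cpu(const int *mq_map, int nr_cpus, int qid,
		       int ctrl_idx)
{
	int i, nmatch = 0, skip;

	/* count cpus whose hctx mapping is @qid */
	for (i = 0; i < nr_cpus; i++)
		if (mq_map[i] == qid)
			nmatch++;
	if (!nmatch)
		return -1;

	/* rotate the choice per controller */
	skip = ctrl_idx % nmatch;
	for (i = 0; i < nr_cpus; i++) {
		if (mq_map[i] != qid)
			continue;
		if (skip-- == 0)
			return i;
	}
	return -1;
}

/* toy map: cpus 0-5 -> hctx 0, cpus 6-8 -> hctx 1, cpus 9-11 -> hctx 2 */
static const int demo_map[12] = { 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2 };
```

With this, ctrl 0 gets cpu 6 for queue 1, ctrl 1 gets cpu 7, ctrl 2
gets cpu 8, and ctrl 3 wraps back to cpu 6; a load-aware selection (the
per-cpu counter discussed above) would replace the simple modulo.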

Plus, it would be good to measure without the softirq_rx patch, to see
the performance implications of this patch alone.

Aside from that, I'd also be interested in the performance impact of
the softirq_rx change itself.

>
> Cheers,
>
> Hannes




More information about the Linux-nvme mailing list