[PATCHv2] nvme-tcp: align I/O cpu with blk-mq mapping

Hannes Reinecke hare at suse.de
Mon Jun 24 23:05:15 PDT 2024


On 6/24/24 12:02, Sagi Grimberg wrote:
> 
> 
> On 19/06/2024 18:58, Sagi Grimberg wrote:
>>
>>
>>>> I see how your patch addresses the case where multiple controllers 
>>>> fall into the same mappings.
>>>> You could have selected a different mq_map entry for each controller 
>>>> (out of the entries that map to the qid).
>>>>
>>> I looked at it, but had no idea how to figure out the load.
>>> The load is actually per-CPU, but we only have per-controller 
>>> structures.
>>> So we would need to introduce a per-cpu counter tracking the
>>> number of queues scheduled on each CPU.
>>> But that won't help with the CPU oversubscription issue; we might 
>>> still have substantially more queues overall than we have 
>>> CPUs...
>>
>> I think it would still be better than what you have right now:
>>
>> IIUC, right now every controller will get (based on your example):
>> queue 1: using cpu 6
>> queue 2: using cpu 9
>> queue 3: using cpu 18
>>
>> But selecting a different mq_map entry can give:
>> ctrl1:
>> queue 1: using cpu 6
>> queue 2: using cpu 9
>> queue 3: using cpu 18
>>
>> ctrl2:
>> queue 1: using cpu 7
>> queue 2: using cpu 10
>> queue 3: using cpu 19
>>
>> ctrl3:
>> queue 1: using cpu 8
>> queue 2: using cpu 11
>> queue 3: using cpu 20
>>
>> ctrl4:
>> queue 1: using cpu 54
>> queue 2: using cpu 57
>> queue 3: using cpu 66
>>
>> and so on...
> 
> Hey Hannes,
> 
> Did you make progress with this one?

Yeah, just trying to get some performance numbers.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare at suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich