[PATCH 4/4] nvme-tcp: switch to 'cpu' affinity scope for unbound workqueues

Sagi Grimberg sagi at grimberg.me
Wed Jul 3 08:09:41 PDT 2024

On 03/07/2024 18:01, Hannes Reinecke wrote:
> On 7/3/24 16:22, Sagi Grimberg wrote:
>>
>>
>> On 03/07/2024 16:50, Hannes Reinecke wrote:
>>> We should switch to the 'cpu' affinity scope when using the
>>> 'wq_unbound' parameter, as this allows us to keep I/O locality
>>> and improve performance.
>>
>> Can you please describe more why this is better? locality between what?
>>
> Well; the default unbound scope is 'cache', which groups CPUs
> according to the cache hierarchy. I want the CPU locality of the
> workqueue items to be preserved as much as possible, so I switched
> to 'cpu' here.
>
> I'll get some performance numbers.
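
For anyone following along, here is a minimal self-contained sketch of
what the two scopes mean at the workqueue level. This is not the
nvme-tcp code or the patch itself; it assumes the per-workqueue
affinity scopes introduced in v6.5, and the workqueue name is made up:

// SPDX-License-Identifier: GPL-2.0
/*
 * With WQ_UNBOUND, worker pools are grouped into "pods" by the
 * affinity scope: 'cache' pods span CPUs that share a last-level
 * cache, while 'cpu' pods contain a single CPU, so work items are
 * kept on the CPU they were queued from whenever possible.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int __init example_init(void)
{
	/*
	 * WQ_SYSFS exposes the scope at
	 * /sys/devices/virtual/workqueue/example_wq/affinity_scope,
	 * so 'cpu' vs 'cache' can be compared at runtime by writing
	 * to that file, without rebuilding the module.
	 */
	example_wq = alloc_workqueue("example_wq",
				     WQ_UNBOUND | WQ_SYSFS, 0);
	return example_wq ? 0 : -ENOMEM;
}

static void __exit example_exit(void)
{
	destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

The trade-off this implies: a finer scope like 'cpu' maximizes
submission locality, while a coarser one like 'cache' gives the
scheduler more room to balance load within a cache domain.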
>
>> While you mention in your cover letter that "comments and reviews
>> are welcome", the changelogs in your patches are not designed to
>> assist your reviewer.
>
> I spent the last few weeks trying to come up with a solution based on my
> original submission, but in the end I gave up as I hadn't been able to
> fix the original issue.

Well, the last submission was a discombobulated set of mostly unrelated 
patches...
What was it that did not work?

> This is a different approach, massaging the 'wq_unbound'
> mechanism, which is not only easier but also has the big advantage
> that it actually works :-)
> So I did not include a changelog relative to the previous patchset,
> as this is a pretty different approach.
> Sorry if this is confusing.
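
(For reference, the 'wq_unbound' mechanism being discussed is a
module parameter in drivers/nvme/host/tcp.c. The sketch below is
paraphrased from memory rather than quoted from the tree, so the
exact flags and wording may differ:

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *nvme_tcp_wq;

static bool wq_unbound;
module_param(wq_unbound, bool, 0644);
MODULE_PARM_DESC(wq_unbound,
		 "Use an unbound workqueue for the nvme-tcp I/O context");

static int __init nvme_tcp_init_module(void)
{
	unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_SYSFS;

	/*
	 * wq_unbound lets the scheduler place io_work instead of
	 * pinning it to queue->io_cpu; the affinity scope then decides
	 * how far from the submitting CPU it may wander.
	 */
	if (wq_unbound)
		wq_flags |= WQ_UNBOUND;

	nvme_tcp_wq = alloc_workqueue("nvme_tcp_wq", wq_flags, 0);
	if (!nvme_tcp_wq)
		return -ENOMEM;

	/* ... transport registration elided ... */
	return 0;
}

Going by its title, the patch under discussion switches which
affinity scope that WQ_UNBOUND workqueue uses, rather than changing
the queueing path itself.)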

It's just difficult to understand what each patch contributes, and
most of the time the patches are under-documented. I want to see the
improvements added, but I also want them to be properly reviewed.


