[PATCH 1/4] nvme-tcp: per-controller I/O workqueues

Sagi Grimberg sagi at grimberg.me
Wed Jul 3 08:16:32 PDT 2024


>> On 03/07/2024 16:50, Hannes Reinecke wrote:
>>> Implement per-controller I/O workqueues to reduce workqueue contention
>>> during I/O.
>>
>> OK, I wonder what the cost is here. Is it in ALL conditions better than
>> a single workqueue?
>
> Well, clearly not on memory-limited systems; a workqueue per
> controller takes up more memory than a single one. And it's
> questionable whether such a system isn't underprovisioned for nvme
> anyway.
> We will see more scheduler interaction as the scheduler needs to
> switch between workqueues, but that was kinda the idea. And I doubt one
> can measure it; the overhead of switching between workqueues should be
> pretty much the same as the overhead of switching between workqueue items.
>
> I could do some measurements, but really I don't think it'll yield any
> surprising results.

I'm just not used to seeing drivers create non-global workqueues. I've 
seen some filesystems have workqueues per-super, but
it's not a common pattern around the kernel.
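For reference, the shape of the pattern under discussion is roughly the
following (a minimal sketch, not the actual patch; the my_tcp_* names and
fields are illustrative only): each controller allocates its own workqueue
at setup and destroys it at teardown, instead of queueing all work onto
one global workqueue.

#include <linux/errno.h>
#include <linux/workqueue.h>

/* illustrative stand-in for the real controller structure */
struct my_tcp_ctrl {
	struct workqueue_struct	*io_wq;	/* per-controller, replaces the global wq */
	int			instance;
};

static int my_tcp_alloc_io_wq(struct my_tcp_ctrl *ctrl)
{
	/* WQ_MEM_RECLAIM: I/O completion may be required for memory reclaim */
	ctrl->io_wq = alloc_workqueue("my_tcp_io_wq_%d",
				      WQ_MEM_RECLAIM | WQ_HIGHPRI, 0,
				      ctrl->instance);
	if (!ctrl->io_wq)
		return -ENOMEM;
	return 0;
}

static void my_tcp_free_io_wq(struct my_tcp_ctrl *ctrl)
{
	destroy_workqueue(ctrl->io_wq);
	ctrl->io_wq = NULL;
}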

Tejun,
Is this a pattern that we should pursue? Do multiple symmetric 
workqueues really work better (faster, with less overhead) than
a single global workqueue?


