[PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
Ping Gan
jacky_gam_2001 at 163.com
Fri Jul 19 01:49:52 PDT 2024
> On 7/19/24 10:07, Ping Gan wrote:
>>> On 7/19/24 07:31, Christoph Hellwig wrote:
>>>> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>>>>> When running nvmf on an SMP platform, the current nvme target's
>>>>> RDMA and TCP transports use a bound workqueue to handle IO, but
>>>>> when there is other heavy workload on the system (e.g. Kubernetes),
>>>>> the contention between the bound kworkers and that workload is
>>>>> severe. To reduce this OS resource contention, this patchset
>>>>> enables an unbound workqueue for nvmet-rdma and nvmet-tcp; beyond
>>>>> that, it also yields some performance improvement. This patchset
>>>>> is based on the previous discussion in the session below.
>>>>
>>>> So why aren't we using unbound workqueues by default? Who makes
>>>> the policy decision, and how does anyone know which one to choose?
>>>>
>>> I'd be happy to switch to unbound workqueues per default.
>>> It actually might be a leftover from the various workqueue changes;
>>> at one point 'unbound' meant that effectively only one CPU was used
>>> for the workqueue, and you had to remove the 'unbound' parameter to
>>> have the workqueue run on all CPUs. That has since changed, so I
>>> guess switching to unbound per default is the better option here.
>>
>> I don't fully understand what you mean by 'by default'. Do you mean
>> we should just remove the 'unbounded' parameter and create the
>> workqueue with the WQ_UNBOUND flag, or, beyond that, should we also
>> add another parameter to switch between 'unbounded' and 'bounded'
>> workqueues?
>>
> The former. Just remove the 'unbounded' parameter and always use the
> 'WQ_UNBOUND' flag when creating the workqueues.
Okay, will do in the next version.
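For reference, the agreed change amounts to passing WQ_UNBOUND
unconditionally at workqueue creation. A rough sketch against the
nvmet-tcp side (the exact existing flags are an assumption here, not
taken from the patch itself):

```c
/* Sketch only: create the nvmet-tcp IO workqueue as unbound by
 * default, instead of gating WQ_UNBOUND behind a module parameter.
 * The WQ_MEM_RECLAIM | WQ_HIGHPRI flags are assumed to match the
 * existing nvmet-tcp setup; max_active of 0 means the default limit.
 */
nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
			       WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND,
			       0);
if (!nvmet_tcp_wq)
	return -ENOMEM;
```

With WQ_UNBOUND, work items are no longer pinned to the submitting
CPU's worker pool, so the scheduler can place the kworkers away from
other busy CPUs.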
Thanks,
Ping