Flush warning
Hal Rosenstock
hal at dev.mellanox.co.il
Mon Aug 14 05:13:04 PDT 2017
On 8/13/2017 6:33 AM, Leon Romanovsky wrote:
> On Sun, Aug 13, 2017 at 02:14:58AM -0700, Sagi Grimberg wrote:
>>
>>>>>> Does anyone else know?
>>>>>
>>>>> Consider that the ib_core can be used to back storage. Ie consider a
>>>>> situation where iSER/NFS/SRP needs to reconnect to respond to kernel
>>>>> paging/reclaim.
>>>>>
>>>>> On the surface it seems reasonable to me that these are on a reclaim
>>>>> path?
>>
>> I'm pretty sure that ULP connect will trigger memory allocations, which
>> will fail under memory pressure... Maybe I'm missing something.
>>
>>>> hmm. That seems reasonable. Then I would think the nvme_rdma would also need
>>>> to be using a reclaim workqueue.
>>>>
>>>> Sagi, Do you think I should add a private workqueue with WQ_MEM_RECLAIM to
>>>> nvme_rdma vs using the system_wq? nvme/target probably needs one also...
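[A minimal sketch of the private reclaim-safe workqueue being discussed here;
the nvme_rdma_wq and reconnect_work names are made up for illustration and
this is not the actual nvme_rdma patch:]

    #include <linux/workqueue.h>

    /* A workqueue created with WQ_MEM_RECLAIM gets a dedicated rescuer
     * thread, so its work items can still run while the system is
     * reclaiming memory. */
    static struct workqueue_struct *nvme_rdma_wq;

    static int nvme_rdma_wq_init(void)
    {
            nvme_rdma_wq = alloc_workqueue("nvme_rdma_wq", WQ_MEM_RECLAIM, 0);
            return nvme_rdma_wq ? 0 : -ENOMEM;
    }

    /* Reconnect/teardown work would then be queued here instead of on
     * the system workqueue (schedule_work()/system_wq): */
    static void nvme_rdma_queue_reconnect(struct work_struct *reconnect_work)
    {
            queue_work(nvme_rdma_wq, reconnect_work);
    }

[Flushing such a private queue from CM context would also pass the workqueue
flush-dependency check, since both sides carry WQ_MEM_RECLAIM; whether the
queued work itself allocates memory is the separate question raised below.]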
>>
>> I'm not sure, being unable to flush system workqueue from CM context is
>> somewhat limiting... We could use a private workqueue for nvmet
>> teardowns but I'm not sure we want to do that.
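[For reference, the flush warning in the subject comes from the workqueue
flush-dependency check: a work item running on a WQ_MEM_RECLAIM workqueue must
not wait on a queue that lacks WQ_MEM_RECLAIM, because that queue may stall
under memory pressure. A rough illustration; the work function is hypothetical
and the warning text is paraphrased:]

    #include <linux/workqueue.h>

    /* Imagine this runs on the CM workqueue, which is created with
     * WQ_MEM_RECLAIM.  Flushing system_wq (no WQ_MEM_RECLAIM) from here
     * trips check_flush_dependency() in kernel/workqueue.c, which warns
     * roughly "WQ_MEM_RECLAIM <wq> is flushing !WQ_MEM_RECLAIM events". */
    static void ulp_teardown_work(struct work_struct *work)
    {
            flush_workqueue(system_wq);
    }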
>>
>>> A workqueue which frees memory and doesn't allocate memory during execution
>>> is supposed to be marked as WQ_MEM_RECLAIM. This flag guarantees that such a
>>> workqueue can still make forward progress during low-memory conditions.
>>
>> Which to my understanding means that CM workqueue should not use it as
>> on each CM connect, by definition the ULP allocates memory (qp, cq etc).
>
> From my understanding too.
> That workqueue was introduced in 2005, in a977049dacde
> ("[PATCH] IB: Add the kernel CM implementation"); it is not clear whether
> that was intentional.
>
> Hal,
> do you remember the rationale there?
Sean is best to respond to this.
-- Hal
>
> Thanks
>