[PATCH 4/4] nvme: add support for mq_ops->queue_rqs()

Jens Axboe axboe at kernel.dk
Thu Dec 16 08:25:20 PST 2021


On 12/16/21 9:19 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>> +
>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>> should call nvme_sq_copy_cmd().
>>>>> I also noticed that.
>>>>>
>>>>> So need to decide if to open code it or use the helper function.
>>>>>
>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>> Yes agree, that's been my stance too :-)
>>>>
>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>> the performance degration measured on the first try was a measurement
>>>>>> error?
>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>
>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>> algorithm ?
>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>> in total. I do agree that if we ever made it much larger, then we might
>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>> to get enough gain from the batching done in various areas, while still
>>>> not making it so large that we have a potential latency issue. That
>>>> batch count is already used consistently for other items too (like tag
>>>> allocation), so it's not specific to just this one case.
>>> I'm saying that the you can wait to the batch_max_count too long and it
>>> won't be efficient from latency POV.
>>>
>>> So it's better to limit the block layar to wait for the first to come: x
>>> usecs or batch_max_count before issue queue_rqs.
>> There's no waiting specifically for this, it's just based on the plug.
>> We just won't do more than 32 in that plug. This is really just an
>> artifact of the plugging, and if that should be limited based on "max of
>> 32 or xx time", then that should be done there.
>>
>> But in general I think it's saner and enough to just limit the total
>> size. If we spend more than xx usec building up the plug list, we're
>> doing something horribly wrong. That really should not happen with 32
>> requests, and we'll never eg wait on requests if we're out of tags. That
>> will result in a plug flush to begin with.
> 
> I'm not aware of the plug. I hope to get to it soon.
> 
> My concern is if the user application submitted only 28 requests and 
> then you'll wait forever ? or for very long time.
> 
> I guess not, but I'm asking how do you know how to batch and when to 
> stop in case 32 commands won't arrive anytime soon.

The plug is in the stack of the task, so that condition can never
happen. If the application originally asks for 32 but then only submits
28, then once that last one is submitted the plug is flushed and
requests are issued.

>>> Also, This batch is per HW queue or SW queue or the entire request queue ?
>> It's per submitter, so whatever the submitter ends up queueing IO
>> against. In general it'll be per-queue.
> 
> struct request_queue ?
> 
> I think the best is to batch per struct blk_mq_hw_ctx.
> 
> I see that you check this in the nvme_pci driver but shouldn't it go to 
> the block layer ?

That's not how plugging works. In general, unless your task bounces
around, then it'll be a single queue and a single hw queue as well.
Adding code to specifically check the mappings and flush at that point
would be a net loss, rather than just deal with it if it happens for
some cases.

-- 
Jens Axboe




More information about the Linux-nvme mailing list