[PATCH 13/14] nvmet: use minimized version of blk_rq_append_bio

Logan Gunthorpe logang at deltatee.com
Mon Aug 10 18:41:21 EDT 2020



On 2020-08-10 12:54 p.m., Chaitanya Kulkarni wrote:
> The function blk_rq_append_bio() is a generic API written for all
> types of drivers (including those that need bounce buffers) and for
> different contexts (where the request already has a bio, i.e.
> rq->bio != NULL).
> 
> It mainly does three things: it calculates the number of segments,
> handles the bounce queue, and either calls blk_rq_bio_prep() when
> rq->bio == NULL or handles the low-level back-merge case.
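> 
> For reference, the function in the current block layer has roughly
> this shape (a sketch from the 5.8-era code, with the error unwinding
> elided, not a verbatim copy):
> 
>     int blk_rq_append_bio(struct request *rq, struct bio **bio)
>     {
>             struct bvec_iter iter;
>             struct bio_vec bv;
>             unsigned int nr_segs = 0;
> 
>             blk_queue_bounce(rq->q, bio);           /* bounce check      */
> 
>             bio_for_each_bvec(bv, *bio, iter)       /* segment counting  */
>                     nr_segs++;
> 
>             if (!rq->bio) {                         /* first bio on rq   */
>                     blk_rq_bio_prep(rq, *bio, nr_segs);
>             } else {                                /* back-merge path   */
>                     if (!ll_back_merge_fn(rq, *bio, nr_segs))
>                             return -EINVAL;
>                     rq->biotail->bi_next = *bio;
>                     rq->biotail = *bio;
>                     rq->__data_len += (*bio)->bi_iter.bi_size;
>             }
>             return 0;
>     }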
> 
> The NVMe PCIe driver does not use the queue bounce mechanism, yet for
> every passthru request blk_rq_append_bio() does this extra work in the
> fast path just to find that out.
> 
> When I ran I/Os with different block sizes on the passthru controller
> I found that we can reuse req->sg_cnt instead of iterating over the
> bvecs to find nr_segs in blk_rq_append_bio(). This calculation in
> blk_rq_append_bio() duplicates work, given that we already have the
> value in req->sg_cnt. (Correct me here if I'm wrong.)
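> 
> To illustrate the duplication (a sketch, not the exact code): the bvec
> walk in blk_rq_append_bio() recomputes a count that the passthru path
> already holds in req->sg_cnt:
> 
>     /* blk_rq_append_bio(): recount the segments from the bio */
>     bio_for_each_bvec(bv, *bio, iter)
>             nr_segs++;
> 
>     /* passthru caller: the same count is already available */
>     nr_segs = req->sg_cnt;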
> 
> With the NVMe passthru request-based driver we allocate a fresh
> request each time, so rq->bio will be NULL on every call to
> blk_rq_append_bio(), i.e. we don't really need the second condition in
> blk_rq_append_bio() nor the resulting error check in its caller.
> 
> So for the NVMeOF passthru driver, recalculating the segments, the
> bounce check, and the ll_back_merge_fn() path are not needed, and we
> can get away with a minimal version of blk_rq_append_bio() that removes
> the error check from the fast path along with the extra variable in
> nvmet_passthru_map_sg().
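> 
> The resulting nvmet_passthru_map_sg() would then look something along
> these lines (a sketch of the direction, not the literal diff):
> 
>     static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>     {
>             struct scatterlist *sg;
>             struct bio *bio;
>             int i;
> 
>             if (req->sg_cnt > BIO_MAX_PAGES)
>                     return -EINVAL;
> 
>             bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
>             bio->bi_end_io = bio_put;
>             bio->bi_opf = req_op(rq);
> 
>             for_each_sg(req->sg, sg, req->sg_cnt, i) {
>                     if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
>                                         sg->offset) < sg->length) {
>                             bio_put(bio);
>                             return -EINVAL;
>                     }
>             }
> 
>             /*
>              * rq->bio is always NULL here (fresh request), so prep the
>              * request directly with the segment count we already have
>              * instead of going through blk_rq_append_bio().
>              */
>             blk_rq_bio_prep(rq, bio, req->sg_cnt);
> 
>             return 0;
>     }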
> 
> This patch updates nvmet_passthru_map_sg() so that, in the context of
> the NVMeOF passthru driver, it only appends the bio to the request.
> The perf numbers follow :-
> 
> With current implementation (blk_rq_append_bio()) :-
> ----------------------------------------------------
> +    5.80%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    4.88%     0.00%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    4.86%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    5.17%     0.00%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
> 
> With this patch :-
> ----------------------------------------------------
> +    3.14%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    3.26%     0.01%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    5.37%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    5.18%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    4.84%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> +    4.87%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>

Looks good to me. For this patch and the previous:

Reviewed-by: Logan Gunthorpe <logang at deltatee.com>


