[PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()

Baolin Wang baolin.wang at linux.alibaba.com
Mon Aug 31 21:56:59 EDT 2020


Hi Chaitanya,

On Mon, Aug 31, 2020 at 03:26:56PM -0700, Chaitanya Kulkarni wrote:
> The function nvme_alloc_request() is called from different contexts. The
> only place where it uses a non-NVME_QID_ANY value is for fabrics connect
> commands :-
> 
> nvme_submit_sync_cmd()		NVME_QID_ANY
> nvme_features()			NVME_QID_ANY
> nvme_sec_submit()		NVME_QID_ANY
> nvmf_reg_read32()		NVME_QID_ANY
> nvmf_reg_read64()		NVME_QID_ANY
> nvmf_reg_write32()		NVME_QID_ANY
> nvmf_connect_admin_queue()	NVME_QID_ANY
> nvmf_connect_io_queue() 	QID
> 	__nvme_submit_sync_cmd()
> 		nvme_alloc_request()
> 
> nvme_submit_user_cmd()		NVME_QID_ANY
> 	nvme_alloc_request()
> nvme_keep_alive()		NVME_QID_ANY
> 	nvme_alloc_request()
> nvme_timeout()			NVME_QID_ANY
> 	nvme_alloc_request()
> nvme_delete_queue()		NVME_QID_ANY
> 	nvme_alloc_request()
> nvmet_passthru_execute_cmd()	NVME_QID_ANY
> 	nvme_alloc_request()
> 
> With passthru, nvme_alloc_request() now falls into the I/O fast path.
> Annotate the block layer request allocation case with likely() and the
> error condition with unlikely().
> 
> The only case where this annotation works against us is the fabrics
> connect commands, which are low-frequency commands anyway.
> 
> This leads to the following performance change in the passthru code :-
> 
> Bandwidth up (better) by 1.7 MB/s :-
> -----------------------------------------------------------------------
> * Average Bandwidth nvme-alloc-default: 272.10000 MB/s
> * Average Bandwidth nvme-alloc-likely:  273.80000 MB/s
> 
> Latency down (better) by ~5 usec :-
> -----------------------------------------------------------------------
> * Average Latency util nvme-alloc-default: 917.90300 (usec)
> * Average Latency util nvme-alloc-likely:  912.57100 (usec)
> 
> nvme-alloc-default.fio.10.log:read: IOPS=69.2k, BW=270MiB/s
> nvme-alloc-default.fio.1.log: read: IOPS=69.6k, BW=272MiB/s
> nvme-alloc-default.fio.2.log: read: IOPS=71.1k, BW=278MiB/s
> nvme-alloc-default.fio.3.log: read: IOPS=70.8k, BW=276MiB/s
> nvme-alloc-default.fio.4.log: read: IOPS=70.3k, BW=275MiB/s
> nvme-alloc-default.fio.5.log: read: IOPS=69.8k, BW=273MiB/s
> nvme-alloc-default.fio.6.log: read: IOPS=67.5k, BW=264MiB/s
> nvme-alloc-default.fio.7.log: read: IOPS=67.0k, BW=266MiB/s
> nvme-alloc-default.fio.8.log: read: IOPS=69.5k, BW=271MiB/s
> nvme-alloc-default.fio.9.log: read: IOPS=70.7k, BW=276MiB/s
> 
> nvme-alloc-likely.fio.10.log: read: IOPS=69.5k, BW=272MiB/s
> nvme-alloc-likely.fio.1.log:  read: IOPS=70.1k, BW=274MiB/s
> nvme-alloc-likely.fio.2.log:  read: IOPS=68.4k, BW=267MiB/s
> nvme-alloc-likely.fio.3.log:  read: IOPS=69.3k, BW=271MiB/s
> nvme-alloc-likely.fio.4.log:  read: IOPS=69.9k, BW=273MiB/s
> nvme-alloc-likely.fio.5.log:  read: IOPS=70.2k, BW=274MiB/s
> nvme-alloc-likely.fio.6.log:  read: IOPS=71.6k, BW=280MiB/s
> nvme-alloc-likely.fio.7.log:  read: IOPS=70.5k, BW=276MiB/s
> nvme-alloc-likely.fio.8.log:  read: IOPS=70.6k, BW=276MiB/s
> nvme-alloc-likely.fio.9.log:  read: IOPS=70.3k, BW=275MiB/s
> 
> nvme-alloc-default.fio.10.log:lat (usec): avg=924.12
> nvme-alloc-default.fio.1.log: lat (usec): avg=917.80
> nvme-alloc-default.fio.2.log: lat (usec): avg=898.58
> nvme-alloc-default.fio.3.log: lat (usec): avg=903.38
> nvme-alloc-default.fio.4.log: lat (usec): avg=908.74
> nvme-alloc-default.fio.5.log: lat (usec): avg=915.63
> nvme-alloc-default.fio.6.log: lat (usec): avg=946.41
> nvme-alloc-default.fio.7.log: lat (usec): avg=940.14
> nvme-alloc-default.fio.8.log: lat (usec): avg=920.30
> nvme-alloc-default.fio.9.log: lat (usec): avg=903.93
> 
> nvme-alloc-likely.fio.10.log: lat (usec): avg=919.55
> nvme-alloc-likely.fio.1.log:  lat (usec): avg=912.23
> nvme-alloc-likely.fio.2.log:  lat (usec): avg=934.59
> nvme-alloc-likely.fio.3.log:  lat (usec): avg=921.68
> nvme-alloc-likely.fio.4.log:  lat (usec): avg=914.25
> nvme-alloc-likely.fio.5.log:  lat (usec): avg=910.31
> nvme-alloc-likely.fio.6.log:  lat (usec): avg=892.22
> nvme-alloc-likely.fio.7.log:  lat (usec): avg=906.25
> nvme-alloc-likely.fio.8.log:  lat (usec): avg=905.41
> nvme-alloc-likely.fio.9.log:  lat (usec): avg=909.22
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
> ---
>  drivers/nvme/host/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 5702a3843746..a62fdcbfd1cc 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -508,7 +508,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
>  	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
>  	struct request *req;
>  
> -	if (qid == NVME_QID_ANY) {
> +	if (unlikely(qid == NVME_QID_ANY)) {

From your commit message, this should be likely(), right?
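
i.e. a minimal sketch of what I read the commit message as intending,
keeping the NVME_QID_ANY allocation on the fast path (illustrative only,
not a tested patch; the else-branch arguments are copied from the existing
code around this hunk, they are not part of your diff):

	/* common case: passthru/sync commands allocate on any queue */
	if (likely(qid == NVME_QID_ANY)) {
		req = blk_mq_alloc_request(q, op, flags);
	} else {
		/* rare case: fabrics connect on a specific queue */
		req = blk_mq_alloc_request_hctx(q, op, flags,
				qid ? qid - 1 : 0);
	}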

>  		req = blk_mq_alloc_request(q, op, flags);
>  	} else {
>  		req = blk_mq_alloc_request_hctx(q, op, flags,
> -- 
> 2.22.1
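
For reference, likely()/unlikely() roughly boil down to __builtin_expect()
(see include/linux/compiler.h), so an inverted hint steers the compiler's
block layout away from the passthru fast path that the numbers above were
measured on:

	/* roughly what the kernel's branch hints expand to */
	#define likely(x)	__builtin_expect(!!(x), 1)
	#define unlikely(x)	__builtin_expect(!!(x), 0)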