[linux-nvme:nvme-6.14 2/36] drivers/nvme/host/tcp.c:1583:18: warning: variable 'n' set but not used

Chaitanya Kulkarni chaitanyak at nvidia.com
Wed Jan 8 21:23:24 PST 2025


Sagi,

On 1/8/25 17:36, kernel test robot wrote:
> All warnings (new ones prefixed by >>):
>
>     drivers/nvme/host/tcp.c: In function 'nvme_tcp_set_queue_io_cpu':
>>> drivers/nvme/host/tcp.c:1583:18: warning: variable 'n' set but not used [-Wunused-but-set-variable]
>      1583 |         int cpu, n = 0, min_queues = INT_MAX, io_cpu;
>           |                  ^
> --
>>> drivers/nvme/host/tcp.c:1578: warning: Function parameter or struct member 'queue' not described in 'nvme_tcp_set_queue_io_cpu'
>>> drivers/nvme/host/tcp.c:1578: warning: expecting prototype for Track the number of queues assigned to each cpu using a global per(). Prototype was for nvme_tcp_set_queue_io_cpu() instead

How about something like this?

 From 8fa23fd86e82664526865c628529b8bce3c413be Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <kch at nvidia.com>
Date: Wed, 8 Jan 2025 20:52:52 -0800
Subject: [PATCH] nvme-tcp: remove unused variable

The variable n declared in nvme_tcp_set_queue_io_cpu() was used by the
original I/O CPU assignment calculation.

Since commit bd0f5c103101 ("nvme-tcp: Fix I/O queue cpu spreading for
multiple controllers") the calculation is based on io_cpu and the
mq_map local variable, so n is set but never used.

Remove the unused variable and the assignments to it.

Signed-off-by: Chaitanya Kulkarni <kch at nvidia.com>
---
  drivers/nvme/host/tcp.c | 14 +++++---------
  1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 544d6aa00cc3..8f21803a5a60 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1579,22 +1579,18 @@ static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
  	struct blk_mq_tag_set *set = &ctrl->tag_set;
  	int qid = nvme_tcp_queue_id(queue) - 1;
  	unsigned int *mq_map;
-	int cpu, n = 0, min_queues = INT_MAX, io_cpu;
+	int cpu, min_queues = INT_MAX, io_cpu;
  
  	if (wq_unbound)
  		goto out;
  
-	if (nvme_tcp_default_queue(queue)) {
+	if (nvme_tcp_default_queue(queue))
  		mq_map = set->map[HCTX_TYPE_DEFAULT].mq_map;
-		n = qid;
-	} else if (nvme_tcp_read_queue(queue)) {
+	else if (nvme_tcp_read_queue(queue))
  		mq_map = set->map[HCTX_TYPE_READ].mq_map;
-		n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT];
-	} else if (nvme_tcp_poll_queue(queue)) {
+	else if (nvme_tcp_poll_queue(queue))
  		mq_map = set->map[HCTX_TYPE_POLL].mq_map;
-		n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] -
-				ctrl->io_queues[HCTX_TYPE_READ];
-	}
+
  	if (WARN_ON(!mq_map))
  		goto out;
  
-- 
2.40.0


-ck
