[PATCH V4] nvmet-tcp: enable optional queue idle period tracking

Wunderlich, Mark mark.wunderlich at intel.com
Wed Mar 24 17:47:50 GMT 2021


nvmet-tcp: enable optional queue idle period tracking

Add an 'idle_poll_period_usecs' option used by io_work() to support
network devices that implement advanced interrupt moderation with a
relaxed interrupt model. It was discovered that such a NIC used on
the target could not sustain initiator connection establishment,
because the existing io_work() flow exits immediately after a loop
with no activity and does not re-queue itself.

With this new option a queue is assigned a period of time during
which no activity must occur before it is considered 'idle'.  Until
the queue becomes idle, the work item is requeued.

The new module option is defined as changeable at runtime, making it
flexible for testing purposes.

The pre-existing legacy behavior is preserved when the
idle_poll_period_usecs module option is not specified.
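
To illustrate the intended behavior, below is a minimal user-space
sketch (not part of the patch) of the requeue decision io_work()
makes when the option is enabled.  The period value, the simulated
jiffies counter and the simplified range check are illustrative
assumptions only; the patch itself uses usecs_to_jiffies() and
time_in_range():

/*
 * Minimal user-space sketch (not kernel code) of the requeue decision
 * this patch adds to io_work().  Names and values are illustrative;
 * "jiffies" is simulated as a plain counter and time_in_range() as
 * simple compares.
 */
#include <stdbool.h>
#include <stdio.h>

static unsigned long jiffies;              /* simulated clock */
static unsigned long idle_poll_period = 5; /* e.g. 5000 usecs at HZ=1000 */
static unsigned long poll_start, poll_end;

static void arm_queue_deadline(void)
{
    poll_start = jiffies;
    poll_end = poll_start + idle_poll_period;
}

/* Mirrors the intent of nvmet_tcp_check_queue_deadline(): any activity
 * (ops) re-arms the idle window; the worker keeps requeueing while the
 * window has not yet expired.
 */
static bool check_queue_deadline(int ops)
{
    if (!idle_poll_period)
        return false;          /* option off: legacy behavior */
    if (ops)
        arm_queue_deadline();  /* activity pushes the deadline out */
    return jiffies >= poll_start && jiffies <= poll_end;
}

int main(void)
{
    arm_queue_deadline();      /* armed once at queue creation */
    for (jiffies = 0; jiffies < 10; jiffies++) {
        int ops = (jiffies == 2);  /* pretend one op at tick 2 */
        printf("tick %lu requeue=%d\n", jiffies,
               check_queue_deadline(ops));
    }
    return 0;
}

Since the parameter is declared with mode 0644, it is expected to be
adjustable at runtime through the usual module parameter sysfs
interface (typically /sys/module/nvmet_tcp/parameters/) in addition
to being set at module load time.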

Signed-off-by: Mark Wunderlich <mark.wunderlich at intel.com>
---
Changes from V3:
 - remove the unnecessary 'requeue' local variable previously added to
   io_work().
 - instead of checking and clearing the poll limits inside the new
   function nvmet_tcp_check_queue_deadline(), arm the poll limits
   before queueing io_work() during queue creation.  The poll limits
   must be armed before any invocation of io_work() that is not the
   result of posted activity or an interrupt, otherwise the worker
   will exit prematurely.
---
Changes from V2:
 - pull the tracking code out of io_work() and into two support
   functions.
 - simplify io_work() with a single test, made after the process loop,
   to determine whether optional idle tracking is active.
 - the base logic of idle tracking, using a condition to decide
   whether to re-queue the worker, remains the same.
---
Changes from V1:
 - remove the accounting of time spent only within io_work() that was
   deducted from the assigned idle deadline period.  This
   simplification just requires selecting a time period large enough
   to catch any non-idle activity and keep a queue active.
 - testing was performed with a NIC using standard HW interrupt mode,
   with and without the new module option enabled.  No measurable
   performance drop was seen when the patch was applied, whether or
   not the new option was specified.  A side effect of using the new
   option with a standard NIC is a reduced context switch rate; we
   measured a drop from roughly 90K to less than 300 (for 32 active
   connections).
 - a NIC using a passive advanced interrupt moderation policy was then
   successfully able to establish and maintain active connections with
   the target.
---
 drivers/nvme/target/tcp.c |   39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index dc1f0f647189..c44b34f0d46c 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -29,6 +29,16 @@ static int so_priority;
 module_param(so_priority, int, 0644);
 MODULE_PARM_DESC(so_priority, "nvmet tcp socket optimize priority");
 
+/* Define a time period (in usecs) that io_work() shall sample an activated
+ * queue before determining it to be idle.  This optional module behavior
+ * can enable NIC solutions that support socket optimized packet processing
+ * using advanced interrupt moderation techniques.
+ */
+static int idle_poll_period_usecs;
+module_param(idle_poll_period_usecs, int, 0644);
+MODULE_PARM_DESC(idle_poll_period_usecs,
+		"nvmet tcp io_work poll till idle time period in usecs");
+
 #define NVMET_TCP_RECV_BUDGET		8
 #define NVMET_TCP_SEND_BUDGET		8
 #define NVMET_TCP_IO_WORK_BUDGET	64
@@ -119,6 +129,9 @@ struct nvmet_tcp_queue {
 	struct ahash_request	*snd_hash;
 	struct ahash_request	*rcv_hash;
 
+	unsigned long		poll_start;
+	unsigned long		poll_end;
+
 	spinlock_t		state_lock;
 	enum nvmet_tcp_queue_state state;
 
@@ -1198,6 +1211,25 @@ static void nvmet_tcp_schedule_release_queue(struct nvmet_tcp_queue *queue)
 	spin_unlock(&queue->state_lock);
 }
 
+static inline void nvmet_tcp_arm_queue_deadline(struct nvmet_tcp_queue *queue)
+{
+	queue->poll_start = jiffies;
+	queue->poll_end = queue->poll_start +
+			usecs_to_jiffies(idle_poll_period_usecs);
+}
+
+static bool nvmet_tcp_check_queue_deadline(struct nvmet_tcp_queue *queue,
+		int ops)
+{
+	if (!idle_poll_period_usecs)
+		return false;
+
+	if (ops)
+		nvmet_tcp_arm_queue_deadline(queue);
+
+	return time_in_range(jiffies, queue->poll_start, queue->poll_end);
+}
+
 static void nvmet_tcp_io_work(struct work_struct *w)
 {
 	struct nvmet_tcp_queue *queue =
@@ -1223,9 +1255,10 @@ static void nvmet_tcp_io_work(struct work_struct *w)
 	} while (pending && ops < NVMET_TCP_IO_WORK_BUDGET);
 
 	/*
-	 * We exahusted our budget, requeue our selves
+	 * Requeue the worker if idle deadline period is in progress or any
+	 * ops activity was recorded during the do-while loop above.
 	 */
-	if (pending)
+	if (nvmet_tcp_check_queue_deadline(queue, ops) || pending)
 		queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
 }
 
@@ -1525,6 +1558,8 @@ static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
 	if (ret)
 		goto out_destroy_sq;
 
+	if (idle_poll_period_usecs)
+		nvmet_tcp_arm_queue_deadline(queue);
 	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
 
 	return 0;


