[PATCH V2] nvmet-tcp: enable optional queue idle period tracking

Wunderlich, Mark mark.wunderlich at intel.com
Wed Feb 24 13:00:41 EST 2021


Add a 'queue idle period' option used by io_work() to support
network devices that provide advanced interrupt moderation
with a relaxed interrupt model. It was discovered that, with
such a NIC on the target, initiators were unable to establish
connections because the existing io_work() flow exits
immediately, without re-queueing itself, on the first loop
iteration with no activity.

With this new option a queue is assigned a period of time
during which no activity must occur before it is considered
'idle'.  Until the queue becomes idle, the work item is re-queued.

The new module option is defined as changeable at runtime,
making it flexible for testing purposes.
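
For illustration only (not part of this patch): because the
parameter is created with mode 0644, it can be set at module load
time or adjusted later through sysfs.  The value of 1000 usecs
below is just an example:

  modprobe nvmet_tcp queue_idle_period_usecs=1000
  # or, at runtime:
  echo 1000 > /sys/module/nvmet_tcp/parameters/queue_idle_period_usecs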

The pre-existing legacy behavior is preserved when the queue idle
period module option is not specified.
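
For reviewers, a minimal sketch of the io_work() exit decision
after this patch (mirroring the hunks below, with the send/recv
loop elided):

  /* end of nvmet_tcp_io_work(), simplified */
  if (queue_idle_period_usecs) {
  	if (!time_after(jiffies, queue->idle_poll_deadline)) {
  		pending = true;	/* deadline not reached, keep polling */
  		if (ops > 0)	/* activity: fresh deadline next pass */
  			queue->idle_poll_deadline = 0;
  	} else {
  		queue->idle_poll_deadline = 0;	/* idle: reset for later */
  	}
  }
  if (pending)
  	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);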

Signed-off-by: Mark Wunderlich <mark.wunderlich at intel.com>
---
V2 of this patch removes the accounting that deducted time from
the idle deadline period only during io_work activity.  The result
is a simpler solution, requiring only the selection of an optional
time period long enough to catch any non-idle activity and keep a
queue active.

Testing was performed with a NIC using standard HW interrupt mode,
both with and without the new module option enabled.  No measurable
performance drop was seen with the patch applied, whether or not
the new option was specified.  A side effect of using the new
option with a standard NIC is a reduced context switch rate: we
measured a drop from roughly 90K to fewer than 300 (for 32 active
connections).
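
As an aside (not from the original test notes), one way to observe
the system-wide context switch rate during such a run:

  perf stat -e context-switches -a sleep 10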

A NIC using a passive advanced interrupt moderation policy was then
able to successfully establish and maintain active connections with
the target.
---
 drivers/nvme/target/tcp.c |   55 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 53 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index dc1f0f647189..96b6c28e327b 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -29,6 +29,16 @@ static int so_priority;
 module_param(so_priority, int, 0644);
 MODULE_PARM_DESC(so_priority, "nvmet tcp socket optimize priority");
 
+/* Define a time period (in usecs) for which io_work() will sample an
+ * activated queue before determining it to be idle.  This optional module
+ * behavior can enable NIC solutions that support socket optimized packet
+ * processing using advanced interrupt moderation techniques.
+ */
+static int queue_idle_period_usecs;
+module_param(queue_idle_period_usecs, int, 0644);
+MODULE_PARM_DESC(queue_idle_period_usecs,
+		"nvmet tcp io_work queue idle time period in usecs");
+
 #define NVMET_TCP_RECV_BUDGET		8
 #define NVMET_TCP_SEND_BUDGET		8
 #define NVMET_TCP_IO_WORK_BUDGET	64
@@ -96,6 +106,7 @@ struct nvmet_tcp_queue {
 	struct work_struct	io_work;
 	struct nvmet_cq		nvme_cq;
 	struct nvmet_sq		nvme_sq;
+	unsigned long           idle_poll_deadline;
 
 	/* send state */
 	struct nvmet_tcp_cmd	*cmds;
@@ -1198,6 +1209,25 @@ static void nvmet_tcp_schedule_release_queue(struct nvmet_tcp_queue *queue)
 	spin_unlock(&queue->state_lock);
 }
 
+/*
+ * This worker function processes all send and recv packet
+ * activity for a queue. It loops on the queue for up to a
+ * given maximum operation budget, or until a single loop
+ * iteration sees no activity.
+ *
+ * Two exit modes are possible.
+ *
+ * The default 'pending' mode, where the worker re-queues
+ * itself after exiting the work loop only if any send or recv
+ * activity was recorded during the last pass through the loop.
+ *
+ * An optional 'idle period' mode where, in addition to re-queueing
+ * itself on activity, the worker also tracks whether the queue has
+ * reached an assigned 'idle' deadline. The worker consumes the
+ * assigned time period, across potentially many invocations with no
+ * activity, until it has expired. Any activity during an invocation
+ * triggers a fresh idle period deadline.
+ */
 static void nvmet_tcp_io_work(struct work_struct *w)
 {
 	struct nvmet_tcp_queue *queue =
@@ -1205,6 +1235,11 @@ static void nvmet_tcp_io_work(struct work_struct *w)
 	bool pending;
 	int ret, ops = 0;
 
+	if (queue_idle_period_usecs && queue->idle_poll_deadline == 0)
+		/* Assign the queue idle period deadline if not already set */
+		queue->idle_poll_deadline =
+			jiffies + usecs_to_jiffies(queue_idle_period_usecs);
+
 	do {
 		pending = false;
 
@@ -1222,8 +1257,24 @@ static void nvmet_tcp_io_work(struct work_struct *w)
 
 	} while (pending && ops < NVMET_TCP_IO_WORK_BUDGET);
 
-	/*
-	 * We exahusted our budget, requeue our selves
+	/* If the optional deadline mode is active, determine whether the
+	 * queue has reached its idle deadline.  Any ops activity awards
+	 * the queue a new deadline period.
+	 */
+	if (queue_idle_period_usecs) {
+		/* Clear the deadline to award an active non-idle queue a new
+		 * period, or to reset for future activity once idle is reached.
+		 */
+		if (!time_after(jiffies, queue->idle_poll_deadline)) {
+			pending = true;
+			if (ops > 0)
+				queue->idle_poll_deadline = 0;
+		} else
+			queue->idle_poll_deadline = 0;
+	}
+
+	/* We re-queue ourselves when pending indicates that activity was
+	 * recorded, or the queue has not yet reached its optional idle deadline.
 	 */
 	if (pending)
 		queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);


