[PATCH v1 net-next 05/15] nvme-tcp: Add DDP offload control path

Shai Malin smalin at marvell.com
Thu Dec 10 12:15:30 EST 2020


diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index c0c33320fe65..ef96e4a02bbd 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -14,6 +14,7 @@
 #include <linux/blk-mq.h>
 #include <crypto/hash.h>
 #include <net/busy_poll.h>
+#include <net/tcp_ddp.h>
 
 #include "nvme.h"
 #include "fabrics.h"
@@ -62,6 +63,7 @@ enum nvme_tcp_queue_flags {
 	NVME_TCP_Q_ALLOCATED	= 0,
 	NVME_TCP_Q_LIVE		= 1,
 	NVME_TCP_Q_POLLING	= 2,
+	NVME_TCP_Q_OFFLOADS     = 3,
 };

Repeating our comment from the previous version: we are concerned that using
the generic term "offload" both for the transport type (the Marvell work) and
for the DDP and CRC offload queue (the Mellanox work) may be misleading and
confusing to developers and to users.

As suggested by Sagi, we can call this NVME_TCP_Q_DDP. 
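
For illustration only, a minimal sketch of how the flag could read with the
suggested name. The nvme_tcp_queue_ddp() helper below is hypothetical and not
part of this patch; it just shows the usual test_bit() pattern already used
for the other NVME_TCP_Q_* bits on queue->flags:

	enum nvme_tcp_queue_flags {
		NVME_TCP_Q_ALLOCATED	= 0,
		NVME_TCP_Q_LIVE		= 1,
		NVME_TCP_Q_POLLING	= 2,
		NVME_TCP_Q_DDP		= 3,	/* DDP/CRC offload enabled on this queue */
	};

	/* hypothetical helper, shown only to illustrate the naming */
	static inline bool nvme_tcp_queue_ddp(struct nvme_tcp_queue *queue)
	{
		return test_bit(NVME_TCP_Q_DDP, &queue->flags);
	}

That would keep the per-queue DDP/CRC flag clearly distinct from the
offloaded-transport naming.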


