[PATCH v1 net-next 05/15] nvme-tcp: Add DDP offload control path
Boris Pismenny
borispismenny at gmail.com
Mon Dec 14 01:38:12 EST 2020
On 10/12/2020 19:15, Shai Malin wrote:
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index c0c33320fe65..ef96e4a02bbd 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -14,6 +14,7 @@
> #include <linux/blk-mq.h>
> #include <crypto/hash.h>
> #include <net/busy_poll.h>
> +#include <net/tcp_ddp.h>
>
> #include "nvme.h"
> #include "fabrics.h"
> @@ -62,6 +63,7 @@ enum nvme_tcp_queue_flags {
> NVME_TCP_Q_ALLOCATED = 0,
> NVME_TCP_Q_LIVE = 1,
> NVME_TCP_Q_POLLING = 2,
> + NVME_TCP_Q_OFFLOADS = 3,
> };
>
> The same comment as on the previous version - we are concerned that using
> the generic term "offload" for both the transport type (the Marvell work)
> and the DDP and CRC offload queue (the Mellanox work) may be misleading
> and confusing to developers and users.
>
> As suggested by Sagi, we can call this NVME_TCP_Q_DDP.
>
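For reference, a minimal sketch of the rename suggested above, assuming the
rest of the enum stays exactly as posted in this patch:

enum nvme_tcp_queue_flags {
	NVME_TCP_Q_ALLOCATED	= 0,
	NVME_TCP_Q_LIVE		= 1,
	NVME_TCP_Q_POLLING	= 2,
	NVME_TCP_Q_DDP		= 3,	/* was NVME_TCP_Q_OFFLOADS in v1 */
};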
While I don't mind changing the naming here, I wonder why you don't call the
TOE you use TOE rather than TCP_OFFLOAD; then "offload" would be free for this.
Moreover, the most common use of "offload" in the kernel is for partial
offloads like this one, not for full offloads (such as a TOE).
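To illustrate the distinction, a partial offload like this one is gated per
queue by a flag bit while the connection itself stays in the regular kernel
TCP stack; a hypothetical check (the helper name is mine, not from the posted
series) could look like:

/*
 * Hypothetical sketch: the per-queue flag only decides whether the NIC
 * placed the data (DDP) for this queue; the TCP connection is still
 * handled by the kernel stack, unlike a full TOE where the whole
 * connection lives in the NIC.
 */
static inline bool nvme_tcp_queue_has_ddp(struct nvme_tcp_queue *queue)
{
	return test_bit(NVME_TCP_Q_DDP, &queue->flags);
}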