[PATCH v1 net-next 05/15] nvme-tcp: Add DDP offload control path
Shai Malin
malin1024 at gmail.com
Tue Dec 15 08:33:50 EST 2020
On 12/14/2020 08:38, Boris Pismenny wrote:
> On 10/12/2020 19:15, Shai Malin wrote:
> > diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> > index c0c33320fe65..ef96e4a02bbd 100644
> > --- a/drivers/nvme/host/tcp.c
> > +++ b/drivers/nvme/host/tcp.c
> > @@ -14,6 +14,7 @@
> > #include <linux/blk-mq.h>
> > #include <crypto/hash.h>
> > #include <net/busy_poll.h>
> > +#include <net/tcp_ddp.h>
> >
> > #include "nvme.h"
> > #include "fabrics.h"
> > @@ -62,6 +63,7 @@ enum nvme_tcp_queue_flags {
> > NVME_TCP_Q_ALLOCATED = 0,
> > NVME_TCP_Q_LIVE = 1,
> > NVME_TCP_Q_POLLING = 2,
> > + NVME_TCP_Q_OFFLOADS = 3,
> > };
> >
> > The same comment from the previous version - we are concerned that using
> > the generic term "offload" for both the transport type (the Marvell work)
> > and the DDP and CRC offload queue (the Mellanox work) may be misleading
> > and confusing to developers and users.
> >
> > As suggested by Sagi, we can call this NVME_TCP_Q_DDP.
> >
>
> While I don't mind changing the naming here, I wonder why not call the
> TOE you use TOE rather than TCP_OFFLOAD, and then "offload" is free for this?
Thanks - please do change the name to NVME_TCP_Q_DDP.
The Marvell nvme-tcp-offload patch series introduces the offloading of both
the TCP layer and the NVMe/TCP layer, therefore it is not a TOE.
>
> Moreover, the most common use of "offload" in the kernel is for partial
> offloads like this one, not for full offloads (such as TOE).
Because each vendor might implement a different partial offload, I suggest
naming the flag after the specific technique used, as was suggested:
NVME_TCP_Q_DDP.
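
As a minimal sketch, the rename would only touch the last flag; the
set_bit() call below is illustrative of how the flag might be raised once
the netdev accepts the DDP offload, not a quote from the posted patch:

enum nvme_tcp_queue_flags {
	NVME_TCP_Q_ALLOCATED	= 0,
	NVME_TCP_Q_LIVE		= 1,
	NVME_TCP_Q_POLLING	= 2,
	NVME_TCP_Q_DDP		= 3,	/* was NVME_TCP_Q_OFFLOADS */
};

	/* e.g., after the device reports TCP DDP support: */
	set_bit(NVME_TCP_Q_DDP, &queue->flags);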