[PATCH v3 08/20] nvme-tcp-offload: Add IO level implementation

Or Gerlitz gerlitz.or at gmail.com
Thu Jul 1 09:07:06 PDT 2021


On Mon, Jun 28, 2021 at 1:42 PM Shai Malin <malin1024 at gmail.com> wrote:
>
> On Mon, 28 Jun 2021 at 10:10, Or Gerlitz <gerlitz.or at gmail.com> wrote:
> > On Thu, Jun 24, 2021 at 8:41 PM Shai Malin <smalin at marvell.com> wrote:
> > >
> > > From: Dean Balandin <dbalandin at marvell.com>
> > > In this patch, we present the IO level functionality.
> > [..]
> >
> > > +static void nvme_tcp_ofld_set_sg_null(struct nvme_command *c)
> > > +{
> > > +       struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
> > > +       sg->addr = 0;
> >
> > ok
> >
> > > +       sg->length = 0;
> > > +       sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
> > > +                       NVME_SGL_FMT_TRANSPORT_A;
> >
> > > +inline void nvme_tcp_ofld_set_sg_inline(struct nvme_tcp_ofld_queue *queue,
> > > +                                       struct nvme_command *c, u32 data_len)
> > > +{
> > > +       struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
> > > +       sg->addr = cpu_to_le64(queue->ctrl->nctrl.icdoff);
> >
> > ok, what about dma mapping of the address?
>
> The dma mapping is done by the offload device driver.
> patch 18 - "qedn: Add IO level fastpath functionality", in qedn_init_sgl().

The DMA mapping can and should be done by the nvme/tcp offload driver itself,
e.g. in a similar manner to how it's done for nvme/rdma - take a look at the
code there (nvme_rdma_map_data() in drivers/nvme/host/rdma.c).
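Something along these lines (a rough sketch only, loosely modeled on
nvme_rdma_map_data(); the req->sg_table/first_sgl/nents fields, queue->dev,
the set_sg_host_data() helper and NVME_INLINE_SG_CNT are placeholders borrowed
from rdma.c for illustration, not the structures from this series):

static int nvme_tcp_ofld_map_data(struct nvme_tcp_ofld_queue *queue,
		struct request *rq, struct nvme_command *c)
{
	struct nvme_tcp_ofld_req *req = blk_mq_rq_to_pdu(rq);
	int count;

	c->common.flags |= NVME_CMD_SGL_METABUF;

	if (!blk_rq_nr_phys_segments(rq)) {
		nvme_tcp_ofld_set_sg_null(c);
		return 0;
	}

	/* build the scatterlist from the block layer request */
	req->sg_table.sgl = req->first_sgl;
	if (sg_alloc_table_chained(&req->sg_table,
			blk_rq_nr_phys_segments(rq), req->sg_table.sgl,
			NVME_INLINE_SG_CNT))
		return -ENOMEM;

	req->nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);

	/* DMA map in the common layer instead of the vendor driver */
	count = dma_map_sg(queue->dev, req->sg_table.sgl, req->nents,
			rq_dma_dir(rq));
	if (unlikely(count <= 0)) {
		sg_free_table_chained(&req->sg_table, NVME_INLINE_SG_CNT);
		return -EIO;
	}

	nvme_tcp_ofld_set_sg_host_data(c, blk_rq_payload_bytes(rq));

	return 0;
}

That way the vendor driver (e.g. qedn_init_sgl()) would only consume the
already-mapped SG list, and every offload device gets the same map/unmap path.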

>
> >
> > > +       sg->length = cpu_to_le32(data_len);
> > > +       sg->type = (NVME_SGL_FMT_DATA_DESC << 4) | NVME_SGL_FMT_OFFSET;
> >
> > > +static void nvme_tcp_ofld_map_data(struct nvme_command *c, u32 data_len)
> > > +{
> > > +       struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
> > > +
> > > +       sg->addr = 0;
> >
> > ???
>
> We will rename the function: nvme_tcp_ofld_set_sg_host_data().
> The dma mapping is done by the offload device driver.

same comment as above


