[PATCH v3 08/20] nvme-tcp-offload: Add IO level implementation
Or Gerlitz
gerlitz.or at gmail.com
Mon Jul 5 02:28:47 PDT 2021
On Sat, Jul 3, 2021 at 2:28 AM Sagi Grimberg <sagi at grimberg.me> wrote:
> >>>> From: Dean Balandin <dbalandin at marvell.com>
> >>>> In this patch, we present the IO level functionality.
> >>> [..]
> >>>
> >>>> +static void nvme_tcp_ofld_set_sg_null(struct nvme_command *c)
> >>>> +{
> >>>> + struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
> >>>> + sg->addr = 0;
> >>>
> >>> ok
> >>>
> >>>> + sg->length = 0;
> >>>> + sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
> >>>> + NVME_SGL_FMT_TRANSPORT_A;
> >>>
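
(Side note for readers following along: the hunk above is trimmed
mid-function; the complete helper is essentially the following,
matching what nvme/tcp does today for commands that carry no data.)

	static void nvme_tcp_ofld_set_sg_null(struct nvme_command *c)
	{
		struct nvme_sgl_desc *sg = &c->common.dptr.sgl;

		/* null transport SGL descriptor: no address, zero
		 * length, transport-specific format A
		 */
		sg->addr = 0;
		sg->length = 0;
		sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
				NVME_SGL_FMT_TRANSPORT_A;
	}
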
> >>>> +inline void nvme_tcp_ofld_set_sg_inline(struct nvme_tcp_ofld_queue *queue,
> >>>> + struct nvme_command *c, u32 data_len)
> >>>> +{
> >>>> + struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
> >>>> + sg->addr = cpu_to_le64(queue->ctrl->nctrl.icdoff);
> >>>
> >>> ok, what about dma mapping of the address?
> >>
> >> The dma mapping is done by the offload device driver; see patch 18
> >> ("qedn: Add IO level fastpath functionality"), in qedn_init_sgl().
> >
> > The dma mapping can and should be done by the nvme/tcp driver, e.g. in
> > a similar manner to how it's done for nvme/rdma; you can take a look at
> > the code there.
>
> I agree here, the fact that the lld calls blk_rq_map_sg is very much
> backwards... If the lld is peeking into a block layer request or bio,
> it's a sign that the layering is wrong...
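
Right. For reference, in nvme/rdma the whole mapping lives in the
transport driver itself. From memory (a simplified sketch, with error
unwinding and the inline/null descriptor cases dropped, and field
names approximated), nvme_rdma_map_data() does roughly:

	/* the transport driver owns both steps: build the sg list
	 * from the block layer request, then dma-map it for the device
	 */
	ret = sg_alloc_table_chained(&req->sg_table,
			blk_rq_nr_phys_segments(rq),
			req->sg_table.sgl, NVME_INLINE_SG_CNT);
	if (ret)
		return -ENOMEM;

	req->nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);

	count = ib_dma_map_sg(ibdev, req->sg_table.sgl, req->nents,
			rq_dma_dir(rq));
	if (count <= 0)
		return -EIO;
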
To make things more precise: looking at the patch set we have been
running here over the last year, blk_rq_map_sg() is indeed called by
the nvme driver, and then dma_map_sg() by the hw driver. This is
slightly different from the rdma case above. Overall, the sequence of
calls is the following:

sg_alloc_table_chained()
blk_rq_map_sg()
dma_map_sg()
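
i.e. the first two steps run in the nvme-tcp-offload (ULP) layer and
only the dma_map_sg() step runs in the offload device driver. A rough
sketch of how that split looks (names here are illustrative, not
lifted verbatim from the patches):

	/* ULP layer (nvme-tcp-offload): build the sg list from the
	 * request; no dma mapping at this level
	 */
	ret = sg_alloc_table_chained(&req->sg_table,
			blk_rq_nr_phys_segments(rq),
			req->sg_table.sgl, NVME_INLINE_SG_CNT);
	if (ret)
		return -ENOMEM;
	req->nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);

	/* offload device driver (e.g. in qedn_init_sgl()): dma-map
	 * the sg list the ULP handed over, against its own pci dev
	 */
	count = dma_map_sg(&qedn->pdev->dev, req->sg_table.sgl,
			req->nents, rq_dma_dir(rq));
	if (!count)
		return -EIO;

So the lld never touches the request or bio itself, only the sg list
it is handed, which I believe keeps the layering intact.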