[PATCH v3 08/20] nvme-tcp-offload: Add IO level implementation

Sagi Grimberg sagi at grimberg.me
Fri Jul 2 16:28:46 PDT 2021


>>>> From: Dean Balandin <dbalandin at marvell.com>
>>>> In this patch, we present the IO level functionality.
>>> [..]
>>>
>>>> +static void nvme_tcp_ofld_set_sg_null(struct nvme_command *c)
>>>> +{
>>>> +       struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
>>>> +       sg->addr = 0;
>>>
>>> ok
>>>
>>>> +       sg->length = 0;
>>>> +       sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
>>>> +                       NVME_SGL_FMT_TRANSPORT_A;
>>>
>>>> +inline void nvme_tcp_ofld_set_sg_inline(struct nvme_tcp_ofld_queue *queue,
>>>> +                                       struct nvme_command *c, u32 data_len)
>>>> +{
>>>> +       struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
>>>> +       sg->addr = cpu_to_le64(queue->ctrl->nctrl.icdoff);
>>>
>>> ok, what about dma mapping of the address?
>>
>> The DMA mapping is done by the offload device driver; see patch 18,
>> "qedn: Add IO level fastpath functionality", in qedn_init_sgl().
> 
> The DMA mapping can and should be done by the nvme/tcp driver, e.g. in a
> similar manner to how it's done for nvme/rdma; you can take a look at the code there.

I agree here, the fact that the lld calls blk_rq_map_sg is very much
backwards... If the lld is peeking into a block layer request or bio,
it's a sign that the layering is wrong...
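To make the suggested layering concrete, here is a hedged sketch of the
nvme/rdma-style split: the core transport driver owns both the block-layer
scatterlist walk and the DMA mapping, and hands the offload driver only the
mapped scatterlist. The function name nvme_ofld_map_data and the exact error
handling are illustrative assumptions, not the literal upstream code
(nvme/rdma does the equivalent in nvme_rdma_map_data() using ib_dma_map_sg()).

```c
/*
 * Illustrative sketch only: core transport driver maps the request,
 * the LLD never touches struct request or bio. Scatterlist allocation
 * (e.g. sg_alloc_table_chained()) is omitted for brevity.
 */
static int nvme_ofld_map_data(struct device *dma_dev, struct request *rq,
			      struct scatterlist *sgl)
{
	int nents, dma_nents;

	/* The block-layer walk stays in the core transport driver... */
	nents = blk_rq_map_sg(rq->q, rq, sgl);
	if (!nents)
		return -EIO;

	/* ...and so does the DMA mapping, mirroring nvme/rdma. */
	dma_nents = dma_map_sg(dma_dev, sgl, nents, rq_dma_dir(rq));
	if (!dma_nents)
		return -EIO;

	/*
	 * The offload driver is then handed only the mapped
	 * (sgl, dma_nents) pair -- no peeking into block layer objects.
	 */
	return dma_nents;
}
```

With this split, qedn_init_sgl() in patch 18 would only translate an
already-mapped scatterlist into device descriptors, rather than calling
blk_rq_map_sg() itself.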



More information about the Linux-nvme mailing list