[PATCH] nvme-rdma: fix in-capsule data send for chained sgls

Chao Leng lengchao at huawei.com
Thu May 27 18:08:13 PDT 2021



On 2021/5/28 4:40, Sagi Grimberg wrote:
> We have only 2 inline sg entries and we allow 4 sg entries for the send
> wr sge. Larger sgls are chained. However, when we build the in-capsule
> send wr sge, we iterate without taking into account that the sgl may be
> chained and still fit in-capsule (which can happen if the sgl has more
> than 2 entries but no more than 4).
> 
> Fix in-capsule data mapping to correctly iterate chained sgls.
> 
> Reported-by: Walker, Benjamin <benjamin.walker at intel.com>
> Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
> ---
>   drivers/nvme/host/rdma.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index 8d107b201f16..ed1bf214c544 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -1320,16 +1320,16 @@ static int nvme_rdma_map_sg_inline(struct nvme_rdma_queue *queue,
>   		int count)
>   {
>   	struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
> -	struct scatterlist *sgl = req->data_sgl.sg_table.sgl;
sgl still needs to be declared: the patch removes the local variable
definition above, but for_each_sg() uses sgl as its loop cursor (see the
sketch after the quoted diff).
>   	struct ib_sge *sge = &req->sge[1];
>   	u32 len = 0;
>   	int i;
>   
> -	for (i = 0; i < count; i++, sgl++, sge++) {
> +	for_each_sg(req->data_sgl.sg_table.sgl, sgl, count, i) {
>   		sge->addr = sg_dma_address(sgl);
>   		sge->length = sg_dma_len(sgl);
>   		sge->lkey = queue->device->pd->local_dma_lkey;
>   		len += sge->length;
> +		sge++;
>   	}
>   
>   	sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff);
> 
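
For reference, this is how the mapping loop would read with the comment
above addressed, i.e. with a local sgl declared for for_each_sg() to
iterate over. This is only a sketch of the quoted hunk with the
declaration added back, not the final patch; the tail of the function is
elided because it is not part of the quoted context:

	static int nvme_rdma_map_sg_inline(struct nvme_rdma_queue *queue,
			struct nvme_rdma_request *req, struct nvme_command *c,
			int count)
	{
		struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
		struct ib_sge *sge = &req->sge[1];
		struct scatterlist *sgl;	/* iterator used by for_each_sg() */
		u32 len = 0;
		int i;

		/* for_each_sg() follows chained sg lists; a plain sgl++ does not */
		for_each_sg(req->data_sgl.sg_table.sgl, sgl, count, i) {
			sge->addr = sg_dma_address(sgl);
			sge->length = sg_dma_len(sgl);
			sge->lkey = queue->device->pd->local_dma_lkey;
			len += sge->length;
			sge++;
		}

		sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff);
		/* ... remainder of the function as in the quoted diff ... */
	}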


