kernel null pointer at nvme_tcp_init_iter+0x7d/0xd0 [nvme_tcp]

Sagi Grimberg sagi at grimberg.me
Tue Feb 9 05:36:41 EST 2021



On 2/9/21 2:33 AM, Ming Lei wrote:
> On Tue, Feb 09, 2021 at 02:07:15AM -0800, Sagi Grimberg wrote:
>>
>>>>>
>>>>> One obvious error is that nr_segments is computed wrong.
>>>>>
>>>>> Yi, can you try the following patch?
>>>>>
>>>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>>>> index 881d28eb15e9..a393d99b74e1 100644
>>>>> --- a/drivers/nvme/host/tcp.c
>>>>> +++ b/drivers/nvme/host/tcp.c
>>>>> @@ -239,9 +239,14 @@ static void nvme_tcp_init_iter(struct nvme_tcp_request *req,
>>>>>     		offset = 0;
>>>>>     	} else {
>>>>>     		struct bio *bio = req->curr_bio;
>>>>> +		struct bio_vec bv;
>>>>> +		struct bvec_iter iter;
>>>>> +
>>>>> +		nsegs = 0;
>>>>> +		bio_for_each_bvec(bv, bio, iter)
>>>>> +			nsegs++;
>>>>>     		vec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
>>>>> -		nsegs = bio_segments(bio);
>>>>
>>>> This was exactly the patch that caused the issue.
>>>
>>> What was the issue you are talking about? Any link or commit hash?
>>
>> The commit that caused the crash is:
>> 0dc9edaf80ea ("nvme-tcp: pass multipage bvec to request iov_iter")
> 
> I can't find this commit in Linus' tree, :-(

It's not upstream yet; the original report was against:
Kernel repo:
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
Commit: 11f8b6fd0db9 ("Merge branch 'for-5.12/io_uring' into for-next")

>>> nvme-tcp builds an iov_iter(BVEC) from __bvec_iter_bvec(), so the
>>> segment number has to be the actual bvec count. But bio_segments()
>>> just returns the number of single-page segments, which is wrong for
>>> this iov_iter.
>>
>> That is what I thought, but it's causing a crash, and it was fine
>> with bio_segments(). So I'm trying to understand why that is.
> 
> I tested this patch, and it works just fine.

Me too, but Yi hits this crash; I even recompiled with his
config, but still no luck reproducing it.
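
To make the counting difference Ming describes concrete, here is an
untested sketch (the helper is mine, purely for illustration): for the
same bio, bio_segments() counts single-page segments while a
bio_for_each_bvec() walk counts bvec table entries, and only the
latter matches the table that __bvec_iter_bvec() hands to the
iov_iter:

static void compare_segment_counts(struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;
	unsigned int nr_bvecs = 0;

	/* walk the (possibly multipage) bvec entries */
	bio_for_each_bvec(bv, bio, iter)
		nr_bvecs++;

	/* the two counts diverge once a bvec spans several pages */
	pr_info("single-page segments=%u, multipage bvecs=%u\n",
		bio_segments(bio), nr_bvecs);
}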

>>> Please see the same usage in lo_rw_aio().
>>
>> nvme-tcp works on a per-bio basis to avoid bvec allocation
>> in the data path. Hence the iterator is fed directly by
>> the bio's bvec table and is re-initialized on every bio
>> spanned by the request.
> 
> Yeah, I know that. What I meant is that rq_for_each_bvec() is used
> to figure out the bvec count in the loop driver, which may also feed
> the bio bvecs directly to the fs via iov_iter, similar to nvme-tcp.
> 
> The difference is that loop switches to allocating a new bvec
> table and copies the bios' bvecs into it when bios are merged.

So nvme-tcp now uses bio_for_each_bvec(), which seems appropriate;
we just need to understand what is causing this.
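
For reference, a rough, untested paraphrase of the lo_rw_aio()
pattern Ming is pointing at (the helper name and error handling are
mine, this is not the actual loop code): count the multipage bvecs
across the whole request, and only when the request spans more than
one bio, flatten them into a freshly allocated table so the iov_iter
sees one contiguous array:

static int init_rq_bvec_iter(struct request *rq, struct iov_iter *iter,
			     unsigned int dir)
{
	struct req_iterator rq_iter;
	struct bio_vec tmp, *bvec;
	int nr_bvec = 0;

	/* count multipage bvecs across all bios of the request */
	rq_for_each_bvec(tmp, rq, rq_iter)
		nr_bvec++;

	if (rq->bio != rq->biotail) {
		struct bio_vec *p;

		/* multi-bio request: copy the bvecs into one flat table
		 * (freeing it again is the caller's problem in this sketch)
		 */
		bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_NOIO);
		if (!bvec)
			return -ENOMEM;
		p = bvec;
		rq_for_each_bvec(tmp, rq, rq_iter)
			*p++ = tmp;
	} else {
		/* single bio: feed its bvec table directly, as nvme-tcp
		 * does per bio (iov_offset handling omitted here)
		 */
		bvec = __bvec_iter_bvec(rq->bio->bi_io_vec, rq->bio->bi_iter);
	}

	iov_iter_bvec(iter, dir, bvec, nr_bvec, blk_rq_bytes(rq));
	return 0;
}

Since nvme-tcp only ever takes the second path, per bio, getting the
per-bio bvec count right is what matters there.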


