ioccsz and iorcsz check failing

Sagi Grimberg sagi at grimberg.me
Mon Dec 18 01:59:32 PST 2023



On 12/15/23 20:08, Keith Busch wrote:
> On Fri, Dec 15, 2023 at 03:48:04PM +0100, Daniel Wagner wrote:
>> Since commit 2fcd3ab39826 ("nvme-fabrics: check ioccsz and iorcsz") my
>> testing fails with these checks when trying to connect a remote target
>> (Linux nvmet) via nvme-tcp. Looking at the TCP frame I see these values
>> are 0 on the wire.
>>
>> When running blktests with nvme-tcp via the loop back device all is
>> good.
>>
>> When running blktest with nvme-fc via the loop back device the first
>> check fails because ioccsz is 0. I've added a bunch of debug prints:
>>
>>    nvme nvme0: I/O queue command capsule supported size 0 < 4
>>
>>    nvmet: nvmet_execute_identify:687 cns 1
>>    nvmet: nvmet_execute_identify_ctrl:469 ioccsz 4 iorcsz 1
>>    nvmet: nvmet_copy_to_sgl:98 buf ffff8881348a6000 len 4096
>>
>> So this part looks good to my eyes. Not sure where the problem could be.
>> Posting in case someone spots the problem, as I can't really make
>> sense of it at the moment.
> 
> Weird. I optimistically thought I'd find a problem, but nope, I am
> confused. The code looks like it's doing the right thing, and your
> target side prints appear to confirm that, but host sees a different
> result. Is anything else in the identify wrong, or is it just these
> fabrics fields?

I don't see any issue in the code either.

I'm wondering if this is an issue with MSG_SPLICE_PAGES that is only
now being uncovered?

Daniel, can you try clearing MSG_SPLICE_PAGES in nvmet_try_send_data()?
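
A minimal sketch of that experiment, assuming the kernel in question has
the MSG_SPLICE_PAGES sendmsg path in drivers/nvme/target/tcp.c (added in
6.5); the exact line to change varies by kernel version, so this is just
illustrative, not a proposed fix:

```c
/* drivers/nvme/target/tcp.c, inside nvmet_try_send_data() -- debug only.
 * Assumption: msg.msg_flags is initialized with MSG_SPLICE_PAGES set.
 * Masking it out makes kernel_sendmsg()/sock_sendmsg() copy the data
 * pages rather than splice them, which isolates whether the zero
 * ioccsz/iorcsz seen on the wire is a splice-path problem.
 */
msg.msg_flags &= ~MSG_SPLICE_PAGES;
```

If the identify data then arrives intact on the host, the splice path is
the likely culprit; if it is still zeroed, the problem is elsewhere.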


