[PATCH 01/10] nvmet: zeroout id-ns buffer for invalid nsid

Chaitanya Kulkarni Chaitanya.Kulkarni at wdc.com
Tue Feb 2 23:31:06 EST 2021


On 2/2/21 01:38, Christoph Hellwig wrote:
> On Tue, Feb 02, 2021 at 12:12:33AM +0000, Chaitanya Kulkarni wrote:
>>>> +	req->ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
>>>> +	if (!req->ns) {
>>>> +		status = NVME_SC_INVALID_NS;
>>>> +		/*
>>>> +		 * According to spec : If the specified namespace is
>>>> +		 * an unallocated NSID then the controller returns a zero filled
>>>> +		 * data structure. Also don't override the error status as invalid
>>>> +		 * namespace takes priority over the failed zeroout buffer case.
>>>> +		 */
>>>> +		nvmet_zero_sgl(req, 0, sizeof(*id));
>>>> +		goto out;
>>>> +	}
>>>> +
>>>>  	id = kzalloc(sizeof(*id), GFP_KERNEL);
>>>>  	if (!id) {
>>>>  		status = NVME_SC_INTERNAL;
>>>>  		goto out;
>>>>  	}
>>>>  
>>>> -	/* return an all zeroed buffer if we can't find an active namespace */
>>>> -	req->ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
>>>> -	if (!req->ns) {
>>>> -		status = NVME_SC_INVALID_NS;
>>>> -		goto done;
>>> I think all we need to do is to remove the status assignment here.
>>> While your patch avoids the memory allocation for this case, it isn't
>>> really the fast path so I'd rather avoid the extra code.
>>>
>> Sorry, I didn't understand this comment. If we remove the status assignment
>> then the host will not get the right error. That is the bug we initially
>> fixed with bffcd507780e.
> Actually, looking at it, bffcd507780e is wrong.  Identify Namespace
> to a namespace that is not active should return a status of 0 and
> a zeroed structure.

I didn't find anything in the spec that says we need to return a
status of 0 for an unallocated [1] NSID.

I've checked with two different PCIe controllers: for id-ns with an invalid
nsid they both returned NVME_SC_INVALID_NS, and so does QEMU :-

PCIe:-

# nvme list | tr -s ' ' ' '
Node SN Model Namespace Usage Format FW Rev
/dev/nvme0n1 191515805865 XXXXXXXXXXX-00SJG0 1 250.06 GB / 250.06 GB 512 B + 0 B 102000XX
/dev/nvme1n1 191912800089 XXXXXXXXXXX-00SJG0 1 2.00 TB / 2.00 TB 512 B + 0 B 102430XX

# nvme id-ns /dev/nvme0n1 -n 100
NVMe Status:INVALID_NS(400b) NSID:100

# nvme id-ns /dev/nvme1n1 -n 100
NVMe Status:INVALID_NS(400b) NSID:100

QEMU:-

# nvme list | tr -s ' ' ' '
Node SN Model Namespace Usage Format FW Rev
/dev/nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
# nvme id-ns /dev/nvme0 -n 100
NVMe status: INVALID_NS: The namespace or the format of that namespace
is invalid(0x400b)

Without bffcd507780e the target was returning success and a zeroed buffer [2].
Now with bffcd507780e the target returns NVME_SC_INVALID_NS, but the buffer
is not zeroed (the zeroing was there in the original patch [3]; then we
changed it to not copy the buffer [4]). The call to nvmet_zero_sgl() above
in this patch fixes that.

Please correct me if I missed something.
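To make the intended behavior concrete, here is a small userspace sketch (not
the nvmet code itself; the struct, helper names, and the lookup stub are
illustrative stand-ins) of the control flow this patch implements: look up the
namespace first, and on a miss zero-fill the response buffer while keeping the
INVALID_NS status, so the host sees both the spec-mandated zeroed structure
and the right error.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NVME_SC_SUCCESS    0x00
#define NVME_SC_INVALID_NS 0x0b  /* status code field only; 0x400b on the
                                    wire includes the DNR bit */

struct id_ns { uint8_t raw[4096]; };  /* stand-in for struct nvme_id_ns */

/* stand-in for nvmet_find_namespace(): only NSID 1 is "active" here */
static bool find_namespace(uint32_t nsid)
{
	return nsid == 1;
}

static uint16_t identify_ns(uint32_t nsid, struct id_ns *out)
{
	if (!find_namespace(nsid)) {
		/* Unallocated NSID: return a zero-filled data structure,
		 * and do not drop the INVALID_NS status. */
		memset(out, 0, sizeof(*out));
		return NVME_SC_INVALID_NS;
	}
	memset(out, 0, sizeof(*out));
	out->raw[0] = 0xAA;  /* pretend to fill in the real id fields */
	return NVME_SC_SUCCESS;
}
```

The key ordering point is that the buffer is zeroed unconditionally on the
miss path, mirroring what nvmet_zero_sgl() does in the patch, instead of
leaving whatever was in the host's buffer untouched.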
>> Regarding the fast path: if the system is under pressure even a single
>> memory allocation can be costly, especially when the host tries to do
>> a read of id-ns. Is there any reason why we should not consider this
>> scenario?
> No sensible host gets there.  I'd rather keep the code simple.
>

[1] Unallocated NSIDs do not refer to any namespaces that exist in the
NVM subsystem.
[2] http://lists.infradead.org/pipermail/linux-nvme/2021-January/021954.html
[3] http://lists.infradead.org/pipermail/linux-nvme/2021-January/021954.html
[4] http://lists.infradead.org/pipermail/linux-nvme/2021-January/021979.html
