[Bug Report] nvme-cli commands fail to open head disk node and print an error

Nilay Shroff nilay at linux.ibm.com
Wed Mar 27 23:30:07 PDT 2024


Hi,

We observed that nvme-cli commands (nvme list, nvme list-subsys, nvme show-topology, etc.) print an error message prior to printing the actual output.

Notes and observations:
=======================
This issue is observed on the latest Linus kernel tree (v6.9-rc1). It was working well on kernel v6.8.

Test details:
=============
I have an NVMe disk which has two controllers and two namespaces, and it is multipath capable:

# nvme list-ns /dev/nvme0 
[   0]:0x1
[   1]:0x3

One of the namespaces has zero disk capacity:

# nvme id-ns /dev/nvme0 -n 0x3
NVME Identify Namespace 3:
nsze    : 0
ncap    : 0
nuse    : 0
nsfeat  : 0x14
nlbaf   : 4
flbas   : 0
<snip>

The other namespace has non-zero disk capacity:

# nvme id-ns /dev/nvme0 -n 0x1 
NVME Identify Namespace 1:
nsze    : 0x156d56
ncap    : 0x156d56
nuse    : 0
nsfeat  : 0x14
nlbaf   : 4
flbas   : 0
<snip>
 
6.8 kernel:
----------

# nvme list -v 

Subsystem        Subsystem-NQN                                                                                    Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys0     nqn.2019-10.com.kioxia:KCM7DRUG1T92:3D60A04906N1                                                 nvme0, nvme2

Device   SN                   MN                                       FR       TxPort Address        Slot   Subsystem    Namespaces      
-------- -------------------- ---------------------------------------- -------- ------ -------------- ------ ------------ ----------------
nvme0    3D60A04906N1         1.6TB NVMe Gen4 U.2 SSD IV               REV.CAS2 pcie   0524:28:00.0          nvme-subsys0 nvme0n1
nvme2    3D60A04906N1         1.6TB NVMe Gen4 U.2 SSD IV               REV.CAS2 pcie   0584:28:00.0          nvme-subsys0 

Device       Generic      NSID       Usage                      Format           Controllers     
------------ ------------ ---------- -------------------------- ---------------- ----------------
/dev/nvme0n1 /dev/ng0n1   0x1          0.00   B /   5.75  GB      4 KiB +  0 B   nvme0

As we can see above, the namespace (0x3) with zero disk capacity is not listed in the output.
Furthermore, we don't create a head disk node (i.e. /dev/nvmeXnY) for a namespace with zero
disk capacity, and there is no entry for such a disk under /sys/block/.
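
For reference, the absence can be checked directly from the shell (the paths below use the
device names from this report; they will differ on other systems). On 6.8, all three paths
report "No such file or directory":

# ls -ld /sys/block/nvme0n3 /sys/block/nvme0c0n3
# ls -l /dev/nvme0n3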

6.9-rc1 kernel:
---------------

# nvme list -v 

Failed to open ns nvme0n3, errno 2 <== error is printed first followed by output

Subsystem        Subsystem-NQN                                                                                    Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys0     nqn.2019-10.com.kioxia:KCM7DRUG1T92:3D60A04906N1                                                 nvme0, nvme2

Device   SN                   MN                                       FR       TxPort Address        Slot   Subsystem    Namespaces      
-------- -------------------- ---------------------------------------- -------- ------ -------------- ------ ------------ ----------------
nvme0    3D60A04906N1         1.6TB NVMe Gen4 U.2 SSD IV               REV.CAS2 pcie   0524:28:00.0          nvme-subsys0 nvme0n1
nvme2    3D60A04906N1         1.6TB NVMe Gen4 U.2 SSD IV               REV.CAS2 pcie   0584:28:00.0          nvme-subsys0 

Device       Generic      NSID       Usage                      Format           Controllers     
------------ ------------ ---------- -------------------------- ---------------- ----------------
/dev/nvme0n1 /dev/ng0n1   0x1          0.00   B /   5.75  GB      4 KiB +  0 B   nvme0


# nvme list-subsys 

Failed to open ns nvme0n3, errno 2 <== error is printed first followed by output

nvme-subsys0 - NQN=nqn.2019-10.com.kioxia:KCM7DRUG1T92:3D60A04906N1
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:41528538-e8ad-4eaf-84a7-9c552917d988
               iopolicy=numa
\
 +- nvme2 pcie 0584:28:00.0 live
 +- nvme0 pcie 0524:28:00.0 live

# nvme show-topology

Failed to open ns nvme0n3, errno 2 <== error is printed first followed by output

nvme-subsys0 - NQN=nqn.2019-10.com.kioxia:KCM7DRUG1T92:3D60A04906N1
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:41528538-e8ad-4eaf-84a7-9c552917d988
               iopolicy=numa
\
 +- ns 1
 \
  +- nvme0 pcie 0524:28:00.0 live optimized

From the above output it's evident that nvme-cli attempts to open the disk node /dev/nvme0n3,
but that entry doesn't exist. Apparently, on the 6.9-rc1 kernel, even though the head disk node
/dev/nvme0n3 doesn't exist, the relevant entries /sys/block/nvme0c0n3 and /sys/block/nvme0n3 are present.
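
Running the same checks on 6.9-rc1 (again with the names from this report) shows the mismatch:
the sysfs entries are listed, while the device node is absent, so any attempt to open it fails
with ENOENT (errno 2):

# ls -ld /sys/block/nvme0n3 /sys/block/nvme0c0n3
# ls -l /dev/nvme0n3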

As I understand it, the nvme-cli commands typically build the NVMe subsystem topology first
before printing the output. In this case, nvme-cli finds nvme0c0n3 and nvme0n3 under
/sys/block and so assumes that a corresponding disk node entry /dev/nvme0n3 is present;
however, when nvme-cli attempts to open /dev/nvme0n3 the open fails, causing the observed
symptom.
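
That sequence can be approximated with a small shell loop (this is only a sketch of the
pattern, not nvme-cli's actual code; it assumes the head entries are the nvmeXnY names under
/sys/block, i.e. the ones without the cY controller part):

for head in /sys/block/nvme[0-9]*n[0-9]*; do
    name=$(basename "$head")
    case "$name" in *c[0-9]*) continue ;; esac    # skip the per-path nvmeXcYnZ entries
    [ -e /dev/"$name" ] || echo "missing /dev/$name (open would fail with ENOENT/errno 2)"
done

On 6.8 this prints nothing for the configuration above, while on 6.9-rc1 it flags nvme0n3.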

Git bisect:
===========
The git bisect points to the below commit:

commit 46e7422cda8482aa3074c9caf4c224cf2fb74d71 (HEAD)
Author: Christoph Hellwig <hch at lst.de>
Date:   Mon Mar 4 07:04:54 2024 -0700

    nvme: move common logic into nvme_update_ns_info
    
    nvme_update_ns_info_generic and nvme_update_ns_info_block share a
    fair amount of logic related to not fully supported namespace
    formats and updating the multipath information.  Move this logic
    into the common caller.
    
    Signed-off-by: Christoph Hellwig <hch at lst.de>
    Signed-off-by: Keith Busch <kbusch at kernel.org>
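
A bisect like this is typically run between the last known-good and the first known-bad tags;
a sketch of the procedure (not necessarily the exact steps used here), with v6.8 good and
v6.9-rc1 bad as noted above:

# git bisect start
# git bisect bad v6.9-rc1
# git bisect good v6.8
<build, boot and re-run "nvme list -v" at each step, then mark the result>
# git bisect good    (or "git bisect bad", until the first bad commit is reported)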


In 6.9-rc1, it seems that with the above code restructuring the head disk node nvmeXnY is now
hidden from /dev, while the corresponding disk names nvmeXcYnZ and nvmeXnY do exist under
/sys/block/. On the 6.8 kernel, we create neither the disk node under /dev nor the
corresponding disk entries under /sys/block when the disk capacity is zero.

Thanks,
--Nilay
