[PATCH] Revert "nvme: verify MNAN value if ANA is enabled"

Chaitanya Kulkarni Chaitanya.Kulkarni at wdc.com
Thu Jun 10 14:01:14 PDT 2021


On 6/10/21 00:45, Daniel Wagner wrote:
>> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
>> index cd60a8184d04..a8ec377bb68d 100644
>> --- a/drivers/nvme/target/admin-cmd.c
>> +++ b/drivers/nvme/target/admin-cmd.c
>> @@ -394,7 +394,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
>>         id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
>>  
>>         id->nn = cpu_to_le32(ctrl->subsys->max_nsid);
>> -       id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
>> +       id->mnan = cpu_to_le32(ctrl->subsys->max_nsid);
>>         id->oncs = cpu_to_le16(NVME_CTRL_ONCS_DSM |
>>                         NVME_CTRL_ONCS_WRITE_ZEROES);
> This looks like the right fix for the upper limit problem.
> I can see the tests are still failing with
>
>   nvme nvme0: Invalid MNAN value 0
>
> which indicates there is another problem in the nvmet core.
>
> I'll try to find it.
>
> Thanks,
> Daniel
>


With the following two patches, the blktests test cases pass for me.

root at vm nvme (nvme-5.14) # git log -2
commit cd5d6d55a405f97ad5c31a738976eea0e16ebe20 (HEAD -> nvme-5.14)
Author: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
Date:   Wed Jun 9 20:07:56 2021 -0700

    nvmet: set the mnan value to nn
   
    From the spec:
    "MNAN: If the controller supports Asymmetric Namespace Access
    Reporting, then this field shall be set to a non-zero value that is
    less than or equal to the NN value."

    In nvmet_execute_identify_ctrl(), when building the identify
    controller data structure, set the MNAN value to NN to follow the
    spec.
   
    Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>

commit 1368a1a5e7566d726bf74234d05895c3f0d54690
Author: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
Date:   Wed Jun 9 20:07:00 2021 -0700

    nvme: fix the comparison in the mnan check
   
    The existing check for a valid MNAN value results in an error when
    ctrl->max_namespaces is set to 1024 by the NVMeOF target: since
    !1024 == 0, evaluation falls through to the next comparison,
    1024 > id->nn, which is always true until the target has 1024
    namespaces.
   
    This patch fixes the comparison.
   
    Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
root at vm nvme (nvme-5.14) # cat 0001-nvme-fix-the-comparison-in-the-mnan-check.patch
From f1f0947ca495b7d9b8721411e407ebf9efad0df9 Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
Date: Wed, 9 Jun 2021 20:07:00 -0700
Subject: [PATCH 1/2] nvme: fix the comparison in the mnan check

The existing check for a valid MNAN value results in an error when
ctrl->max_namespaces is set to 1024 by the NVMeOF target: since
!1024 == 0, evaluation falls through to the next comparison,
1024 > id->nn, which is always true until the target has 1024
namespaces.

This patch fixes the comparison.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/host/multipath.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 23573fe3fc7d..4277f1554bd5 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -813,7 +813,7 @@ int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
         !(ctrl->subsys->cmic & NVME_CTRL_CMIC_ANA))
         return 0;
 
-    if (!ctrl->max_namespaces ||
+    if (ctrl->max_namespaces &&
         ctrl->max_namespaces > le32_to_cpu(id->nn)) {
         dev_err(ctrl->device,
             "Invalid MNAN value %u\n", ctrl->max_namespaces);
-- 
2.22.1

root at vm nvme (nvme-5.14) # cat 0002-nvmet-set-the-mnan-value-to-nn.patch
From d18c2c851686c4e5fa8ddb457e6726cafb86b85f Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
Date: Wed, 9 Jun 2021 20:07:56 -0700
Subject: [PATCH 2/2] nvmet: set the mnan value to nn

From the spec:
"MNAN: If the controller supports Asymmetric Namespace Access Reporting,
then this field shall be set to a non-zero value that is less than or
equal to the NN value."

In nvmet_execute_identify_ctrl(), when building the identify controller
data structure, set the MNAN value to NN to follow the spec.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
---
 drivers/nvme/target/admin-cmd.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index cd60a8184d04..88e85d9a5be8 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -394,7 +394,11 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
     id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
 
     id->nn = cpu_to_le32(ctrl->subsys->max_nsid);
-    id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
+    if (IS_ENABLED(CONFIG_NVME_MULTIPATH))
+        id->mnan = cpu_to_le32(ctrl->subsys->max_nsid);
+    else
+        id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
+
     id->oncs = cpu_to_le16(NVME_CTRL_ONCS_DSM |
             NVME_CTRL_ONCS_WRITE_ZEROES);
 
-- 
2.22.1

root at vm nvme (nvme-5.14) # ./compile_nvme.sh
+ umount /mnt/nvme0n1
+ clear_dmesg
umount: /mnt/nvme0n1: not mounted
+ modprobe -r nvme-fabrics
+ modprobe -r nvme_loop
+ modprobe -r nvmet
+ modprobe -r nvme
+ sleep 1
+ modprobe -r nvme-core
+ lsmod
+ grep nvme
+ git apply ./all-fixes.diff
+ sleep 1
++ nproc
+ make -j 64 M=drivers/nvme/ modules
  CC [M]  drivers/nvme//target/loop.o
  LD [M]  drivers/nvme//target/nvme-loop.o
  MODPOST drivers/nvme//Module.symvers
  LD [M]  drivers/nvme//target/nvme-loop.ko
+ HOST=drivers/nvme/host
+ TARGET=drivers/nvme/target
++ uname -r
+ HOST_DEST=/lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/host/
++ uname -r
+ TARGET_DEST=/lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/target/
+ cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko
drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko
drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko
/lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/host//
+ cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko
drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko
drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko
/lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/target//
+ ls -lrth /lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/host/
/lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/target//
/lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/host/:
total 9.1M
-rw-r--r--. 1 root root 4.0M Jun 10 13:50 nvme-core.ko
-rw-r--r--. 1 root root 639K Jun 10 13:50 nvme-fabrics.ko
-rw-r--r--. 1 root root 1.2M Jun 10 13:50 nvme-fc.ko
-rw-r--r--. 1 root root 1.1M Jun 10 13:50 nvme.ko
-rw-r--r--. 1 root root 1.2M Jun 10 13:50 nvme-rdma.ko
-rw-r--r--. 1 root root 1.1M Jun 10 13:50 nvme-tcp.ko

/lib/modules/5.13.0-rc3nvme+/kernel/drivers/nvme/target//:
total 8.3M
-rw-r--r--. 1 root root 718K Jun 10 13:50 nvme-fcloop.ko
-rw-r--r--. 1 root root 623K Jun 10 13:50 nvme-loop.ko
-rw-r--r--. 1 root root 1.1M Jun 10 13:50 nvmet-fc.ko
-rw-r--r--. 1 root root 4.0M Jun 10 13:50 nvmet.ko
-rw-r--r--. 1 root root 1.1M Jun 10 13:50 nvmet-rdma.ko
-rw-r--r--. 1 root root 868K Jun 10 13:50 nvmet-tcp.ko
+ modprobe nvme
+ git co drivers/nvme/target/loop.c
Updated 1 path from the index
root at vm nvme (nvme-5.14) # cdblktests
root at vm blktests (master) # ./check nvme
nvme/002 (create many subsystems and test discovery)         [passed]
    runtime  40.408s  ...  40.579s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.179s  ...  10.180s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  1.786s  ...  1.794s
nvme/005 (reset local loopback target)                       [passed]
    runtime  2.241s  ...  2.233s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.147s  ...  0.152s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.097s  ...  0.102s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  1.793s  ...  1.792s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  1.749s  ...  1.751s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  38.113s  ...  31.199s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  296.910s  ...  302.248s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  8.823s  ...  9.221s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  23.517s  ...  24.288s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  22.787s  ...  20.386s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  20.599s  ...  23.212s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
    runtime  21.616s  ...  21.016s
nvme/017 (create/delete many file-ns and test discovery)     [passed]
    runtime    ...  20.721s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  1.777s  ...  1.752s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  1.795s  ...  1.805s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  1.756s  ...  1.746s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  1.734s  ...  1.745s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  2.168s  ...  2.181s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  1.782s  ...  1.776s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  1.730s  ...  1.743s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  1.719s  ...  1.767s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  1.718s  ...  1.732s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  1.754s  ...  1.748s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  1.753s  ...  1.776s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  2.280s  ...  2.244s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.425s  ...  0.401s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  6.134s  ...  5.996s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
    runtime  0.075s  ...  0.059s
root at vm blktests (master) #