[PATCH] nvme-core: mark passthru requests RQF_QUIET flag

Alan Adamson alan.adamson at oracle.com
Mon Apr 11 11:31:55 PDT 2022



> On Apr 8, 2022, at 5:10 PM, Chaitanya Kulkarni <chaitanyak at nvidia.com> wrote:
> 
> 
>>> Can you please share a command line for "nvme admin-passthru"
>>> where this patch suppresses messages?
>> 
>> I have the NVME Fault Injector configured:
>> 
>> echo 0x286 > /sys/kernel/debug/${ctrl_dev}/fault_inject/status
>> echo 1000 > /sys/kernel/debug/${ctrl_dev}/fault_inject/times
>> echo 100 > /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
>> 
>> nvme admin-passthru /dev/${ctrl_dev} --opcode=06 --data-len=4096 --cdw10=1 -r
>> 
>> echo 0 >  /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
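>> 
>> (A minimal annotated version of the same setup; the comments are my
>> reading of the knobs rather than anything authoritative: "status" is
>> the NVMe status code the injector completes requests with, "times"
>> and "probability" are the generic fault-injection attributes.)
>> 
>> echo 0x286 > /sys/kernel/debug/${ctrl_dev}/fault_inject/status       # NVME_SC_ACCESS_DENIED (sct 0x2 / sc 0x86)
>> echo 1000  > /sys/kernel/debug/${ctrl_dev}/fault_inject/times        # inject at most 1000 failures
>> echo 100   > /sys/kernel/debug/${ctrl_dev}/fault_inject/probability  # fail 100% of requests while armed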
>> 
> 
> I was able to produce the same admin-passthru error messages with my patch.
> See the detailed execution below with the script and the log; can you please
> tell me what is missing?

I reapplied your patch and reran my test and all looks good.  I must have applied the
incorrect patch last week.

Alan



> 
>>> -ck
>>> 
>>> 
>> 
> 
> * Without this patch I get 3 error messages with fault injection
> enabled :-
> 1. [ 1743.353266] nvme1: Identify(0x6), Invalid Field in
>    Command (sct 0x0 / sc 0x2) MORE DNR
> 
> 2. [ 1744.370690] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 1000
> [ 1744.370698] CPU: 41 PID: 389 Comm: kworker/41:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1744.370702] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1744.370704] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1744.370712] Call Trace:
> [ 1744.370715]  <TASK>
> [ 1744.370717]  dump_stack_lvl+0x48/0x5e
> [ 1744.370724]  should_fail.cold+0x32/0x37
> [ 1744.370729]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1744.370741]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1744.370745]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1744.370754]  process_one_work+0x1af/0x380
> [ 1744.370758]  worker_thread+0x50/0x3a0
> [ 1744.370761]  ? rescuer_thread+0x370/0x370
> [ 1744.370763]  kthread+0xe7/0x110
> [ 1744.370767]  ? kthread_complete_and_exit+0x20/0x20
> [ 1744.370770]  ret_from_fork+0x22/0x30
> [ 1744.370776]  </TASK>
> [ 1744.370783] nvme1: Identify(0x6), Access Denied
>                (sct 0x2 / sc 0x86) DNR
> 
> 3. [1744.375045] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 999
> [ 1744.375052] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1744.375056] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1744.375058] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1744.375066] Call Trace:
> [ 1744.375070]  <TASK>
> [ 1744.375072]  dump_stack_lvl+0x48/0x5e
> [ 1744.375079]  should_fail.cold+0x32/0x37
> [ 1744.375084]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1744.375096]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1744.375100]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1744.375108]  process_one_work+0x1af/0x380
> [ 1744.375113]  worker_thread+0x50/0x3a0
> [ 1744.375115]  ? rescuer_thread+0x370/0x370
> [ 1744.375118]  kthread+0xe7/0x110
> [ 1744.375121]  ? kthread_complete_and_exit+0x20/0x20
> [ 1744.375124]  ret_from_fork+0x22/0x30
> [ 1744.375130]  </TASK>
> [ 1744.375148] nvme1: Identify(0x6), Access Denied
>                (sct 0x2 / sc 0x86) DNR
> 
> 
> * With this patch I only get two messages from fault injection, both coming
> from nvme-admin-passthru and none from the internal passthru :-
> 
> 1. [ 1765.175570] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 1000
> [ 1765.175579] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1765.175583] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1765.175585] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1765.175593] Call Trace:
> [ 1765.175596]  <TASK>
> [ 1765.175599]  dump_stack_lvl+0x48/0x5e
> [ 1765.175605]  should_fail.cold+0x32/0x37
> [ 1765.175610]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1765.175623]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1765.175627]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1765.175636]  process_one_work+0x1af/0x380
> [ 1765.175640]  worker_thread+0x50/0x3a0
> [ 1765.175643]  ? rescuer_thread+0x370/0x370
> [ 1765.175645]  kthread+0xe7/0x110
> [ 1765.175648]  ? kthread_complete_and_exit+0x20/0x20
> [ 1765.175652]  ret_from_fork+0x22/0x30
> [ 1765.175658]  </TASK>
> [ 1765.175664] nvme1: Identify(0x6), Access Denied
>                (sct 0x2 / sc 0x86) DNR
> 2. [ 1765.179829] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 999
> [ 1765.179835] CPU: 44 PID: 9897 Comm: kworker/44:0 Tainted: G 
>  OE     5.17.0-rc2nvme+ #68
> [ 1765.179839] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1765.179841] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1765.179850] Call Trace:
> [ 1765.179853]  <TASK>
> [ 1765.179855]  dump_stack_lvl+0x48/0x5e
> [ 1765.179862]  should_fail.cold+0x32/0x37
> [ 1765.179867]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1765.179879]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1765.179884]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1765.179892]  process_one_work+0x1af/0x380
> [ 1765.179897]  ? rescuer_thread+0x370/0x370
> [ 1765.179899]  worker_thread+0x50/0x3a0
> [ 1765.179902]  ? rescuer_thread+0x370/0x370
> [ 1765.179903]  kthread+0xe7/0x110
> [ 1765.179907]  ? kthread_complete_and_exit+0x20/0x20
> [ 1765.179911]  ret_from_fork+0x22/0x30
> [ 1765.179917]  </TASK>
> [ 1765.179923] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
> 
> 
> * Detailed test log with an NVMeoF nvme-loop controller configured
> and fault injection enabled, run with and without this patch to mask
> internal passthru. It shows that the internal passthru error message
> is masked with this patch and present without it; the
> nvme-admin-passthru error message is present in both cases :-
> 
> nvme (nvme-5.18) # sh error_inject.sh
> commit 7bec02cef3d11f3d3a80bfa8739f790377bac8d6 (HEAD -> nvme-5.18)
> Merge: 226d991feef9 a4a6f3c8f61c
> Author: Chaitanya Kulkarni <kch at nvidia.com>
> Date:   Fri Apr 8 16:57:45 2022 -0700
> 
>     Merge branch 'nvme-5.18' of git://git.infradead.org/nvme into nvme-5.18
> + umount /mnt/nvme0n1
> + clear_dmesg
> ./compile_nvme.sh: line 3: clear_dmesg: command not found
> umount: /mnt/nvme0n1: no mount point specified.
> + ./delete.sh
> + NQN=testnqn
> + nvme disconnect -n testnqn
> Failed to scan topoplogy: No such file or directory
> 
> real	0m0.002s
> user	0m0.000s
> sys	0m0.002s
> + rm -fr '/sys/kernel/config/nvmet/ports/1/subsystems/*'
> + rmdir /sys/kernel/config/nvmet/ports/1
> rmdir: failed to remove '/sys/kernel/config/nvmet/ports/1': No such file 
> or directory
> + for subsys in /sys/kernel/config/nvmet/subsystems/*
> + for ns in ${subsys}/namespaces/*
> + echo 0
> ./delete.sh: line 14: 
> /sys/kernel/config/nvmet/subsystems/*/namespaces/*/enable: No such file 
> or directory
> + rmdir '/sys/kernel/config/nvmet/subsystems/*/namespaces/*'
> rmdir: failed to remove 
> '/sys/kernel/config/nvmet/subsystems/*/namespaces/*': No such file or 
> directory
> + rmdir '/sys/kernel/config/nvmet/subsystems/*'
> rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*': No such 
> file or directory
> + rmdir 'config/nullb/nullb*'
> rmdir: failed to remove 'config/nullb/nullb*': No such file or directory
> + umount /mnt/nvme0n1
> umount: /mnt/nvme0n1: no mount point specified.
> + umount /mnt/backend
> umount: /mnt/backend: not mounted.
> + modprobe -r nvme_loop
> + modprobe -r nvme_fabrics
> + modprobe -r nvmet
> + modprobe -r nvme
> + modprobe -r null_blk
> + tree /sys/kernel/config
> /sys/kernel/config
> 
> 0 directories, 0 files
> + modprobe -r nvme-fabrics
> + modprobe -r nvme_loop
> + modprobe -r nvmet
> + modprobe -r nvme
> + sleep 1
> + modprobe -r nvme-core
> + lsmod
> + grep nvme
> + sleep 1
> + git diff
> + sleep 1
> ++ nproc
> + make -j 48 M=drivers/nvme/target/ clean
> ++ nproc
> + make -j 48 M=drivers/nvme/ modules
>   CC [M]  drivers/nvme/target/core.o
>   CC [M]  drivers/nvme/target/configfs.o
>   CC [M]  drivers/nvme/target/admin-cmd.o
>   CC [M]  drivers/nvme/target/fabrics-cmd.o
>   CC [M]  drivers/nvme/target/discovery.o
>   CC [M]  drivers/nvme/target/io-cmd-file.o
>   CC [M]  drivers/nvme/target/io-cmd-bdev.o
>   CC [M]  drivers/nvme/target/passthru.o
>   CC [M]  drivers/nvme/target/zns.o
>   CC [M]  drivers/nvme/target/trace.o
>   CC [M]  drivers/nvme/target/loop.o
>   CC [M]  drivers/nvme/target/rdma.o
>   CC [M]  drivers/nvme/target/fc.o
>   CC [M]  drivers/nvme/target/fcloop.o
>   CC [M]  drivers/nvme/target/tcp.o
>   CC [M]  drivers/nvme/host/core.o
>   LD [M]  drivers/nvme/target/nvme-loop.o
>   LD [M]  drivers/nvme/target/nvme-fcloop.o
>   LD [M]  drivers/nvme/target/nvmet.o
>   LD [M]  drivers/nvme/target/nvmet-tcp.o
>   LD [M]  drivers/nvme/target/nvmet-fc.o
>   LD [M]  drivers/nvme/target/nvmet-rdma.o
>   LD [M]  drivers/nvme/host/nvme-core.o
>   MODPOST drivers/nvme/Module.symvers
>   LD [M]  drivers/nvme/host/nvme-core.ko
>   CC [M]  drivers/nvme/target/nvme-fcloop.mod.o
>   CC [M]  drivers/nvme/target/nvme-loop.mod.o
>   CC [M]  drivers/nvme/target/nvmet-fc.mod.o
>   CC [M]  drivers/nvme/target/nvmet-rdma.mod.o
>   CC [M]  drivers/nvme/target/nvmet-tcp.mod.o
>   CC [M]  drivers/nvme/target/nvmet.mod.o
>   LD [M]  drivers/nvme/target/nvme-loop.ko
>   LD [M]  drivers/nvme/target/nvme-fcloop.ko
>   LD [M]  drivers/nvme/target/nvmet-rdma.ko
>   LD [M]  drivers/nvme/target/nvmet-fc.ko
>   LD [M]  drivers/nvme/target/nvmet-tcp.ko
>   LD [M]  drivers/nvme/target/nvmet.ko
> + HOST=drivers/nvme/host
> + TARGET=drivers/nvme/target
> ++ uname -r
> + HOST_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/
> ++ uname -r
> + TARGET_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target/
> + cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko 
> drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko 
> drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host//
> + cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko 
> drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko 
> drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
> + ls -lrth /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/ 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/:
> total 6.3M
> -rw-r--r--. 1 root root 2.7M Apr  8 16:58 nvme-core.ko
> -rw-r--r--. 1 root root 426K Apr  8 16:58 nvme-fabrics.ko
> -rw-r--r--. 1 root root 925K Apr  8 16:58 nvme-fc.ko
> -rw-r--r--. 1 root root 714K Apr  8 16:58 nvme.ko
> -rw-r--r--. 1 root root 856K Apr  8 16:58 nvme-rdma.ko
> -rw-r--r--. 1 root root 799K Apr  8 16:58 nvme-tcp.ko
> 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//:
> total 6.3M
> -rw-r--r--. 1 root root 475K Apr  8 16:58 nvme-fcloop.ko
> -rw-r--r--. 1 root root 419K Apr  8 16:58 nvme-loop.ko
> -rw-r--r--. 1 root root 734K Apr  8 16:58 nvmet-fc.ko
> -rw-r--r--. 1 root root 3.2M Apr  8 16:58 nvmet.ko
> -rw-r--r--. 1 root root 822K Apr  8 16:58 nvmet-rdma.ko
> -rw-r--r--. 1 root root 671K Apr  8 16:58 nvmet-tcp.ko
> + sync
> + sync
> + sync
> + modprobe nvme
> + echo 'Press enter to continue ...'
> Press enter to continue ...
> + read next
> 
> ++ NN=1
> ++ NQN=testnqn
> ++ let NR_DEVICES=NN+1
> ++ modprobe -r null_blk
> ++ modprobe -r nvme
> ++ modprobe null_blk nr_devices=0
> ++ modprobe nvme
> ++ modprobe nvme-fabrics
> ++ modprobe nvmet
> ++ modprobe nvme-loop
> ++ dmesg -c
> ++ sleep 2
> ++ tree /sys/kernel/config
> /sys/kernel/config
> ├── nullb
> │   └── features
> └── nvmet
>     ├── hosts
>     ├── ports
>     └── subsystems
> 
> 5 directories, 1 file
> ++ sleep 1
> ++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
> +++ shuf -i 1-1 -n 1
> ++ for i in `shuf -i  1-$NN -n $NN`
> ++ mkdir config/nullb/nullb1
> ++ echo 1
> ++ echo 4096
> ++ echo 2048
> ++ echo 1
> +++ cat config/nullb/nullb1/index
> ++ IDX=0
> ++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
> ++ echo ' ####### /dev/nullb0'
>  ####### /dev/nullb0
> ++ echo -n /dev/nullb0
> ++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
> /dev/nullb0
> ++ echo 1
> ++ dmesg -c
> [ 1740.356540] nvme nvme0: 48/0/0 default/read/poll queues
> [ 1743.345785] nvmet: adding nsid 1 to subsystem testnqn
> ++ mkdir /sys/kernel/config/nvmet/ports/1/
> ++ echo -n loop
> ++ echo -n 1
> ++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn 
> /sys/kernel/config/nvmet/ports/1/subsystems/
> ++ echo transport=loop,nqn=testnqn
> ++ sleep 1
> ++ mount
> ++ column -t
> ++ grep nvme
> ++ dmesg -c
> [ 1743.353177] nvmet: creating nvm controller 1 for subsystem testnqn 
> for NQN 
> nqn.2014-08.org.nvmexpress:uuid:510d0435-0ad7-49d4-ae4a-f1c1552b0f0c.
> [ 1743.353266] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / 
> sc 0x2) MORE DNR
> [ 1743.355380] nvme nvme1: creating 48 I/O queues.
> [ 1743.359206] nvme nvme1: new ctrl: "testnqn"
> Node SN Model Namespace Usage Format FW Rev
> nvme1n1 8bdab4b79aca987d0eba Linux 1 2.15 GB / 2.15 GB 4 KiB + 0 B 5.17.0-r
> nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
> NVMe status: Access Denied: Access to the namespace and/or LBA range is 
> denied due to lack of access rights(0x4286)
> Node SN Model Namespace Usage Format FW Rev
> nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
> [ 1744.370690] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 1000
> [ 1744.370698] CPU: 41 PID: 389 Comm: kworker/41:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1744.370702] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1744.370704] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1744.370712] Call Trace:
> [ 1744.370715]  <TASK>
> [ 1744.370717]  dump_stack_lvl+0x48/0x5e
> [ 1744.370724]  should_fail.cold+0x32/0x37
> [ 1744.370729]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1744.370741]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1744.370745]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1744.370754]  process_one_work+0x1af/0x380
> [ 1744.370758]  worker_thread+0x50/0x3a0
> [ 1744.370761]  ? rescuer_thread+0x370/0x370
> [ 1744.370763]  kthread+0xe7/0x110
> [ 1744.370767]  ? kthread_complete_and_exit+0x20/0x20
> [ 1744.370770]  ret_from_fork+0x22/0x30
> [ 1744.370776]  </TASK>
> [ 1744.370783] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
> [ 1744.375045] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 999
> [ 1744.375052] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1744.375056] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1744.375058] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1744.375066] Call Trace:
> [ 1744.375070]  <TASK>
> [ 1744.375072]  dump_stack_lvl+0x48/0x5e
> [ 1744.375079]  should_fail.cold+0x32/0x37
> [ 1744.375084]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1744.375096]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1744.375100]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1744.375108]  process_one_work+0x1af/0x380
> [ 1744.375113]  worker_thread+0x50/0x3a0
> [ 1744.375115]  ? rescuer_thread+0x370/0x370
> [ 1744.375118]  kthread+0xe7/0x110
> [ 1744.375121]  ? kthread_complete_and_exit+0x20/0x20
> [ 1744.375124]  ret_from_fork+0x22/0x30
> [ 1744.375130]  </TASK>
> [ 1744.375148] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
> + NQN=testnqn
> + nvme disconnect -n testnqn
> NQN:testnqn disconnected 1 controller(s)
> 
> real	0m0.370s
> user	0m0.001s
> sys	0m0.004s
> + rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
> + rmdir /sys/kernel/config/nvmet/ports/1
> + for subsys in /sys/kernel/config/nvmet/subsystems/*
> + for ns in ${subsys}/namespaces/*
> + echo 0
> + rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
> + rmdir /sys/kernel/config/nvmet/subsystems/testnqn
> + rmdir config/nullb/nullb1
> + umount /mnt/nvme0n1
> umount: /mnt/nvme0n1: no mount point specified.
> + umount /mnt/backend
> umount: /mnt/backend: not mounted.
> + modprobe -r nvme_loop
> + modprobe -r nvme_fabrics
> + modprobe -r nvmet
> + modprobe -r nvme
> + modprobe -r null_blk
> + tree /sys/kernel/config
> /sys/kernel/config
> 
> 0 directories, 0 files
> From 2d552b2c756fce48f53d66c5c58ffb6b3e3cac6e Mon Sep 17 00:00:00 2001
> From: Chaitanya Kulkarni <kch at nvidia.com>
> Date: Fri, 8 Apr 2022 16:45:18 -0700
> Subject: [PATCH] nvme-core: mark internal passthru req RQF_QUIET
> 
> Signed-off-by: Chaitanya Kulkarni <kch at nvidia.com>
> ---
>  drivers/nvme/host/core.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index f204c6f78b5b..a1ea2f736d42 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>  {
>  	blk_status_t status = nvme_error_status(nvme_req(req)->status);
> 
> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> +	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>  		nvme_log_error(req);
>  	nvme_end_req_zoned(req);
>  	nvme_trace_bio_complete(req);
> @@ -1086,9 +1086,11 @@ int __nvme_submit_sync_cmd(struct request_queue 
> *q, struct nvme_command *cmd,
>  	else
>  		req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
>  						qid ? qid - 1 : 0);
> -
>  	if (IS_ERR(req))
>  		return PTR_ERR(req);
> +
> +	req->rq_flags |= RQF_QUIET;
> +
>  	nvme_init_request(req, cmd);
> 
>  	if (timeout)
> -- 
> 2.29.0
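> 
> (To restate what the two hunks above do: __nvme_submit_sync_cmd() now
> marks its driver-internal requests with RQF_QUIET right after
> allocation, and nvme_end_req() only logs failed requests that are not
> marked quiet:
> 
> 	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
> 		nvme_log_error(req);
> 
> so internal passthru failures are silenced, while failures of user
> passthru commands sent with nvme-admin-passthru, which do not go
> through __nvme_submit_sync_cmd(), are still logged -- which is what
> the dmesg output in this run shows.)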
> 
> Applying: nvme-core: mark internal passthru req RQF_QUIET
> 
> commit 3c7952e0f8fcc9affdfa0249ab8211c18a513338 (HEAD -> nvme-5.18)
> Author: Chaitanya Kulkarni <kch at nvidia.com>
> Date:   Fri Apr 8 16:45:18 2022 -0700
> 
>     nvme-core: mark internal passthru req RQF_QUIET
> 
>     Signed-off-by: Chaitanya Kulkarni <kch at nvidia.com>
> + umount /mnt/nvme0n1
> + clear_dmesg
> ./compile_nvme.sh: line 3: clear_dmesg: command not found
> umount: /mnt/nvme0n1: no mount point specified.
> + ./delete.sh
> + NQN=testnqn
> + nvme disconnect -n testnqn
> Failed to scan topoplogy: No such file or directory
> 
> real	0m0.002s
> user	0m0.002s
> sys	0m0.000s
> + rm -fr '/sys/kernel/config/nvmet/ports/1/subsystems/*'
> + rmdir /sys/kernel/config/nvmet/ports/1
> rmdir: failed to remove '/sys/kernel/config/nvmet/ports/1': No such file 
> or directory
> + for subsys in /sys/kernel/config/nvmet/subsystems/*
> + for ns in ${subsys}/namespaces/*
> + echo 0
> ./delete.sh: line 14: 
> /sys/kernel/config/nvmet/subsystems/*/namespaces/*/enable: No such file 
> or directory
> + rmdir '/sys/kernel/config/nvmet/subsystems/*/namespaces/*'
> rmdir: failed to remove 
> '/sys/kernel/config/nvmet/subsystems/*/namespaces/*': No such file or 
> directory
> + rmdir '/sys/kernel/config/nvmet/subsystems/*'
> rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*': No such 
> file or directory
> + rmdir 'config/nullb/nullb*'
> rmdir: failed to remove 'config/nullb/nullb*': No such file or directory
> + umount /mnt/nvme0n1
> umount: /mnt/nvme0n1: no mount point specified.
> + umount /mnt/backend
> umount: /mnt/backend: not mounted.
> + modprobe -r nvme_loop
> + modprobe -r nvme_fabrics
> + modprobe -r nvmet
> + modprobe -r nvme
> + modprobe -r null_blk
> + tree /sys/kernel/config
> /sys/kernel/config
> 
> 0 directories, 0 files
> + modprobe -r nvme-fabrics
> + modprobe -r nvme_loop
> + modprobe -r nvmet
> + modprobe -r nvme
> + sleep 1
> + modprobe -r nvme-core
> + lsmod
> + grep nvme
> + sleep 1
> + git diff
> + sleep 1
> ++ nproc
> + make -j 48 M=drivers/nvme/target/ clean
> ++ nproc
> + make -j 48 M=drivers/nvme/ modules
>   CC [M]  drivers/nvme/target/core.o
>   CC [M]  drivers/nvme/target/configfs.o
>   CC [M]  drivers/nvme/target/admin-cmd.o
>   CC [M]  drivers/nvme/target/fabrics-cmd.o
>   CC [M]  drivers/nvme/target/discovery.o
>   CC [M]  drivers/nvme/target/io-cmd-file.o
>   CC [M]  drivers/nvme/target/io-cmd-bdev.o
>   CC [M]  drivers/nvme/target/passthru.o
>   CC [M]  drivers/nvme/target/zns.o
>   CC [M]  drivers/nvme/target/trace.o
>   CC [M]  drivers/nvme/target/loop.o
>   CC [M]  drivers/nvme/target/rdma.o
>   CC [M]  drivers/nvme/target/fc.o
>   CC [M]  drivers/nvme/target/fcloop.o
>   CC [M]  drivers/nvme/target/tcp.o
>   CC [M]  drivers/nvme/host/core.o
>   LD [M]  drivers/nvme/target/nvme-loop.o
>   LD [M]  drivers/nvme/target/nvme-fcloop.o
>   LD [M]  drivers/nvme/target/nvmet.o
>   LD [M]  drivers/nvme/target/nvmet-fc.o
>   LD [M]  drivers/nvme/target/nvmet-tcp.o
>   LD [M]  drivers/nvme/target/nvmet-rdma.o
>   LD [M]  drivers/nvme/host/nvme-core.o
>   MODPOST drivers/nvme/Module.symvers
>   CC [M]  drivers/nvme/host/nvme-core.mod.o
>   CC [M]  drivers/nvme/target/nvme-fcloop.mod.o
>   CC [M]  drivers/nvme/target/nvme-loop.mod.o
>   CC [M]  drivers/nvme/target/nvmet-fc.mod.o
>   CC [M]  drivers/nvme/target/nvmet-rdma.mod.o
>   CC [M]  drivers/nvme/target/nvmet-tcp.mod.o
>   CC [M]  drivers/nvme/target/nvmet.mod.o
>   LD [M]  drivers/nvme/host/nvme-core.ko
>   LD [M]  drivers/nvme/target/nvmet-fc.ko
>   LD [M]  drivers/nvme/target/nvmet-rdma.ko
>   LD [M]  drivers/nvme/target/nvme-fcloop.ko
>   LD [M]  drivers/nvme/target/nvmet.ko
>   LD [M]  drivers/nvme/target/nvme-loop.ko
>   LD [M]  drivers/nvme/target/nvmet-tcp.ko
> + HOST=drivers/nvme/host
> + TARGET=drivers/nvme/target
> ++ uname -r
> + HOST_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/
> ++ uname -r
> + TARGET_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target/
> + cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko 
> drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko 
> drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host//
> + cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko 
> drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko 
> drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
> + ls -lrth /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/ 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/:
> total 6.3M
> -rw-r--r--. 1 root root 2.7M Apr  8 16:59 nvme-core.ko
> -rw-r--r--. 1 root root 426K Apr  8 16:59 nvme-fabrics.ko
> -rw-r--r--. 1 root root 925K Apr  8 16:59 nvme-fc.ko
> -rw-r--r--. 1 root root 714K Apr  8 16:59 nvme.ko
> -rw-r--r--. 1 root root 856K Apr  8 16:59 nvme-rdma.ko
> -rw-r--r--. 1 root root 799K Apr  8 16:59 nvme-tcp.ko
> 
> /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//:
> total 6.3M
> -rw-r--r--. 1 root root 475K Apr  8 16:59 nvme-fcloop.ko
> -rw-r--r--. 1 root root 419K Apr  8 16:59 nvme-loop.ko
> -rw-r--r--. 1 root root 734K Apr  8 16:59 nvmet-fc.ko
> -rw-r--r--. 1 root root 3.2M Apr  8 16:59 nvmet.ko
> -rw-r--r--. 1 root root 822K Apr  8 16:59 nvmet-rdma.ko
> -rw-r--r--. 1 root root 671K Apr  8 16:59 nvmet-tcp.ko
> + sync
> + sync
> + sync
> + modprobe nvme
> + echo 'Press enter to continue ...'
> Press enter to continue ...
> + read next
> 
> ++ NN=1
> ++ NQN=testnqn
> ++ let NR_DEVICES=NN+1
> ++ modprobe -r null_blk
> ++ modprobe -r nvme
> ++ modprobe null_blk nr_devices=0
> ++ modprobe nvme
> ++ modprobe nvme-fabrics
> ++ modprobe nvmet
> ++ modprobe nvme-loop
> ++ dmesg -c
> ++ sleep 2
> ++ tree /sys/kernel/config
> /sys/kernel/config
> ├── nullb
> │   └── features
> └── nvmet
>     ├── hosts
>     ├── ports
>     └── subsystems
> 
> 5 directories, 1 file
> ++ sleep 1
> ++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
> +++ shuf -i 1-1 -n 1
> ++ for i in `shuf -i  1-$NN -n $NN`
> ++ mkdir config/nullb/nullb1
> ++ echo 1
> ++ echo 4096
> ++ echo 2048
> ++ echo 1
> +++ cat config/nullb/nullb1/index
> ++ IDX=0
> ++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
> ++ echo ' ####### /dev/nullb0'
>  ####### /dev/nullb0
> ++ echo -n /dev/nullb0
> ++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
> /dev/nullb0
> ++ echo 1
> ++ dmesg -c
> [ 1761.160765] nvme nvme0: 48/0/0 default/read/poll queues
> [ 1764.151446] nvmet: adding nsid 1 to subsystem testnqn
> ++ mkdir /sys/kernel/config/nvmet/ports/1/
> ++ echo -n loop
> ++ echo -n 1
> ++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn 
> /sys/kernel/config/nvmet/ports/1/subsystems/
> ++ echo transport=loop,nqn=testnqn
> ++ sleep 1
> ++ mount
> ++ column -t
> ++ grep nvme
> ++ dmesg -c
> [ 1764.158721] nvmet: creating nvm controller 1 for subsystem testnqn 
> for NQN 
> nqn.2014-08.org.nvmexpress:uuid:33bca6cd-e82c-4de0-bba2-f70070f69097.
> [ 1764.158827] nvme nvme1: creating 48 I/O queues.
> [ 1764.162745] nvme nvme1: new ctrl: "testnqn"
> Node SN Model Namespace Usage Format FW Rev
> nvme1n1 bc09a2ee2829a09471a3 Linux 1 2.15 GB / 2.15 GB 4 KiB + 0 B 5.17.0-r
> nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
> NVMe status: Access Denied: Access to the namespace and/or LBA range is 
> denied due to lack of access rights(0x4286)
> Node SN Model Namespace Usage Format FW Rev
> nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
> [ 1765.175570] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 1000
> [ 1765.175579] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1765.175583] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1765.175585] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1765.175593] Call Trace:
> [ 1765.175596]  <TASK>
> [ 1765.175599]  dump_stack_lvl+0x48/0x5e
> [ 1765.175605]  should_fail.cold+0x32/0x37
> [ 1765.175610]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1765.175623]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1765.175627]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1765.175636]  process_one_work+0x1af/0x380
> [ 1765.175640]  worker_thread+0x50/0x3a0
> [ 1765.175643]  ? rescuer_thread+0x370/0x370
> [ 1765.175645]  kthread+0xe7/0x110
> [ 1765.175648]  ? kthread_complete_and_exit+0x20/0x20
> [ 1765.175652]  ret_from_fork+0x22/0x30
> [ 1765.175658]  </TASK>
> [ 1765.175664] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
> [ 1765.179829] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 999
> [ 1765.179835] CPU: 44 PID: 9897 Comm: kworker/44:0 Tainted: G 
>  OE     5.17.0-rc2nvme+ #68
> [ 1765.179839] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1765.179841] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1765.179850] Call Trace:
> [ 1765.179853]  <TASK>
> [ 1765.179855]  dump_stack_lvl+0x48/0x5e
> [ 1765.179862]  should_fail.cold+0x32/0x37
> [ 1765.179867]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1765.179879]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1765.179884]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1765.179892]  process_one_work+0x1af/0x380
> [ 1765.179897]  ? rescuer_thread+0x370/0x370
> [ 1765.179899]  worker_thread+0x50/0x3a0
> [ 1765.179902]  ? rescuer_thread+0x370/0x370
> [ 1765.179903]  kthread+0xe7/0x110
> [ 1765.179907]  ? kthread_complete_and_exit+0x20/0x20
> [ 1765.179911]  ret_from_fork+0x22/0x30
> [ 1765.179917]  </TASK>
> [ 1765.179923] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
> + NQN=testnqn
> + nvme disconnect -n testnqn
> NQN:testnqn disconnected 1 controller(s)
> 
> real	0m0.347s
> user	0m0.002s
> sys	0m0.004s
> + rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
> + rmdir /sys/kernel/config/nvmet/ports/1
> + for subsys in /sys/kernel/config/nvmet/subsystems/*
> + for ns in ${subsys}/namespaces/*
> + echo 0
> + rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
> + rmdir /sys/kernel/config/nvmet/subsystems/testnqn
> + rmdir config/nullb/nullb1
> + umount /mnt/nvme0n1
> umount: /mnt/nvme0n1: no mount point specified.
> + umount /mnt/backend
> umount: /mnt/backend: not mounted.
> + modprobe -r nvme_loop
> + modprobe -r nvme_fabrics
> + modprobe -r nvmet
> + modprobe -r nvme
> + modprobe -r null_blk
> + tree /sys/kernel/config
> /sys/kernel/config
> 
> 0 directories, 0 files
> HEAD is now at 7bec02cef3d1 Merge branch 'nvme-5.18' of 
> git://git.infradead.org/nvme into nvme-5.18
> 


