Should NVME_SC_INVALID_NS be translated to BLK_STS_IOERR instead of BLK_STS_NOTSUPP so that multipath (both native and dm) can fail over on the failure?

Sagi Grimberg sagi at grimberg.me
Thu Jan 4 03:56:12 PST 2024



On 1/3/24 12:24, Jirong Feng wrote:
>> OK, can you please check nvme native mpath as well?
> 
> switch to nvme native mpath:
> 
> [root at fjr-vm1 ~]# nvme list-subsys
> nvme-subsys0 - 
> NQN=nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06
> \
>   +- nvme0 tcp traddr=192.168.111.99 trsvcid=4420 live
>   +- nvme1 tcp traddr=192.168.111.111 trsvcid=4420 live
> [root at fjr-vm1 ~]# multipath -ll
> uuid.cf4bb93c-949f-4532-a5c1-b8bd267a4e06 [nvme]:nvme0n1 
> NVMe,Linux,6.6.0-my
> size=209715200 features='n/a' hwhandler='ANA' wp=rw
> |-+- policy='n/a' prio=50 status=optimized
> | `- 0:0:1 nvme0c0n1 0:0 n/a optimized live
> `-+- policy='n/a' prio=50 status=optimized
>    `- 0:1:1 nvme0c1n1 0:0 n/a optimized live
> 
> fio still keeps running without any error, at least this time (see below).
> 
> host dmesg:
> 
> [Wed Jan  3 07:42:55 2024] nvme nvme0: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:42:55 2024] nvme nvme1: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:43:00 2024] nvme nvme0: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:43:00 2024] nvme nvme1: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 0
> [Wed Jan  3 07:43:05 2024] nvme nvme0: ANA group 1: optimized.
> [Wed Jan  3 07:43:05 2024] nvme nvme0: creating 4 I/O queues.
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 1
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 2
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 3
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 4
> [Wed Jan  3 07:43:05 2024] nvme nvme0: rescanning namespaces.
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 0
> [Wed Jan  3 07:43:05 2024] nvme nvme0: ANA group 1: optimized.
> [Wed Jan  3 07:43:05 2024] nvme nvme0: creating 4 I/O queues.
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 1
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 2
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 3
> [Wed Jan  3 07:43:05 2024] nvme nvme0: connecting queue 4
> [Wed Jan  3 07:43:05 2024] nvme nvme1: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:43:10 2024] nvme nvme0: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:43:10 2024] nvme nvme1: reschedule traffic based 
> keep-alive timer
> 
> target dmesg:
> 
> [Wed Jan  3 07:41:23 2024] nvmet: ctrl 1 update keep-alive timer for 15 
> secs
> [Wed Jan  3 07:41:33 2024] nvmet: ctrl 1 update keep-alive timer for 15 
> secs
> [Wed Jan  3 07:41:43 2024] nvmet: ctrl 1 update keep-alive timer for 15 
> secs
> [Wed Jan  3 07:41:58 2024] nvmet: ctrl 1 reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:42:14 2024] nvmet: ctrl 1 reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:42:29 2024] nvmet: ctrl 1 reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:42:44 2024] nvmet: ctrl 1 reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:43:00 2024] nvmet: ctrl 1 reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 07:43:04 2024] nvmet: fjr add: returning 
> NVME_ANA_PERSISTENT_LOSS
> [Wed Jan  3 07:43:04 2024] nvmet_tcp: failed cmd 0000000034dfe760 id 14 
> opcode 1, data_len: 4096
> [Wed Jan  3 07:43:04 2024] nvmet: got cmd 12 while CC.EN == 0 on qid = 0
> [Wed Jan  3 07:43:04 2024] nvmet_tcp: failed cmd 00000000228b330a id 31 
> opcode 12, data_len: 0
> [Wed Jan  3 07:43:04 2024] nvmet: ctrl 2 start keep-alive timer for 15 secs
> [Wed Jan  3 07:43:04 2024] nvmet: ctrl 1 stop keep-alive
> [Wed Jan  3 07:43:04 2024] nvmet: creating nvm controller 2 for 
> subsystem 
> nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06 for NQN nqn.2014-08.org.nvmexpress:uuid:1d8f7c82-9deb-4bc8-8292-5ff32ee3a2be.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 1 to ctrl 2.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 2 to ctrl 2.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 3 to ctrl 2.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 4 to ctrl 2.
> [Wed Jan  3 07:43:04 2024] nvmet: fjr add: returning 
> NVME_ANA_PERSISTENT_LOSS
> [Wed Jan  3 07:43:04 2024] nvmet_tcp: failed cmd 00000000d9d3dba9 id 100 
> opcode 1, data_len: 4096
> [Wed Jan  3 07:43:04 2024] nvmet: ctrl 1 start keep-alive timer for 15 secs
> [Wed Jan  3 07:43:04 2024] nvmet: ctrl 2 stop keep-alive
> [Wed Jan  3 07:43:04 2024] nvmet: creating nvm controller 1 for 
> subsystem 
> nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06 for NQN nqn.2014-08.org.nvmexpress:uuid:1d8f7c82-9deb-4bc8-8292-5ff32ee3a2be.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 1 to ctrl 1.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 2 to ctrl 1.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 3 to ctrl 1.
> [Wed Jan  3 07:43:04 2024] nvmet: adding queue 4 to ctrl 1.
> [Wed Jan  3 07:43:14 2024] nvmet: ctrl 1 update keep-alive timer for 15 
> secs
> 
>>
>> Can you try returning NVME_SC_CTRL_PATH_ERROR instead of
>> NVME_SC_ANA_PERSISTENT_LOSS ?
> 
> I enabled/disabled again and again, and found that fio keeps running most 
> of the time, but occasionally (about 10% of the time or less) it fails 
> and stops with an error.
> 
> fio: io_u error on file /dev/nvme0n1: Input/output error: write 
> offset=100662296576, buflen=4096
> fio: pid=1485, err=5/file:io_u.c:1747, func=io_u error, 
> error=Input/output error
> 
> fio_iops: (groupid=0, jobs=1): err= 5 (file:io_u.c:1747, func=io_u 
> error, error=Input/output error): pid=1485: Wed Jan  3 08:44:09 2024
> 
> host dmesg:
> 
> [Wed Jan  3 08:44:06 2024] nvme nvme1: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 08:44:07 2024] nvme nvme0: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 08:44:09 2024] nvme nvme0: connecting queue 0
> [Wed Jan  3 08:44:09 2024] nvme nvme0: ANA group 1: optimized.
> [Wed Jan  3 08:44:09 2024] nvme nvme0: creating 4 I/O queues.
> [Wed Jan  3 08:44:09 2024] nvme nvme0: connecting queue 1
> [Wed Jan  3 08:44:09 2024] nvme nvme0: connecting queue 2
> [Wed Jan  3 08:44:09 2024] nvme nvme0: connecting queue 3
> [Wed Jan  3 08:44:09 2024] nvme nvme0: connecting queue 4
> [Wed Jan  3 08:44:09 2024] nvme nvme0: rescanning namespaces.
> [Wed Jan  3 08:44:09 2024] Buffer I/O error on dev nvme0n1, logical 
> block 0, async page read
> [Wed Jan  3 08:44:09 2024]  nvme0n1: unable to read partition table
> [Wed Jan  3 08:44:09 2024] Buffer I/O error on dev nvme0n1, logical 
> block 6, async page read
> [Wed Jan  3 08:44:11 2024] nvme nvme1: reschedule traffic based 
> keep-alive timer
> [Wed Jan  3 08:44:14 2024] nvme nvme0: reschedule traffic based 
> keep-alive timer
> 
> target dmesg:
> 
> [Wed Jan  3 08:44:08 2024] nvmet: fjr add: returning 
> NVME_SC_CTRL_PATH_ERROR
> [Wed Jan  3 08:44:08 2024] nvmet_tcp: failed cmd 00000000c11e0ae7 id 53 
> opcode 1, data_len: 4096
> [Wed Jan  3 08:44:08 2024] nvmet: fjr add: returning 
> NVME_SC_CTRL_PATH_ERROR
> [Wed Jan  3 08:44:08 2024] nvmet_tcp: failed cmd 00000000e0d12c37 id 54 
> opcode 1, data_len: 4096
> [Wed Jan  3 08:44:08 2024] nvmet: ctrl 2 start keep-alive timer for 15 secs
> [Wed Jan  3 08:44:08 2024] nvmet: ctrl 1 stop keep-alive
> [Wed Jan  3 08:44:08 2024] nvmet: creating nvm controller 2 for 
> subsystem 
> nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06 for NQN nqn.2014-08.org.nvmexpress:uuid:1d8f7c82-9deb-4bc8-8292-5ff32ee3a2be.
> [Wed Jan  3 08:44:08 2024] nvmet: adding queue 1 to ctrl 2.
> [Wed Jan  3 08:44:08 2024] nvmet: adding queue 2 to ctrl 2.
> [Wed Jan  3 08:44:08 2024] nvmet: adding queue 3 to ctrl 2.
> [Wed Jan  3 08:44:08 2024] nvmet: adding queue 4 to ctrl 2.
> [Wed Jan  3 08:44:18 2024] nvmet: ctrl 2 update keep-alive timer for 15 
> secs
> [Wed Jan  3 08:44:28 2024] nvmet: ctrl 2 update keep-alive timer for 15 
> secs
> 
> Then I went back to returning NVME_ANA_PERSISTENT_LOSS; fio occasionally 
> fails too, and the log output is pretty much the same.
> 
> Then I went back to dm multipath; across about 50 enable/disable cycles, 
> fio never failed.
> 

Hmm, it's interesting that you fail only on particular I/Os and not on
every I/O. I suspect that there is a timing issue here.

Looking at the code, I suspect that I/Os continue to be sent to the
namespace on the failing path even though they shouldn't be. The reason
is that if we return an ANA error, the host re-reads the ANA log page
and still finds the namespace eligible for I/O (disabling/enabling the
namespace does not change the ANA log). If we instead return a path
error that is not an ANA error, the host does not re-read the ANA log
page at all, and the namespace is re-selected on the next I/O (or at
least nothing prevents that).
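For reference, the host-side classification that drives the first case
looks roughly like this in recent kernels (include/linux/nvme.h): only
three statuses count as ANA errors, so any other path error skips the
ANA work entirely:
--
/*
 * Only these three statuses are treated as ANA errors and trigger a
 * re-read of the ANA log page; every other path error falls through.
 */
static inline bool nvme_is_ana_error(u16 status)
{
	switch (status & 0x7ff) {
	case NVME_SC_ANA_TRANSITION:
	case NVME_SC_ANA_INACCESSIBLE:
	case NVME_SC_ANA_PERSISTENT_LOSS:
		return true;
	default:
		return false;
	}
}
--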

First of all, I think that the most suitable status for nvmet to return
in this case is: NVME_SC_INTERNAL_PATH_ERROR

From the spec:
Internal Path Error: The command was not completed as the result of a
controller internal error that is specific to the controller processing
the command. Retries for the request function should be based on the
setting of the DNR bit (refer to Figure 92).
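In kernel terms, "based on the setting of the DNR bit" boils down to
checking bit 14 of the status word (NVME_SC_DNR). A minimal sketch of
that rule (the helper name here is made up for illustration):
--
/*
 * Sketch of the retry rule the spec describes: if the controller set
 * DNR (Do Not Retry, bit 14 of the status word), the host must fail
 * the command instead of retrying it on this or another path.
 */
static inline bool nvme_status_may_retry(u16 status)
{
	return !(status & NVME_SC_DNR);
}
--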

In the host code, I don't see any reference to such an error status
being returned by the controller. So I think we may want to pair it
with something like this untested hunk:
--
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 0a88d7bdc5e3..0fb82056ba5f 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -97,6 +97,14 @@ void nvme_failover_req(struct request *req)
         if (nvme_is_ana_error(status) && ns->ctrl->ana_log_buf) {
                 set_bit(NVME_NS_ANA_PENDING, &ns->flags);
                 queue_work(nvme_wq, &ns->ctrl->ana_work);
+       } else if ((status & 0x7ff) == NVME_SC_INTERNAL_PATH_ERROR) {
+               /*
+                * The ctrl is telling us it is unable to reach the
+                * ns in a way that does not impact the entire ana
+                * group. The only way we can stop sending io to this
+                * specific namespace is by clearing its ready bit.
+                */
+               clear_bit(NVME_NS_READY, &ns->flags);
         }

         spin_lock_irqsave(&ns->head->requeue_lock, flags);
--
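Clearing NVME_NS_READY should be sufficient because the path selector
already consults that bit; for context, nvme_path_is_disabled() in the
same file looks roughly like this in recent kernels:
--
/*
 * A path whose namespace has NVME_NS_READY cleared is skipped by
 * __nvme_find_path(), so no further I/O is routed to that namespace
 * until a later rescan sets the bit again.
 */
static bool nvme_path_is_disabled(struct nvme_ns *ns)
{
	enum nvme_ctrl_state state = nvme_ctrl_state(ns->ctrl);

	if (state != NVME_CTRL_LIVE && state != NVME_CTRL_DELETING &&
	    state != NVME_CTRL_DELETING_NOIO)
		return true;
	if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
	    !test_bit(NVME_NS_READY, &ns->flags))
		return true;
	return false;
}
--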

Keith, Christoph, do you agree that when the host sees an error status
like NVME_SC_INTERNAL_PATH_ERROR, it should stop sending I/O to the
namespace but not change anything related to ANA?


