[PATCH 2/2] nvme: add 'queue_if_no_path' semantics

Hannes Reinecke hare at suse.de
Tue Oct 6 09:30:14 EDT 2020


On 10/6/20 10:39 AM, Christoph Hellwig wrote:
> On Tue, Oct 06, 2020 at 10:29:49AM +0200, Hannes Reinecke wrote:
>>> All multipath devices should behave the same.  No special casing for
>>> PCIe, please.
>>>
>> Even if the default behaviour breaks PCI hotplug?
> 
> Why would it "break" PCI hotplug?
> 
When running under MD RAID (a reproduction sketch follows the first
listing):
Before hotplug:
# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     SLESNVME1            QEMU NVMe Ctrl                           1          17.18  GB /  17.18  GB    512   B +  0 B   1.0
/dev/nvme1n1     SLESNVME2            QEMU NVMe Ctrl                           1           4.29  GB /   4.29  GB    512   B +  0 B   1.0
/dev/nvme2n1     SLESNVME3            QEMU NVMe Ctrl                           1           4.29  GB /   4.29  GB    512   B +  0 B   1.0
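
For anyone wanting to reproduce: the array is a plain RAID10 with two
near-copies over the two 4G namespaces. Something along these lines
(a sketch, not the literal command) should give the layout shown in
/proc/mdstat below:

# mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=2 \
        /dev/nvme1n1 /dev/nvme2n1
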
After hotplug:

# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     SLESNVME1            QEMU NVMe Ctrl                           1          17.18  GB /  17.18  GB    512   B +  0 B   1.0
/dev/nvme1n1     SLESNVME2            QEMU NVMe Ctrl                           -1          0.00   B /   0.00   B      1   B +  0 B   1.0
/dev/nvme1n2     SLESNVME2            QEMU NVMe Ctrl                           1           4.29  GB /   4.29  GB    512   B +  0 B   1.0
/dev/nvme2n1     SLESNVME3            QEMU NVMe Ctrl                           1           4.29  GB /   4.29  GB    512   B +  0 B   1.0

And MD hasn't been notified that the device is gone:
# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 nvme2n1[1] nvme1n1[0]
       4189184 blocks super 1.2 2 near-copies [2/2] [UU]
       bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
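
This can be cross-checked with

# mdadm --detail /dev/md0

which at this point should still list both nvme1n1 and nvme2n1 as
active, even though one of them is physically gone.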

Only once I issue some I/O to the array does MD recognize the faulty device:

# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 nvme2n1[1] nvme1n1[0](F)
       4189184 blocks super 1.2 2 near-copies [2/1] [_U]
       bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
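
Any I/O that touches the missing leg is enough; e.g. a direct read
across the array (a sketch, not the exact command used):

# dd if=/dev/md0 of=/dev/null bs=1M iflag=direct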

but the re-plugged device isn't added back to the MD RAID.
In fact, it has been assigned a _different_ namespace ID:

[  904.299065] pcieport 0000:00:08.0: pciehp: Slot(0-1): Card present
[  904.299067] pcieport 0000:00:08.0: pciehp: Slot(0-1): Link Up
[  904.435314] pci 0000:02:00.0: [8086:5845] type 00 class 0x010802
[  904.435523] pci 0000:02:00.0: reg 0x10: [mem 0x00000000-0x00001fff 64bit]
[  904.435676] pci 0000:02:00.0: reg 0x20: [mem 0x00000000-0x00000fff]
[  904.436982] pci 0000:02:00.0: BAR 0: assigned [mem 0xc1200000-0xc1201fff 64bit]
[  904.437086] pci 0000:02:00.0: BAR 4: assigned [mem 0xc1202000-0xc1202fff]
[  904.437118] pcieport 0000:00:08.0: PCI bridge to [bus 02]
[  904.437137] pcieport 0000:00:08.0:   bridge window [io  0x7000-0x7fff]
[  904.439024] pcieport 0000:00:08.0:   bridge window [mem 0xc1200000-0xc13fffff]
[  904.440229] pcieport 0000:00:08.0:   bridge window [mem 0x802000000-0x803ffffff 64bit pref]
[  904.447150] nvme nvme3: pci function 0000:02:00.0
[  904.447487] nvme 0000:02:00.0: enabling device (0000 -> 0002)
[  904.458880] nvme nvme3: 1/0/0 default/read/poll queues
[  904.461296] nvme1n2: detected capacity change from 0 to 4294967296
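
The new instance can be cross-checked via sysfs or nvme-cli, e.g.

# cat /sys/block/nvme1n2/nsid
# nvme list-ns /dev/nvme1

(device names as per the listings above): the hotplugged device came
back as a new namespace instance, nvme1n2.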

Meanwhile the 'old', pre-hotplug device still lingers on in the
'nvme list' output.
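
Getting the array back into shape then requires manual intervention,
something like

# mdadm /dev/md0 --remove /dev/nvme1n1
# mdadm /dev/md0 --add /dev/nvme1n2

(a sketch, untested) instead of the device being picked up
automatically.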

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare at suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


