[PATCHv2] nvme-mpath: delete disk after last connection

Hannes Reinecke hare at suse.de
Tue Apr 20 18:02:32 BST 2021


On 4/20/21 4:39 PM, Christoph Hellwig wrote:
> On Tue, Apr 20, 2021 at 11:14:36PM +0900, Keith Busch wrote:
>> On Tue, Apr 20, 2021 at 03:19:10PM +0200, Hannes Reinecke wrote:
>>> On 4/20/21 10:05 AM, Christoph Hellwig wrote:
>>>> On Fri, Apr 16, 2021 at 08:24:11AM +0200, Hannes Reinecke wrote:
>>>>> With the proposed patch, the following messages appear:
>>>>>
>>>>>    [  227.516807] md/raid1:md0: Disk failure on nvme3n1, disabling device.
>>>>>    [  227.516807] md/raid1:md0: Operation continuing on 1 devices.
>>>>
>>>> So how is this going to work for e.g. a case where the device
>>>> disappears due to resets or fabrics connection problems?  This now
>>>> directly tears down the device.
>>>>
>>> Yes, that is correct; the nshead will be removed once the last path is
>>> _removed_.
>>
>> The end result is also how non-multipath nvme behaves, so I think that's
>> what users have come to expect.
> 
> I'm not sure that is what users expect.  At least the SCSI multipath
> setups I've worked with do not expect it and ensure the queue_if_no_path
> option is set.
> 
Yes, sure. And as I said, I'm happy to implement this option for NVMe, too.
But that is _not_ what this patch is about.
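
(For reference, queue_if_no_path is the dm-multipath feature that parks
I/O instead of failing it while no path is available. A minimal, purely
illustrative /etc/multipath.conf excerpt:

    defaults {
            # queue I/O rather than erroring it out when all paths are down
            features "1 queue_if_no_path"
            # newer configurations express the same via: no_path_retry queue
    }

An equivalent knob for native NVMe multipath would still have to be
defined.)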

NVMe has, since day one, _removed_ the namespace when the controller goes 
away (ie if you do a PCI hotplug). So customers rightly expect this 
behaviour to continue.

And this is what the patch does: _aligning_ the behaviour of multipathed 
controllers with that of non-multipathed controllers when the last path 
is gone.

Non-multipathed (ie CMIC==0) controllers will remove the namespace once 
the last _reference_ to that namespace drops (ie the PCI hotplug case).
Multipathed (ie CMIC!=0) controllers will remove the namespace once the 
last _opener_ goes away.
The refcount is long gone by that time.
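
As a sketch of the intent (illustrative only, not the literal patch,
though the names follow the upstream driver):

    /*
     * Sketch: once the last path (struct nvme_ns) has been removed
     * from a multipathed ns_head, tear down the ns_head gendisk right
     * away instead of waiting for the last opener to go away.
     */
    static void nvme_mpath_check_last_path(struct nvme_ns_head *head)
    {
            if (head->disk && list_empty(&head->list)) {
                    /* kick the requeue work so parked bios can error out */
                    kblockd_schedule_work(&head->requeue_work);
                    del_gendisk(head->disk);
            }
    }

That mirrors what CMIC==0 controllers already do when the namespace is
removed.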

>>> But key point here is that once the system finds itself in that
>>> situation it's impossible to recover, as the refcounts are messed up.
>>> Even a manual connect call with the same parameter will _not_ restore
>>> operation, but rather result in a new namespace.
>>
>> I haven't looked at this yet, but is it really not possible to restore
>> the original namespace upon the reestablished connection?
> 
> It is possible, and in fact is what we do.
> 
It is _not_, once the namespace is mounted or MD has claimed the device.
And the problem is that the refcount already _is_ zero, so we are 
already in teardown. We're just waiting for the reference to the gendisk 
to drop.
Which it never will, as we would have to unmount (or detach) the device 
for that, but I/O is still pending which cannot be flushed, so that will 
fail.
And if we try to connect the same namespace again, nvme_find_ns_head() 
will not return the existing ns_head, as its refcount is zero, causing 
a new ns_head to be created.
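
For context, the lookup (roughly as it reads in drivers/nvme/host/core.c)
is shown below; kref_get_unless_zero() is what makes an ns_head in
teardown invisible to a reconnect:

    static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
                    unsigned ns_id)
    {
            struct nvme_ns_head *h;

            lockdep_assert_held(&subsys->lock);

            list_for_each_entry(h, &subsys->nsheads, entry) {
                    /*
                     * Fails once the refcount has dropped to zero: the
                     * dying head is skipped and the caller allocates a
                     * fresh ns_head for the reconnected namespace.
                     */
                    if (h->ns_id == ns_id && kref_get_unless_zero(&h->ref))
                            return h;
            }

            return NULL;
    }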

If you manage to get this working with the current code, please show me, 
using the test case from the description, what we should have done 
differently.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare at suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


