[PATCHv3] nvme-mpath: delete disk after last connection

Keith Busch kbusch at kernel.org
Thu May 6 03:50:27 BST 2021


On Wed, May 05, 2021 at 01:40:29PM -0700, Sagi Grimberg wrote:
> 
> > > > > > As stated in the v3 review this is an incompatible change.  We'll need
> > > > > > the queue_if_no_path attribute first, and default it to on to keep
> > > > > > compatibility.
> > > > > > 
> > > > > 
> > > > > That is what I tried the last time, but the direction I got was to treat
> > > > > both NVMe-PCI and NVMe-oF identically:
> > > > > (https://lore.kernel.org/linux-nvme/34e5c178-8bc4-68d3-8374-fbc1b451b6e8@grimberg.me/)
> > > > 
> > > > Yes, I'm not sure I understand your comment, Christoph. This addresses an
> > > > issue with mdraid where hot unplug+replug does not restore the device to
> > > > the raid group (pci and fabrics alike), whereas before multipath this used
> > > > to work.
> > > > 
> > > > queue_if_no_path is a dm-multipath feature, so I'm not entirely clear
> > > > what the concern is. mdraid on nvme (pci/fabrics) used to work a certain
> > > > way; with the introduction of nvme-mpath that behavior was broken (as far
> > > > as I understand from Hannes).
> > > > 
> > > > My thinking is that if we want queue_if_no_path functionality in nvme
> > > > mpath we should state it explicitly, such that the people who actually
> > > > need it will use it and have mdraid function correctly again. Also,
> > > > queue_if_no_path really applies when all the paths are gone in the sense
> > > > that they are completely removed; it doesn't apply to controller
> > > > reset/reconnect.
> > > > 
> > > > I agree we should probably have a queue_if_no_path attribute on the
> > > > mpath device, but it doesn't sound right to default it to true given
> > > > that it breaks mdraid stacking on top of it.
> > > 
> > > If you want "queue_if_no_path" behavior, can't you just set really high
> > > reconnect_delay and ctrl_loss_tmo values? That prevents the path from
> > > being deleted while it is unreachable, then restarts IO on the existing
> > > path once the connection is re-established.
> > > 
> > Precisely my thinking.
> > We _could_ add a queue_if_no_path attribute, but we can also achieve the
> > same behaviour by setting the ctrl_loss_tmo value to infinity.
> > Provided we can change it on the fly, though; but if not, that's easily
> > fixed.
> > 
> > In fact, that's what we recommend to our customers to avoid the bug
> > fixed by this patch.
> 
> You can change ctrl_loss_tmo on the fly. How does that address the
> issue? Is the original issue that ctrl_loss_tmo expires for fabrics, or
> a pci unplug (to which ctrl_loss_tmo does not apply)?

PCI aside, the ctrl_loss_tmo parameter can be used to have IO queued on
the head's bio_list whilst no path is available. That's what
"queue_if_no_path" would provide too, right?

The main difference AFAICS is that "ctrl_loss_tmo" applies to individual
paths, whereas "queue_if_no_path" would apply to the "head", but I'm not
sure that detail matters.
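
To make that distinction concrete, here is a toy userspace model (not the
nvme-mpath code, all names invented) of the per-bio decision a head would
make: dispatch if a live path exists, queue while any controller is still
reconnecting (which is what a long ctrl_loss_tmo buys us today), and fail
once the last connection is gone.

	/* toy_head.c - illustrative model only; not the kernel implementation */
	#include <stdbool.h>
	#include <stdio.h>

	enum path_state { PATH_LIVE, PATH_RECONNECTING, PATH_DELETED };

	struct toy_path { enum path_state state; };

	struct toy_head {
		struct toy_path *paths;
		int nr_paths;
		int queued;	/* stand-in for the head's requeue bio_list */
	};

	enum verdict { DISPATCH, QUEUE, FAIL };

	static enum verdict head_submit(struct toy_head *h)
	{
		bool reconnecting = false;
		int i;

		for (i = 0; i < h->nr_paths; i++) {
			if (h->paths[i].state == PATH_LIVE)
				return DISPATCH;	/* usable path: send the IO now */
			if (h->paths[i].state == PATH_RECONNECTING)
				reconnecting = true;	/* ctrl_loss_tmo still running */
		}
		if (reconnecting) {
			h->queued++;			/* "queue_if_no_path"-like behaviour */
			return QUEUE;
		}
		return FAIL;				/* last connection gone */
	}

	int main(void)
	{
		struct toy_path p[2] = { { PATH_RECONNECTING }, { PATH_DELETED } };
		struct toy_head h = { p, 2, 0 };

		printf("verdict=%d queued=%d\n", head_submit(&h), h.queued);
		return 0;
	}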

Regarding PCI, we could add the property there, but disconnecting a
local PCIe device does seem like a different scenario than losing
connection to a remote target.


