nvme/pcie hot plug results in /dev name change

Keith Busch kbusch at kernel.org
Tue Feb 14 08:17:15 PST 2023


On Tue, Feb 14, 2023 at 08:04:22AM +0800, Ming Lei wrote:
> On Mon, Feb 13, 2023 at 09:32:35AM -0700, Keith Busch wrote:
> > On Mon, Feb 13, 2023 at 04:01:03PM +0200, Sagi Grimberg wrote:
> > > > Also not sure if this is going to work out, but it looks like a good start to
> > > > me.
> > > > 
> > > > Instead of the user pinning the virtual device so that it never goes
> > > > away, though, I considered a user-tunable "device missing delay"
> > > > parameter to debounce link events. New IOs would be deferred to the
> > > > requeue_list while the timer is active, and then EIO'ed if the timer
> > > > expires without a successful LIVE controller attachment. The use cases
> > > > I'm considering are short bounces from transient link resets, so I'd
> > > > expect timers to range from a few seconds to maybe a minute.
> > > 
> > > Isn't this equivalent to dm-mpath queue_if_no_path or no_path_timeout ?
> > 
> > Similar, but generic to non-multipath devices.
> >  
> > > We can keep the mpath device around, but if not, what is the desired
> > > behavior from the upper layers?
> > 
> > I don't think we're looking for any behavioral changes in the upper layers.
>  
> That also means that whether or not the nvme mpath layer is added, the
> upper layers still have to handle this kind of failure, so what
> difference does the added nvme-mpath make?

I don't understand. There's no difference here for the upper layers. It just
changes the timing of when a disconnect starts failing IO. The upper layers
can keep doing what they're already doing in either case.
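
To make that concrete, here is a rough sketch of what such a debounce could
look like. None of this is from a real patch: the nvme_missing_* names, the
atomic flag, and the hook points are illustrative assumptions only, and a
real implementation would live in the nvme core state machine.

/*
 * Rough sketch only: a per-controller "device missing delay" that
 * debounces transient link drops.  All names here are made up for
 * illustration and do not come from a posted patch.
 */
#include <linux/atomic.h>
#include <linux/blk-mq.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

struct nvme_missing_ctx {
	struct delayed_work	expire_work;	/* fires when the grace period ends */
	atomic_t		missing;	/* set while the link is down and the timer runs */
	unsigned int		delay_secs;	/* the user tunable */
};

static void nvme_missing_expire_work(struct work_struct *work)
{
	struct nvme_missing_ctx *ctx = container_of(to_delayed_work(work),
					struct nvme_missing_ctx, expire_work);

	/*
	 * No LIVE controller reattached in time: stop holding IO.  Clearing
	 * ->missing makes the dispatch path below fail requests with EIO.
	 * A real patch would also kick the queues here so deferred requests
	 * get redispatched and observe the error.
	 */
	atomic_set(&ctx->missing, 0);
}

/* Hot path, called from ->queue_rq() while the controller is disconnected. */
static blk_status_t nvme_missing_queue_rq(struct nvme_missing_ctx *ctx)
{
	if (atomic_read(&ctx->missing)) {
		/*
		 * Grace period still running: defer instead of failing.
		 * The proposal parks IO on the requeue_list; returning
		 * BLK_STS_RESOURCE is the closest stock blk-mq idiom and
		 * makes the block layer hold and retry the request.
		 */
		return BLK_STS_RESOURCE;
	}
	/* Timer expired without a successful reattach: fail fast. */
	return BLK_STS_IOERR;
}

/* Link-down/hot-remove hook: arm (or re-arm) the grace period. */
static void nvme_missing_link_down(struct nvme_missing_ctx *ctx)
{
	atomic_set(&ctx->missing, 1);
	mod_delayed_work(system_wq, &ctx->expire_work, ctx->delay_secs * HZ);
}

/* Controller went LIVE again in time: cancel the timer, resume IO. */
static void nvme_missing_link_up(struct nvme_missing_ctx *ctx)
{
	cancel_delayed_work_sync(&ctx->expire_work);
	atomic_set(&ctx->missing, 0);
}

static void nvme_missing_init(struct nvme_missing_ctx *ctx, unsigned int secs)
{
	INIT_DELAYED_WORK(&ctx->expire_work, nvme_missing_expire_work);
	atomic_set(&ctx->missing, 0);
	ctx->delay_secs = secs;
}

A bounce that resolves within delay_secs is invisible to the upper layers;
anything longer degenerates to the fail-with-EIO behavior they already
handle today, just delayed by the tunable.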


