[PATCHv3] nvme-mpath: delete disk after last connection

Sagi Grimberg sagi at grimberg.me
Tue May 4 20:54:14 BST 2021


>> As stated in the v3 review this is an incompatible change.  We'll need
>> the queue_if_no_path attribute first, and default it to on to keep
>> compatibility.
>>
> 
> That is what I tried the last time, but the direction I got was to treat 
> both NVMe-PCI and NVMe-oF identically:
> (https://lore.kernel.org/linux-nvme/34e5c178-8bc4-68d3-8374-fbc1b451b6e8@grimberg.me/) 

Yes, I'm not sure I understand your comment, Christoph. This addresses an
issue with mdraid where hot unplug+replug does not restore the device to
the raid group (PCI and fabrics alike), where before multipath this used
to work.

queue_if_no_path is a dm-multipath feature, so I'm not entirely clear
what the concern is. mdraid on nvme (PCI/fabrics) used to work a certain
way; with the introduction of nvme-mpath that behavior was broken (as far
as I understand from Hannes).
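For reference, this is how the feature being referred to is enabled in
dm-multipath today, via multipath.conf (or at runtime with a dmsetup
message). This is only an illustration of the existing dm-multipath knob,
not of anything nvme-mpath currently exposes:

```conf
# /etc/multipath.conf (dm-multipath, shown for comparison only)
defaults {
    # Queue I/O indefinitely when all paths are lost, instead of
    # failing it; equivalent to features "1 queue_if_no_path".
    no_path_retry queue
}
```

At runtime the same behavior can be toggled per map, e.g.
`dmsetup message mpatha 0 "queue_if_no_path"` to enable it and
`dmsetup message mpatha 0 "fail_if_no_path"` to disable it.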

My thinking is that if we want queue_if_no_path functionality in nvme
mpath, we should state it explicitly so that the people who actually
need it can enable it and have mdraid function correctly again. Also,
queue_if_no_path really applies only when all the paths are gone in the
sense that they are completely removed; it doesn't apply to controller
reset/reconnect.

I agree we should probably have a queue_if_no_path attribute on the
mpath device, but it doesn't sound right to default it to true given
that it breaks mdraid stacking on top of it.
