hch's native NVMe multipathing [was: Re: [PATCH 1/2] Don't blacklist nvme]
Mike Snitzer
snitzer at redhat.com
Fri Feb 17 06:43:47 PST 2017
On Fri, Feb 17 2017 at 4:04am -0500,
hch at infradead.org <hch at infradead.org> wrote:
> On Thu, Feb 16, 2017 at 01:21:29PM -0500, Mike Snitzer wrote:
> > multipath-tools has tables that specify all the defaults for a given
> > target backend. NVMe will just be yet another.
>
> No, if we get things right it won't. ALUA already got rid of most
> of the parameters people would have to set under normal conditions,
> and I plan to make sure the NVMe equivalent will do it for all
> parameters. I am active in the NVMe working group and will do my
> best to get there. There are a few other folks here who are more or
> less active there as well (Keith, Martin, Jens for example), so I
> think we have a chance.
>
> That being said, Keith is right that we'll always have odd setups
> where we need to override things, and we will have to provide tunables
> for that. It's no different from any other kernel subsystem in that regard.
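For reference, the multipath-tools "tables" I'm referring to are just
per-backend device stanzas (the built-in hwtable, overridable via
multipath.conf). A hypothetical NVMe entry would look roughly like the
following; the values are purely illustrative, not an actual shipped
default:

devices {
    device {
        # illustrative defaults only, not the real hwtable entry
        vendor                 "NVME"
        product                ".*"
        path_grouping_policy   "multibus"
        path_selector          "round-robin 0"
        path_checker           "directio"
        no_path_retry          "queue"
    }
}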
Before ALUA fixed all that vendor-specific fragmentation, there was the
even worse fragmentation where different vendors pushed multipathing
into their FC drivers. James correctly pushed them toward a generic
solution (and DM multipath was born). If every transport implements its
own multipathing, we'll end up with a more generic, yet very similar,
fragmentation problem.
But if your native NVMe multipathing really is factored such that the
actual IO fast path is implemented in block core, with transport-specific
hooks called out to as needed, then you've simply reimplemented DM
multipath in block core.
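To make that concrete, here is a purely hypothetical sketch of what such
a factoring would look like. None of these names or types exist in the
kernel; they are invented for illustration. The shape is essentially what
DM multipath already has with its path-selector and hardware-handler
callouts:

#include <linux/bio.h>
#include <linux/blkdev.h>

struct mpath_dev;   /* hypothetical per-device multipath state */

/* per-transport hooks the generic fast path would call out to */
struct mpath_transport_ops {
        /* pick a path for this I/O (cf. dm-mpath's path_selector) */
        struct block_device *(*select_path)(struct mpath_dev *md,
                                            struct bio *bio);
        /* react to a path failure (cf. dm-mpath's hardware handler) */
        void (*fail_path)(struct mpath_dev *md, struct block_device *path);
};

/* generic fast path living in block core */
static void mpath_submit(struct mpath_dev *md,
                         const struct mpath_transport_ops *ops,
                         struct bio *bio)
{
        struct block_device *path = ops->select_path(md, bio);

        bio->bi_bdev = path;    /* reissue the bio on the chosen path */
        submit_bio(bio);
}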
It's a pretty weird place to invest _so_ much energy before you've fully
explored whether DM multipath support for NVMe really is unworkable. But
I digress.