[PATCH] NVMe: Automatic namespace rescan

Brandon Schulz brandon.schulz at hgst.com
Sun May 17 08:04:35 PDT 2015

Agree with your decision that we probably don't want a bunch of zero
capacity drives if there are gaps in the namespace IDs.  I'd like to see a
version of your original patch get in - we have been carrying a patch
against nvme-legacy internally that does a similar rescan-after-reset but
haven't had the time to prepare a patch against the current tree yet.

If someone deletes a namespace (vs. detaching it) why wouldn't you want to
remove the gendisk as well?  This question is based on your comment in
the original patch: "If namespaces are deleted, this does not go so far as
to delete gendisks; instead, its capacity will be set to 0."  I believe
users would expect to see the device representing the namespace go away if
they deleted the namespace.


On 5/15/15, 4:59 PM, "Keith Busch" <keith.busch at intel.com> wrote:

>On Fri, 15 May 2015, Keith Busch wrote:
>> On Fri, 15 May 2015, Matthew Wilcox wrote:
>>> Pondering if it's worth keeping the list sorted so we can break out
>>> if the namespace isn't in the list?
>> The list is actually already sorted since this doesn't allow gaps in
>> NSIDs. If a namespace is not attached, the driver creates a 0 capacity
>> block device for it and appends it to the list, so it's always in
>> ascending order.
>> Is this a bad idea? Let's say a controller supports 128 namespaces,
>> but only NSID 128 is attached, we'd see 127 zero capacity /dev/nvme#n#
>> block devs.
>Decided we don't want a bunch of zero capacity drives if there are gaps
>in attached NSIDs. There could be millions after all. But it's not so
>simple since the nvmeq's hctx points to hctx from a single namespace's
>request_queue, and that's not right. I'll take a stab at decoupling that
>next week. In the mean time, this patch is dead.
