[PATCH] NVMe: Automatic namespace rescan
Keith Busch
keith.busch at intel.com
Fri May 15 08:06:43 PDT 2015
On Fri, 15 May 2015, Matthew Wilcox wrote:
> On Thu, May 14, 2015 at 02:01:47PM -0600, Keith Busch wrote:
>> @@ -307,9 +307,16 @@ static void async_req_completion(struct nvme_queue *nvmeq, void *ctx,
>
> I don't think we want the dev_warn() and the dev_info() for the same event.
> How about ...
>
> 	if (status != NVME_SC_SUCCESS)
> 		return;
>
> 	switch (result & 0xff07) {
> 	case NVME_AER_NOTICE_NS_CHANGED:
> 		dev_info(nvmeq->q_dmadev, "rescanning\n");
> 		schedule_work(&nvmeq->dev->scan_work);
> 	default:
> 		dev_warn(nvmeq->q_dmadev, "async event result %08x\n", result);
> 	}
>
> with NVME_AER_NOTICE_NS_CHANGED being an enum with value 0x0002
Brilliant!
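A userspace sketch of the suggested handler, for anyone following along. The enum value and the rescan flag are stand-ins for the kernel's definitions and the schedule_work() call; the printf calls stand in for dev_info()/dev_warn(). In the AER completion result dword, bits 2:0 carry the event type and bits 15:8 the event information, hence the 0xff07 mask; type 2 (notice) with info 0 is the "namespace attribute changed" notice.

```c
#include <stdio.h>

/* Stand-in for the kernel's enum; 0x0002 = event type 2 (notice),
 * event info 0 (namespace attribute changed). */
enum {
	NVME_AER_NOTICE_NS_CHANGED = 0x0002,
};

static void handle_aer_result(unsigned int result, int *rescan)
{
	/* Mask keeps event type (bits 2:0) and event info (bits 15:8). */
	switch (result & 0xff07) {
	case NVME_AER_NOTICE_NS_CHANGED:
		printf("rescanning\n");
		*rescan = 1;	/* stands in for schedule_work() */
		/* deliberate fall-through: every event is also logged */
	default:
		printf("async event result %08x\n", result);
	}
}
```

Note the missing break is intentional: the NS_CHANGED case falls into default so that every async event, handled or not, leaves a log line with the raw result.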
>> +static struct nvme_ns *nvme_find_ns(struct nvme_dev *dev, unsigned nsid)
>> +{
>> +	struct nvme_ns *ns;
>> +
>> +	list_for_each_entry(ns, &dev->namespaces, list)
>> +		if (ns->ns_id == nsid)
>> +			return ns;
>> +	return NULL;
>> +}
>
> Pondering if it's worth keeping the list sorted so we can break out early
> if the namespace isn't in the list?
The list is actually already sorted, since the driver doesn't allow gaps
in NSIDs: if a namespace is not attached, the driver creates a
zero-capacity block device for it and appends it to the list, so the
list is always in ascending order.
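Given that ordering, the early-exit Matthew suggested is a one-line change. Here is a minimal userspace model; the struct layout and the plain pointer walk are illustrative stand-ins for the kernel's list_head and list_for_each_entry(), not the driver's actual code.

```c
#include <stddef.h>

/* Illustrative model of the driver's namespace list, kept in
 * ascending NSID order. */
struct nvme_ns {
	unsigned ns_id;
	struct nvme_ns *next;
};

static struct nvme_ns *nvme_find_ns(struct nvme_ns *head, unsigned nsid)
{
	struct nvme_ns *ns;

	for (ns = head; ns; ns = ns->next) {
		if (ns->ns_id == nsid)
			return ns;
		if (ns->ns_id > nsid)	/* sorted: nsid cannot appear later */
			return NULL;
	}
	return NULL;
}
```

With no gaps in NSIDs the early exit rarely fires in practice (a missing NSID means we walk to the end anyway only when it's beyond the last entry), so it's a cheap safeguard rather than a big win.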
Is this a bad idea? If a controller supports 128 namespaces but only
NSID 128 is attached, we'd see 127 zero-capacity /dev/nvme#n# block
devices.