[PATCHv3] nvme: generate uevent once a multipath namespace is operational again
Hannes Reinecke
hare at suse.de
Tue May 18 11:09:46 PDT 2021
On 5/18/21 8:00 PM, Sagi Grimberg wrote:
>
>>>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>>>> index 0551796517e6..ecc99bd5f8ad 100644
>>>> --- a/drivers/nvme/host/multipath.c
>>>> +++ b/drivers/nvme/host/multipath.c
>>>> @@ -100,8 +100,11 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
>>>>  	down_read(&ctrl->namespaces_rwsem);
>>>>  	list_for_each_entry(ns, &ctrl->namespaces, list) {
>>>> -		if (ns->head->disk)
>>>> -			kblockd_schedule_work(&ns->head->requeue_work);
>>>> +		if (!ns->head->disk)
>>>> +			continue;
>>>> +		kblockd_schedule_work(&ns->head->requeue_work);
>>>> +		if (ctrl->state == NVME_CTRL_LIVE)
>>>> +			disk_uevent(ns->head->disk, KOBJ_CHANGE);
>>>>  	}
>>>
>>> I asked this on v1, is this only needed for mpath devices?
>>
>> Yes; we need to send the KOBJ_CHANGE event on the mpath device as it's
>> not backed by hardware. The only non-multipathed devices I've seen so
>> far are PCI devices where events are generated by the PCI device itself.
>
> And for fabrics?
No events whatsoever.
Hence this patch.
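
For reference, the KOBJ_CHANGE event sent by disk_uevent() can be observed
from userspace on the kernel uevent netlink socket, which is how udev and
multipath tooling would notice that the nvmeXnY head disk is usable again.
Below is a minimal standalone listener sketch (illustrative only, not part
of this patch; it typically needs root and simply filters on "change@"):

/* Illustrative uevent listener: prints "change@<devpath>" lines received
 * on the kernel uevent netlink socket, e.g. when the multipath namespace
 * becomes live again after reconnect.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

int main(void)
{
	struct sockaddr_nl addr = {
		.nl_family = AF_NETLINK,
		.nl_groups = 1,		/* kernel uevent multicast group */
	};
	char buf[4096];
	int fd;

	fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC,
		    NETLINK_KOBJECT_UEVENT);
	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("uevent socket");
		return 1;
	}

	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf) - 1, 0);

		if (len <= 0)
			break;
		buf[len] = '\0';
		/* payload starts with "ACTION@DEVPATH", followed by
		 * NUL-separated KEY=VALUE pairs; only print change events */
		if (!strncmp(buf, "change@", 7))
			printf("%s\n", buf);
	}
	close(fd);
	return 0;
}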
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare at suse.de +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)