[PATCHv3] nvme: generate uevent once a multipath namespace is operational again

Sagi Grimberg sagi at grimberg.me
Tue May 18 12:04:25 PDT 2021


>>>>>>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>>>>>>> index 0551796517e6..ecc99bd5f8ad 100644
>>>>>>> --- a/drivers/nvme/host/multipath.c
>>>>>>> +++ b/drivers/nvme/host/multipath.c
>>>>>>> @@ -100,8 +100,11 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
>>>>>>>         down_read(&ctrl->namespaces_rwsem);
>>>>>>>         list_for_each_entry(ns, &ctrl->namespaces, list) {
>>>>>>> -        if (ns->head->disk)
>>>>>>> -            kblockd_schedule_work(&ns->head->requeue_work);
>>>>>>> +        if (!ns->head->disk)
>>>>>>> +            continue;
>>>>>>> +        kblockd_schedule_work(&ns->head->requeue_work);
>>>>>>> +        if (ctrl->state == NVME_CTRL_LIVE)
>>>>>>> +            disk_uevent(ns->head->disk, KOBJ_CHANGE);
>>>>>>>         }
>>>>>>
>>>>>> I asked this on v1: is this only needed for mpath devices?
>>>>>
>>>>> Yes; we need to send the KOBJ_CHANGE event on the mpath device as it's
>>>>> not backed by hardware. The only non-multipathed devices I've seen so
>>>>> far are PCI devices where events are generated by the PCI device
>>>>> itself.
>>>>
>>>> And for fabrics?
>>>
>>> No events whatsoever.
>>> Hence this patch.
>>
>> Non-multipath fabrics, I meant.
> 
> I know. As I said, I've never seen them. Did you?
> 
> In fact, I wouldn't be surprised if that opened a completely
> different can of worms.

I've seen such, but I'm fine with ignoring them...
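
For anyone who wants to verify the new behaviour from userspace: the
KOBJ_CHANGE event this patch emits on the multipath head disk shows up
on the kernel uevent netlink socket, so it can be watched with
"udevadm monitor --kernel" or with a small listener. Below is a
minimal, illustrative sketch (not part of the patch); it assumes only
the standard NETLINK_KOBJECT_UEVENT interface, and the program name
and output format are made up for the example.

	/* uevent-watch.c: print KOBJ_CHANGE uevents (illustration only) */
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <linux/netlink.h>

	int main(void)
	{
		struct sockaddr_nl addr = {
			.nl_family = AF_NETLINK,
			.nl_pid = getpid(),
			.nl_groups = 1,	/* kernel uevent multicast group */
		};
		char buf[4096];
		int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);

		if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
			return 1;

		for (;;) {
			ssize_t len = recv(fd, buf, sizeof(buf) - 1, 0);

			if (len <= 0)
				break;
			buf[len] = '\0';
			/* first string in the message is "<action>@<devpath>" */
			if (!strncmp(buf, "change@", 7))
				printf("KOBJ_CHANGE on %s\n", buf + 7);
		}
		close(fd);
		return 0;
	}

With this (or udevadm monitor) running, a change event for the
nvmeXnY head disk node should appear once the controller transitions
back to LIVE and the requeue work is kicked.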


