[PATCHv3] nvme: generate uevent once a multipath namespace is operational again
Hannes Reinecke
hare at suse.de
Mon May 17 23:59:09 PDT 2021
On 5/17/21 7:49 PM, Sagi Grimberg wrote:
>
>> When fast_io_fail_tmo is set, I/O will be aborted while recovery is
>> still ongoing. This causes MD to mark the namespace as failed, and
>> no further I/O will be submitted to that namespace.
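>>
>> For reference, fast_io_fail_tmo is a fabrics connect option; a
>> hypothetical example with placeholder target details, assuming an
>> nvme-cli recent enough to expose the flag:
>>
>>   # fail pending I/O after 5s of reconnecting instead of blocking
>>   nvme connect -t tcp -a 192.168.1.10 -s 4420 \
>>     -n nqn.2021-05.io.example:subsys1 --fast-io-fail-tmo=5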
>>
>> However, once recovery succeeds and the namespace becomes
>> operational again, the NVMe subsystem doesn't send a notification,
>> so MD cannot reinstate operation automatically and requires
>> manual interaction.
>>
>> This patch sends a KOBJ_CHANGE uevent per multipathed namespace
>> once the underlying controller transitions to LIVE, allowing automatic
>> MD reassembly with these udev rules:
>>
>> /etc/udev/rules.d/65-md-auto-re-add.rules:
>> SUBSYSTEM!="block", GOTO="md_end"
>>
>> ACTION!="change", GOTO="md_end"
>> ENV{ID_FS_TYPE}!="linux_raid_member", GOTO="md_end"
>> PROGRAM="/sbin/md_raid_auto_readd.sh $devnode"
>> LABEL="md_end"
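>>
>> To check that the event arrives and the rule fires, the block
>> subsystem can be watched during a reconnect (a sketch; nvme0n1 is
>> just an example multipath node):
>>
>>   # prints the KOBJ_CHANGE event and its properties as it happens
>>   udevadm monitor --kernel --property --subsystem-match=block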
>>
>> /sbin/md_raid_auto_readd.sh:
>>
>> #!/bin/bash
>> # bash is required for the ${!MD_VARNAME} indirection below
>>
>> MDADM=/sbin/mdadm
>> DEVNAME=$1
>>
>> # import MD_UUID and friends from the component device
>> export $(${MDADM} --examine --export ${DEVNAME})
>>
>> if [ -z "${MD_UUID}" ]; then
>>     exit 1
>> fi
>>
>> # map the array UUID back to the md device name
>> UUID_LINK=$(readlink /dev/disk/by-id/md-uuid-${MD_UUID})
>> MD_DEVNAME=${UUID_LINK##*/}
>> export $(${MDADM} --detail --export /dev/${MD_DEVNAME})
>> if [ -z "${MD_METADATA}" ] ; then
>>     exit 1
>> fi
>> # only act on a degraded array
>> if [ "$(cat /sys/block/${MD_DEVNAME}/md/degraded)" != 1 ]; then
>>     echo "${MD_DEVNAME}: array not degraded, nothing to do"
>>     exit 0
>> fi
>> MD_STATE=$(cat /sys/block/${MD_DEVNAME}/md/array_state)
>> if [ "${MD_STATE}" != "clean" ] ; then
>>     echo "${MD_DEVNAME}: array state ${MD_STATE}, cannot re-add"
>>     exit 1
>> fi
>> # re-add the device only if mdadm reports it as a spare
>> MD_VARNAME="MD_DEVICE_dev_${DEVNAME##*/}_ROLE"
>> if [ "${!MD_VARNAME}" = "spare" ] ; then
>>     ${MDADM} --manage /dev/${MD_DEVNAME} --re-add ${DEVNAME}
>> fi
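>>
>> For testing, the rule processing can be replayed against a single
>> namespace without waiting for a real reconnect (paths are examples):
>>
>>   # dry-run the udev rules with a simulated 'change' event
>>   udevadm test --action=change /sys/class/block/nvme0n1
>>
>>   # or invoke the helper directly
>>   /sbin/md_raid_auto_readd.sh /dev/nvme0n1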
>
> Is this auto-readd stuff going to util-linux?
>
>>
>> Changes since v2:
>> - Add udev rules example to description
>> Changes since v1:
>> - Use disk_uevent() as suggested by hch
>
> This belongs after the '---' separator..
>
>>
>> Signed-off-by: Hannes Reinecke <hare at suse.de>
>> ---
>> drivers/nvme/host/multipath.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>> index 0551796517e6..ecc99bd5f8ad 100644
>> --- a/drivers/nvme/host/multipath.c
>> +++ b/drivers/nvme/host/multipath.c
>> @@ -100,8 +100,11 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
>>      down_read(&ctrl->namespaces_rwsem);
>>      list_for_each_entry(ns, &ctrl->namespaces, list) {
>> -        if (ns->head->disk)
>> -            kblockd_schedule_work(&ns->head->requeue_work);
>> +        if (!ns->head->disk)
>> +            continue;
>> +        kblockd_schedule_work(&ns->head->requeue_work);
>> +        if (ctrl->state == NVME_CTRL_LIVE)
>> +            disk_uevent(ns->head->disk, KOBJ_CHANGE);
>>      }
>
> I asked this on v1, is this only needed for mpath devices?
Yes; we need to send the KOBJ_CHANGE event on the mpath device as it's
not backed by hardware. The only non-multipathed devices I've seen so
far are PCI devices, where events are generated by the PCI device itself.
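
For reference, the transition is also visible from userspace; with an
example controller name:

  # the uevent is sent once this reads 'live' again
  cat /sys/class/nvme/nvme0/state
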
Cheers,
Hannes
--
Dr. Hannes Reinecke                 Kernel Storage Architect
hare at suse.de                       +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer