[PATCH for-next 4/4] nvme-multipath: add multipathing for uring-passthrough commands

Hannes Reinecke hare at suse.de
Wed Jul 13 07:07:22 PDT 2022


On 7/13/22 15:41, Sagi Grimberg wrote:
> 
>>>>>>> Maybe the solution is to just not expose a /dev/ng for the mpath 
>>>>>>> device
>>>>>>> node, but only for bottom namespaces. Then it would be completely
>>>>>>> equivalent to scsi-generic devices.
>>>>>>>
>>>>>>> It just creates an unexpected mix of semantics of best-effort
>>>>>>> multipathing with just path selection, but no requeue/failover...
>>>>>>
>>>>>> Which is exactly the same semantics as SG_IO on the dm-mpath nodes.
>>>>>
>>>>> I view uring passthru somewhat as a different thing than sending SG_IO
>>>>> ioctls to dm-mpath. But it can be argued otherwise.
>>>>>
>>>>> BTW, the only consumer of it that I'm aware of commented that he
>>>>> expects dm-mpath to retry SG_IO, back when dm-mpath retry for SG_IO
>>>>> submission was attempted
>>>>> (https://www.spinics.net/lists/dm-devel/msg46924.html).
>>>>>
>>>>>  From Paolo:
>>>>> "The problem is that userspace does not have a way to direct the 
>>>>> command to a different path in the resubmission. It may not even 
>>>>> have permission to issue DM_TABLE_STATUS, or to access the /dev 
>>>>> nodes for the underlying paths, so without Martin's patches SG_IO 
>>>>> on dm-mpath is basically unreliable by design."
>>>>>
>>>>> I didn't manage to track down any followup after that email though...
>>>>>
>>>> I did; 'twas me who was involved in the initial customer issue 
>>>> leading up to that.
>>>>
>>>> Amongst all the other issues we've found, the prime problem with SG_IO 
>>>> is that it needs to be directed to the 'active' path.
>>>> For this, the device-mapper has a distinct callout (dm_prepare_ioctl), 
>>>> which essentially returns the current active path device. And then 
>>>> the device-mapper core issues the command on that active path.
>>>>
>>>> All nice and good, _unless_ that command triggers an error.
>>>> Normally it'd be intercepted by the dm-multipath end_io handler, and 
>>>> would set the path to offline.
>>>> But as ioctls do not use the normal I/O path the end_io handler is 
>>>> never called, and further SG_IO calls are happily routed down the 
>>>> failed path.
>>>>
>>>> And the customer had to use SG_IO (or, in qemu-speak, LUN 
>>>> passthrough) as his application/filesystem makes heavy use of 
>>>> persistent reservations.
>>>
>>> How did this conclude Hannes?
>>
>> It didn't. The proposed interface got rejected, and now we need to 
>> come up with an alternative solution.
>> Which we haven't found yet.
> 
> Let's assume, for the sake of discussion, that dm-mpath had set a path to be
> offline on ioctl errors: what would qemu do upon this error? Blindly
> retry? Until when? Or would qemu need to learn about the path tables in
> order to know when there is at least one online path in order to retry?
> 
IIRC that was one of the points why it got rejected.
Ideally we would return an errno indicating that the path had failed, 
but further paths are available, so a retry is in order.
Once no paths are available qemu would get a different error 
indicating that all paths have failed.

But we would be overloading existing error numbers with new meanings, or 
even inventing our own error numbers, which makes it rather awkward to use.
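
Purely to illustrate what such an errno-based scheme would have meant 
for a consumer like qemu (the EAGAIN/EHOSTDOWN values below are made-up 
placeholders, not anything that was ever agreed upon):

#include <errno.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

/*
 * Hypothetical sketch only: EAGAIN standing in for "this path failed,
 * but another one is available", EHOSTDOWN for "all paths have failed".
 */
static int sgio_with_retry(int fd, struct sg_io_hdr *hdr, int max_retries)
{
	int i;

	for (i = 0; i < max_retries; i++) {
		if (ioctl(fd, SG_IO, hdr) == 0)
			return 0;		/* submitted; result is in *hdr */
		if (errno == EAGAIN)
			continue;		/* path failed, others remain */
		if (errno == EHOSTDOWN)
			return -1;		/* all paths failed, give up */
		return -1;			/* genuine submission error */
	}
	return -1;
}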

Ideally we would be able to return this as the SG_IO status, as that is 
well capable of expressing these situations. But then we would need to 
parse and/or return the error ourselves, essentially moving sg_io 
functionality into dm-mpath. Not what one wants either.
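
For comparison, this is the kind of distinction the SG_IO header itself 
can express; a rough sketch, with the DID_* host-byte values copied from 
the kernel's SCSI midlayer since they are not exported via <scsi/sg.h>:

#include <scsi/sg.h>

#define DID_OK			0x00
#define DID_TRANSPORT_DISRUPTED	0x0e	/* transient transport/path problem */

/* Classify an SG_IO completion from the returned header into
 * success (0), retryable path error (1) and genuine error (-1). */
static int classify_sgio(const struct sg_io_hdr *hdr)
{
	if (hdr->host_status == DID_TRANSPORT_DISRUPTED)
		return 1;	/* path-related, a retry may succeed */
	if (hdr->host_status != DID_OK || hdr->status != 0)
		return -1;	/* device or protocol error, retry won't help */
	return 0;		/* success */
}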

> What is the model that a passthru consumer needs to follow when
> operating against a mpath device?

The model really is that a passthru consumer needs to deal with these 
error classes:
- No error (obviously)
- I/O error (the error status will not change with a retry)
- Temporary/path-related error (the error status might change with a retry)

Then the consumer can decide whether to invoke a retry (for the last 
class), or whether it should pass that error up, as there may be 
applications which need a quick response time and can handle temporary 
failures (or, in fact, want to be informed about temporary failures).

I.e. the NVMe 'DNR' (Do Not Retry) bit should serve nicely here, keeping 
in mind that we might need to 'fake' an NVMe error status if the 
connection is severed.
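
For NVMe passthru the classification could then be as simple as checking 
that bit; a sketch only, reusing the pt_result enum from above and 
assuming the interface returns the NVMe status word as a positive value 
and local submission errors as negative errnos:

/* DNR is bit 14 of the status word the kernel reports for passthru
 * commands (NVME_SC_DNR in the kernel's <linux/nvme.h>); defined
 * locally as that is not a uapi header. */
#define NVME_STATUS_DNR	0x4000

enum pt_result classify_result(int status)
{
	if (status == 0)
		return PT_OK;
	if (status > 0 && !(status & NVME_STATUS_DNR))
		return PT_PATH;		/* retryable, e.g. a path error */
	/* DNR set, or a negative errno (e.g. a severed connection),
	 * which is where the 'faked' NVMe status would come in. */
	return PT_FATAL;
}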

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare at suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


