[PATCH rfc] nvme: support io stats on the mpath device
Sagi Grimberg
sagi at grimberg.me
Tue Oct 25 08:58:19 PDT 2022
>>> make up the multipath device. Only the low-level driver can do that right now,
>>> so perhaps either call into the driver to get all the block_device parts, or
>>> the gendisk needs to maintain a list of those parts itself.
>>
>> I definitely don't think we want to propagate the device relationship to
>> blk-mq. But a callback into the driver also seems very niche to nvme
>> multipath, and it is also kinda messy to combine calculations like
>> iops/bw/latency accurately, because they depend on the submission
>> distribution to the bottom devices, which we would need to track now.
>>
>> I'm leaning towards just moving forward with this, taking the relatively
>> small hit, and if people absolutely care about the extra latency, then
>> they can disable it altogether (on the upper and/or bottom devices).
>
> So looking at the patches, I'm really not a big fan of the extra
> accounting calls, and especially the start_time field in the
> nvme_request,
Don't love it either.
> and even more so the special start/end calls in all
> the transport drivers.
The end is centralized, and only the start part is sprinkled into
the drivers. I don't think it's that bad.
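
Roughly, the shape is: stash the start time in the nvme_request on
submission, and charge the completion to the mpath node. A minimal
sketch, assuming the bdev_start_io_acct/bdev_end_io_acct helpers (the
names below are illustrative, not necessarily what the patch ends up
with):

static inline void nvme_mpath_start_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;
	struct gendisk *disk = ns->head->disk;

	/*
	 * Charge the I/O to the multipath node instead of the bottom
	 * namespace; a real version would also gate this on
	 * blk_queue_io_stat() and skip passthrough requests.
	 */
	nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
			blk_rq_sectors(rq), req_op(rq), jiffies);
}

static inline void nvme_mpath_end_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;

	bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
			nvme_req(rq)->start_time);
}

The end call can sit in nvme_complete_rq(), which is why only the
start side shows up in the individual transports.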
> the stats sysfs attributes already have the entirely separate
> blk-mq vs bio-based code paths. So I think having a block_device
> operation that replaces part_stat_read_all, which allows nvme to
> iterate over all paths and collect the numbers, would seem
> a lot nicer. There might be some caveats, like having to stash
> away the numbers for disappearing paths, though.
You think this is better? Really? I don't agree with you; I think it's
better to pay a small cost than to do this very specialized thing that
will only ever be used for nvme-mpath.
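
For comparison, here is a hypothetical sketch of that direction. To be
clear, none of this exists: there is no ->get_io_stat block_device
operation today, part_stat_read_all is private to block/, and the
names are made up just to show the shape of it:

/* Hypothetical: sum the stats of all bottom paths for the mpath disk. */
static void nvme_ns_head_get_io_stat(struct gendisk *disk,
		struct disk_stats *stat)
{
	struct nvme_ns_head *head = disk->private_data;
	struct nvme_ns *ns;
	int srcu_idx;

	memset(stat, 0, sizeof(*stat));

	srcu_idx = srcu_read_lock(&head->srcu);
	list_for_each_entry_rcu(ns, &head->list, siblings) {
		struct disk_stats path_stat;
		int i;

		part_stat_read_all(ns->disk->part0, &path_stat);
		for (i = 0; i < NR_STAT_GROUPS; i++) {
			stat->nsecs[i] += path_stat.nsecs[i];
			stat->sectors[i] += path_stat.sectors[i];
			stat->ios[i] += path_stat.ios[i];
			stat->merges[i] += path_stat.merges[i];
		}
		/* io_ticks of concurrently busy paths don't add up cleanly */
		stat->io_ticks += path_stat.io_ticks;
	}
	srcu_read_unlock(&head->srcu, srcu_idx);

	/* would also need to fold in counters stashed from removed paths */
}

Which also shows the problem: numbers like io_ticks don't combine into
anything meaningful without tracking the submission distribution to
the bottom devices, which is exactly the part I'd rather not do.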