[PATCH 1/5] block: don't call blk_mq_delay_run_hw_queue() in case of BLK_STS_RESOURCE
Mike Snitzer
snitzer at redhat.com
Tue Sep 19 16:50:06 PDT 2017
On Tue, Sep 19 2017 at 7:25pm -0400,
Bart Van Assche <Bart.VanAssche at wdc.com> wrote:
> On Wed, 2017-09-20 at 06:44 +0800, Ming Lei wrote:
> > For this issue, it isn't same between SCSI and dm-rq.
> >
> > We don't need to run the queue in dm's .end_io, and the reasoning is
> > simple: if a rerun were truly required there, the symptom would be an
> > I/O hang, not a performance issue.
> >
> > 1) every dm-rq's request is 1:1 mapped to SCSI's request
> >
> > 2) if any mapped SCSI request is not finished (whether in flight,
> > on the requeue list, or elsewhere), there will be a corresponding
> > dm-rq request in flight
> >
> > 3) once the mapped SCSI request is completed, dm-rq's completion
> > path will be triggered and dm-rq's queue will be rerun because of
> > SCHED_RESTART in dm-rq
> >
> > So the hw queue of dm-rq has been run in dm-rq's completion path
> > already, right? Why do we need to do it again in the hot path?
>
> The measurement data in the description of patch 5/5 shows a significant
> performance regression for an important workload, namely random I/O.
> Additionally, the performance improvement for sequential I/O was achieved
> for an unrealistically low queue depth.
So you've ignored Ming's question entirely and instead decided to focus
on previous questions you raised to Ming that he ignored. This is
getting tedious.
Especially given that Ming said the first patch that all this fighting
has been over isn't even required to attain the improvements.
Ming, please retest both your baseline and patched setup with a
queue_depth of >= 32. Also, please do 3 - 5 runs to get an avg and std
dev across the runs.
> Sorry but given these measurement results I don't see why I should
> spend more time on this patch series.
Bart, I've historically genuinely always appreciated your review and
insight. But if your future "review" on this patchset would take the
form shown in this thread then please don't spend more time on it.
Mike
More information about the Linux-nvme
mailing list