[PATCH] nvme: fix handling single range discard request
Ming Lei
ming.lei at redhat.com
Tue Mar 7 06:24:33 PST 2023
On Tue, Mar 07, 2023 at 02:31:48PM +0200, Sagi Grimberg wrote:
>
>
> On 3/7/23 14:14, Ming Lei wrote:
> > On Tue, Mar 07, 2023 at 01:39:27PM +0200, Sagi Grimberg wrote:
> > >
> > >
> > > On 3/6/23 23:49, Ming Lei wrote:
> > > > On Mon, Mar 06, 2023 at 04:21:08PM +0200, Sagi Grimberg wrote:
> > > > >
> > > > >
> > > > > On 3/4/23 01:13, Ming Lei wrote:
> > > > > > When investigating a customer report about a warning in nvme_setup_discard,
> > > > > > we observed that the controller (nvme/tcp) actually exposes
> > > > > > queue_max_discard_segments(req->q) == 1.
> > > > > >
> > > > > > Obviously the current code can't handle this situation, since discard
> > > > > > bios are merged for contiguity just like normal RW requests.
> > > > > >
> > > > > > Fix the issue by building the range from the request's sector/nr_sectors
> > > > > > directly.
> > > > > >
> > > > > > Fixes: b35ba01ea697 ("nvme: support ranged discard requests")
> > > > > > Signed-off-by: Ming Lei <ming.lei at redhat.com>
> > > > > > ---
> > > > > > drivers/nvme/host/core.c | 28 +++++++++++++++++++---------
> > > > > > 1 file changed, 19 insertions(+), 9 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > > > > index c2730b116dc6..d4be525f8100 100644
> > > > > > --- a/drivers/nvme/host/core.c
> > > > > > +++ b/drivers/nvme/host/core.c
> > > > > > @@ -781,16 +781,26 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
> > > > > >  		range = page_address(ns->ctrl->discard_page);
> > > > > >  	}
> > > > > >  
> > > > > > -	__rq_for_each_bio(bio, req) {
> > > > > > -		u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
> > > > > > -		u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
> > > > > > -
> > > > > > -		if (n < segments) {
> > > > > > -			range[n].cattr = cpu_to_le32(0);
> > > > > > -			range[n].nlb = cpu_to_le32(nlb);
> > > > > > -			range[n].slba = cpu_to_le64(slba);
> > > > > > +	if (queue_max_discard_segments(req->q) == 1) {
> > > > > > +		u64 slba = nvme_sect_to_lba(ns, blk_rq_pos(req));
> > > > > > +		u32 nlb = blk_rq_sectors(req) >> (ns->lba_shift - 9);
> > > > > > +
> > > > > > +		range[0].cattr = cpu_to_le32(0);
> > > > > > +		range[0].nlb = cpu_to_le32(nlb);
> > > > > > +		range[0].slba = cpu_to_le64(slba);
> > > > > > +		n = 1;
> > > > > > +	} else {
> > > > > > +		__rq_for_each_bio(bio, req) {
> > > > > > +			u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
> > > > > > +			u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
> > > > > > +
> > > > > > +			if (n < segments) {
> > > > > > +				range[n].cattr = cpu_to_le32(0);
> > > > > > +				range[n].nlb = cpu_to_le32(nlb);
> > > > > > +				range[n].slba = cpu_to_le64(slba);
> > > > > > +			}
> > > > > > +			n++;
> > > > > >  		}
> > > > > > -		n++;
> > > > > >  	}
> > > > > >  
> > > > > >  	if (WARN_ON_ONCE(n != segments)) {
> > > > >
> > > > >
> > > > > Maybe just set segments to min(blk_rq_nr_discard_segments(req),
> > > > > queue_max_discard_segments(req->q)) and let the existing code do
> > > > > its thing?
> > > >
> > > > Which existing code do you mean that applies the min()?
> > >
> > > Was referring to this:
> > > --
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index 3345f866178e..dbc402587431 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -781,6 +781,7 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
> > >  		range = page_address(ns->ctrl->discard_page);
> > >  	}
> > >  
> > > +	segments = min(segments, queue_max_discard_segments(req->q));
> >
> > That can't work.
> >
> > In case of queue_max_discard_segments(req->q) == 1, the request can
> > still have more than one bio, since the normal contiguity merge is
> > applied to discard IOs.
>
> Ah, I see, the bios are contiguous though, right?
Yes, the merge is just like normal RW.
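To illustrate with a toy userspace program (not kernel code; the bio
count and helper names here are made up for the example): the request
can carry several contiguous bios while the queue allows only one
discard segment, so clamping segments with min() doesn't stop the
per-bio loop, and n ends up larger than segments:

#include <stdio.h>

#define MIN(a, b)	((a) < (b) ? (a) : (b))

int main(void)
{
	/* assumed example: three contiguous bios merged into one request */
	int nr_bios = 3;
	int max_discard_segments = 1;	/* what the controller exposes */

	/* the suggested clamp */
	int segments = MIN(nr_bios, max_discard_segments);

	/* __rq_for_each_bio() would still walk every bio */
	int n = 0;
	for (int i = 0; i < nr_bios; i++)
		n++;

	/* n == 3 but segments == 1: WARN_ON_ONCE(n != segments) fires */
	printf("n=%d segments=%d\n", n, segments);
	return 0;
}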
> We could add a contiguity check in the loop and conditionally
> increment n, but that would probably be more complicated...
That would be more complicated than this patch, and the same pattern has
already been applied in virtio-blk.
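For completeness, a toy userspace model of the new single-range path
(assumed values; nvme_sect_to_lba() is reduced here to the shift it
performs, converting 512-byte block layer sectors into namespace LBAs):

#include <stdio.h>
#include <stdint.h>

/* 512-byte block layer sectors -> namespace LBAs */
static uint64_t sect_to_lba(unsigned lba_shift, uint64_t sector)
{
	return sector >> (lba_shift - 9);
}

int main(void)
{
	unsigned lba_shift = 12;	/* assumed: 4KiB logical blocks */
	uint64_t rq_pos = 2048;		/* blk_rq_pos(): start, in 512B sectors */
	uint32_t rq_sectors = 64;	/* blk_rq_sectors(): length, in 512B sectors */

	/* the whole merged request becomes one discard range */
	uint64_t slba = sect_to_lba(lba_shift, rq_pos);	/* 2048 >> 3 = 256 */
	uint32_t nlb = rq_sectors >> (lba_shift - 9);	/* 64 >> 3 = 8 LBAs */

	printf("slba=%llu nlb=%u\n", (unsigned long long)slba, nlb);
	return 0;
}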
Thanks,
Ming