[PATCH V2] nvme: free pre-allocated queue if create ioq goes wrong

Keith Busch keith.busch at intel.com
Thu Jan 18 03:31:14 PST 2018


On Thu, Jan 18, 2018 at 07:25:06PM +0900, Minwoo Im wrote:
> On Thu, Jan 18, 2018 at 2:27 PM, jianchao.wang
> <jianchao.w.wang at oracle.com> wrote:
> > Hi Minwoo
> 
> > Think of the following scenario:
> > nvme_reset_work
> >   -> nvme_setup_io_queues
> >     -> nvme_create_io_queues
> >       -> nvme_free_queues
> >   -> nvme_kill_queues
> >     -> blk_set_queue_dying   // just freezes the queue here, but does not wait for it to be drained.
> >                                 no new requests can come in, but there may still be residual requests in the blk-mq queues.
> >     -> blk_mq_unquiesce_queue
> >
> > the queues are _unquiesced_ here, so the residual requests will be queued
> > and go through nvme_queue_rq, where the freed nvme_queue structure will be accessed.
> > :)
> >
> > Thanks
> > Jianchao
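
That sequence is the concern. To make the ordering concrete, here is a
rough user-space model of it; every name below is an illustrative
stand-in, not the driver's code:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct hw_queue {			/* stands in for struct nvme_queue */
	int qid;
};

static struct hw_queue *queues[4];	/* stands in for dev->queues[] */
static bool unquiesced;

static void free_queues(void)		/* models nvme_free_queues() */
{
	for (int i = 0; i < 4; i++) {
		free(queues[i]);
		queues[i] = NULL;	/* the model clears the pointer so it
					 * can detect the stale access below */
	}
}

static void kill_queues(void)		/* models nvme_kill_queues() */
{
	/* blk_set_queue_dying() starts the freeze but does not drain;
	 * unquiescing then lets residual requests dispatch. */
	unquiesced = true;
}

static void queue_rq(int i)		/* models nvme_queue_rq() */
{
	if (!unquiesced)
		return;
	if (queues[i])
		printf("dispatch to qid %d\n", queues[i]->qid);
	else
		printf("qid %d already freed: this dispatch would be a use-after-free\n", i);
}

int main(void)
{
	for (int i = 0; i < 4; i++) {
		queues[i] = malloc(sizeof(*queues[i]));
		queues[i]->qid = i;
	}
	free_queues();	/* reset work tears the queues down ... */
	kill_queues();	/* ... then the queues get unquiesced ... */
	queue_rq(1);	/* ... and a residual request dispatches */
	return 0;
}
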
> 
> Hi Jianchao,
> 
> First of all, I really appreciate you letting me know about this case.
> It seems that nothing updates the actual nr_hw_queues value or frees
> the hctxs after nvme_kill_queues().
> If you don't mind, would you please tell me where the hctxs are freed
> after nvme_kill_queues()?

The API doesn't let us set nr_hw_queues to 0. We'd have to free the
tagset at that point, but we don't free it until the last open reference
is dropped. I can't recall why that's necessary, but I'll stare at this
a bit longer to see if it makes sense. Either way, the memory the driver
is holding onto is not really a big deal.
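
To illustrate the lifetime rule I mean, here is a rough user-space
sketch of reference-counted teardown; plain C stand-ins, not the actual
blk-mq tag set code:

#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>

struct tagset {				/* stands in for the blk-mq tag set */
	atomic_int refs;
	int nr_hw_queues;		/* cannot be shrunk to 0 while in use */
};

static struct tagset *tagset_get(struct tagset *ts)
{
	atomic_fetch_add(&ts->refs, 1);
	return ts;
}

static void tagset_put(struct tagset *ts)
{
	/* Only the last put actually releases the memory. */
	if (atomic_fetch_sub(&ts->refs, 1) == 1) {
		printf("last reference dropped, freeing tag set\n");
		free(ts);
	}
}

int main(void)
{
	struct tagset *ts = malloc(sizeof(*ts));

	atomic_init(&ts->refs, 1);	/* the driver's own reference */
	ts->nr_hw_queues = 4;

	tagset_get(ts);			/* an open handle pins the memory */
	tagset_put(ts);			/* the driver lets go: one opener left */
	tagset_put(ts);			/* the last close finally frees it */
	return 0;
}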


