nvme-pci: Fix multiple races in nvme_setup_io_queues()

Keith Busch kbusch at kernel.org
Tue Jun 15 13:53:18 PDT 2021


On Mon, Jun 14, 2021 at 11:26:55PM -0700, Casey Chen wrote:
> (Please ignore the previous email, the call paths comparison shown in
> commit message should look better if you copy then paste in a code
> editor)

Your email client also mangles your patch, rendering it unable to apply.
If you're able to set up 'git send-email' from your development machine,
that will always format correctly for mailing list patch consumption.

I am aware of the race condition you've described. I never bothered with
it because of the unusual circumstances required to hit it, but since you
have identified a test case, I agree it's time we address it.

> ---
>  drivers/nvme/host/pci.c | 33 +++++++++++++++++++++++++++------
>  1 file changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 3aa7245a505f..81e53aaaa77c 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1590,8 +1590,9 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
>   goto release_cq;
> 
>   nvmeq->cq_vector = vector;
> - nvme_init_queue(nvmeq, qid);
> 
> + mutex_lock(&dev->shutdown_lock);

There doesn't seem to be a reason to wait for the lock here. A
mutex_trylock() should be fine; simply abandon queue initialization
if locking fails.

