[PATCH 5/5] nvme/pci: Complete all stuck requests
Keith Busch
keith.busch at intel.com
Thu Feb 23 07:21:40 PST 2017
On Thu, Feb 23, 2017 at 04:06:51PM +0100, Christoph Hellwig wrote:
> I still don't understand it. nvme_dev_disable has no early return,
> and it does the nvme_start_freeze, nvme_wait_freeze and nvme_unfreeze
> calls under exactly the same conditionals:
>
>         if (drain_queue) {
>                 if (shutdown)
>                         nvme_start_freeze(&dev->ctrl);
>                 nvme_stop_queues(&dev->ctrl);
>                 ...
>         }
>
> ..
>
>         if (drain_queue && shutdown) {
>                 nvme_start_queues(&dev->ctrl);
>                 nvme_wait_freeze(&dev->ctrl);
>                 nvme_unfreeze(&dev->ctrl);
>                 nvme_stop_queues(&dev->ctrl);
>         }
>
> so where is the pairing for the unfreeze in nvme_reset_work
> coming from?
I thought this would be non-obvious, so I put this detailed comment just
before the unfreeze:
        /*
         * Waiting for frozen increases the freeze depth. Since we
         * already start the freeze earlier in this function to stop
         * incoming requests, we have to unfreeze after it is frozen to
         * get the depth back to the desired level.
         */
Assuming we start with a freeze depth of 0, nvme_start_freeze takes us
to 1. Then nvme_wait_freeze increases the freeze depth to 2
(blk_mq_freeze_wait is not exported), so we need to unfreeze after the
wait completes to get back to 1. Finally, nvme_reset_work does the last
unfreeze to bring the depth back to 0 so new requests may enter.