[PATCH 5/5] nvme/pci: Complete all stuck requests

Keith Busch keith.busch at intel.com
Tue Feb 21 07:57:04 PST 2017


On Mon, Feb 20, 2017 at 11:05:15AM +0100, Christoph Hellwig wrote:
> > > > +		 * If we are resuming from suspend, the queue was set to freeze
> > > > +		 * to prevent blk-mq's hot CPU notifier from getting stuck on
> > > > +		 * requests that entered the queue that NVMe had quiesced. Now
> > > > +		 * that we are resuming and have notified blk-mq of the new h/w
> > > > +		 * context queue count, it is safe to unfreeze the queues.
> > > > +		 */
> > > > +		if (was_suspend)
> > > > +			nvme_unfreeze(&dev->ctrl);
> > > 
> > > And this change I don't understand at all.  It doesn't seem to pair
> > > up with anything else in the patch.
> > 
> > If we had done a controller shutdown, as would happen on a system suspend,
> > the resume needs to restore the queue freeze depth. That's all this
> > is doing.
> 
> I've spent tons of times trying to understand this, but still fail
> to.  Where is the nvme_start_freeze / nvme_wait_freeze that this
> pairs with?

This is for suspend/resume. The freeze is started during the suspend
phase, and the unfreeze is done on resume. Power management calls
nvme_suspend, which calls nvme_dev_disable with 'suspend == true'; that
increments the freeze depth. Power management later calls nvme_resume,
which queues the reset work. The reset work observes 'was_suspend ==
true' and calls nvme_unfreeze, which pairs with the earlier freeze.
