[LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers
Hannes Reinecke
hare at suse.de
Wed Jan 18 07:39:19 PST 2017
On 01/18/2017 04:16 PM, Johannes Thumshirn wrote:
> On Wed, Jan 18, 2017 at 05:14:36PM +0200, Sagi Grimberg wrote:
>>
>>> Hannes just spotted this:
>>> static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>> const struct blk_mq_queue_data *bd)
>>> {
>>> [...]
>>> __nvme_submit_cmd(nvmeq, &cmnd);
>>> nvme_process_cq(nvmeq);
>>> spin_unlock_irq(&nvmeq->q_lock);
>>> return BLK_MQ_RQ_QUEUE_OK;
>>> out_cleanup_iod:
>>> nvme_free_iod(dev, req);
>>> out_free_cmd:
>>> nvme_cleanup_cmd(req);
>>> return ret;
>>> }
>>>
>>> So we're draining the CQ on submit. This of course makes polling for
>>> completions in the IRQ handler rather pointless, as we already did it
>>> in the submission path.
>>
>> I think you missed:
>> http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007
>
> I indeed did, thanks.
>
But it doesn't help.
We're still having to wait for the first interrupt, and if we're really
fast that's the only completion we have to process.
Try this:
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b4b32e6..e2dd9e2 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -623,6 +623,8 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
}
__nvme_submit_cmd(nvmeq, &cmnd);
spin_unlock(&nvmeq->sq_lock);
+ disable_irq_nosync(nvmeq_irq(nvmeq));
+ irq_poll_sched(&nvmeq->iop);
return BLK_MQ_RQ_QUEUE_OK;
out_cleanup_iod:
nvme_free_iod(dev, req);
That should avoid the first interrupt, and with a bit of luck reduce the
number of interrupts _drastically_.
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare at suse.de +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)