[LSF/MM TOPIC] irq affinity handling for high CPU count machines

Ming Lei ming.lei at redhat.com
Thu Feb 1 17:55:35 PST 2018


Hello Hannes,

On Thu, Feb 01, 2018 at 05:20:26PM +0100, Hannes Reinecke wrote:
> On 02/01/2018 04:05 PM, Ming Lei wrote:
> > Hello Hannes,
> > 
> > On Mon, Jan 29, 2018 at 10:08:43AM +0100, Hannes Reinecke wrote:
> >> Hi all,
> >>
> >> here's a topic which came up on the SCSI ML (cf thread '[RFC 0/2]
> >> mpt3sas/megaraid_sas: irq poll and load balancing of reply queue').
> >>
> >> When doing I/O tests on a machine with more CPUs than MSIx vectors
> >> provided by the HBA, we can easily set up a scenario where one CPU is
> >> submitting I/O and another one is completing I/O, which results in the
> >> latter CPU being stuck in the interrupt completion routine essentially
> >> forever, and in the lockup detector kicking in.
> > 
> > Today I was looking at a megaraid_sas related issue and found that
> > pci_alloc_irq_vectors(PCI_IRQ_AFFINITY) is used in the driver, so it looks
> > like each reply queue is handled by more than one CPU when there are more
> > CPUs than MSIx vectors in the system; this spreading is done by the generic
> > irq affinity code, see kernel/irq/affinity.c.
> > 
> > Also, IMO each reply queue may be treated as a blk-mq hw queue, so
> > megaraid may benefit from blk-mq's MQ framework, but one annoying thing is
> > that both the legacy and blk-mq paths need to be handled inside the driver.
> > 
> The megaraid driver is a really strange beast, having layered two
> different interfaces (the 'legacy' MFI interface and the one from
> mpt3sas) on top of each other.
> I had been thinking of converting it to scsi-mq, too (as my mpt3sas
> patch finally went in), but I'm not sure we can benefit from it as
> we're still bound by the HBA-wide tag pool.

Actually the current SCSI_MQ code works in this HBA-wide tag pool mode too;
please see scsi_host_queue_ready(), which is called from scsi_queue_rq(),
and the same goes for scsi_mq_get_budget().

It seems a bit weird for real MQ cases: even though tags are allocated from
per-hctx tag sets, the host-wide queue depth still needs to be respected, so
in the end it behaves just like an HBA-wide tag pool.
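
To make that concrete, here is a rough paraphrase (from memory, not verbatim
kernel code) of the host-wide check every request passes in scsi_queue_rq()
via scsi_host_queue_ready(); no matter which hctx the tag came from,
shost->can_queue caps all of them together:

#include <linux/atomic.h>
#include <scsi/scsi_host.h>

/* Rough paraphrase of the scsi_host_queue_ready() logic, for illustration
 * only.  Even when tags are per-hctx, every dispatch still increments the
 * shared host_busy counter and is bounced once shost->can_queue is reached,
 * so the effective limit is HBA-wide.
 */
static bool host_queue_ready_sketch(struct Scsi_Host *shost)
{
	unsigned int busy = atomic_inc_return(&shost->host_busy) - 1;

	if (shost->can_queue > 0 && busy >= shost->can_queue) {
		/* over the host-wide depth: undo and ask blk-mq to retry */
		atomic_dec(&shost->host_busy);
		return false;
	}

	return true;
}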

That is something which needs to be discussed too.

Also, I remember you posted a patch for sharing tags among hctxs; that
should help with converting the reply queues into scsi_mq/blk_mq hctxs.
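
As a rough sketch (the mydrv_/my_hba names are made up, and I'm assuming the
current 4.15-era pci_alloc_irq_vectors()/blk_mq_pci_map_queues() interfaces),
wiring the reply queues up as hctxs could look something like:

#include <linux/pci.h>
#include <linux/blk-mq-pci.h>
#include <scsi/scsi_host.h>

struct my_hba {				/* made-up driver private data */
	struct pci_dev *pdev;
	struct Scsi_Host *shost;
	int max_reply_queues;
};

/* Allocate one MSI-x vector per reply queue and let the generic affinity
 * code in kernel/irq/affinity.c spread the CPUs over them, then expose one
 * hw queue per vector.
 */
static int mydrv_setup_reply_queues(struct my_hba *hba)
{
	int nvec;

	nvec = pci_alloc_irq_vectors(hba->pdev, 1, hba->max_reply_queues,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvec < 0)
		return nvec;

	hba->shost->nr_hw_queues = nvec;	/* set before scsi_add_host() */
	return 0;
}

/* .map_queues callback in the scsi_host_template: reuse the irq affinity
 * masks so blk-mq submits on the same CPUs that will see the completion.
 */
static int mydrv_map_queues(struct Scsi_Host *shost)
{
	struct my_hba *hba = shost_priv(shost);

	return blk_mq_pci_map_queues(&shost->tag_set, hba->pdev);
}

That keeps the CPU-to-hctx mapping consistent with the irq affinity spread,
but as you say the per-hctx tags only really pay off once the host-wide
limit is shared sensibly among the hctxs.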

> It's on my todo list, albeit pretty far down :-)
> 
> >>
> >> How should these situations be handled?
> >> Should it be made the responsibility of the drivers, ensuring that the
> >> interrupt completion routine is terminated after a certain time?
> >> Should it be made the responsibility of the upper layers?
> >> Should it be the responsibility of the interrupt mapping code?
> >> Can/should interrupt polling be used in these situations?
> > 
> > Yeah, I guess interrupt polling may improve these situations, especially
> > since KPTI introduces some extra cost in interrupt handling.
> > 
> The question is not so much if one should be doing irq polling, but
> rather if we can come up with some guidance or even infrastructure to
> make this happen automatically.
> Having to rely on individual drivers to get this right is probably not
> the best option.

Agree.
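
FWIW, the existing lib/irq_poll.c API already gives drivers most of the
mechanics; a minimal sketch (the mydrv_* helpers are hypothetical, the point
is the per-iteration budget) would be:

#include <linux/interrupt.h>
#include <linux/irq_poll.h>

#define MYDRV_IRQ_POLL_WEIGHT	64	/* made-up per-iteration budget */

struct mydrv_reply_queue {		/* made-up per-queue context */
	struct irq_poll iop;
	/* ... hardware reply ring state ... */
};

/* Hard irq handler: mask further interrupts for this reply queue and hand
 * the completion processing over to softirq context.
 */
static irqreturn_t mydrv_isr(int irq, void *data)
{
	struct mydrv_reply_queue *rpq = data;

	mydrv_disable_queue_intr(rpq);		/* hypothetical helper */
	irq_poll_sched(&rpq->iop);

	return IRQ_HANDLED;
}

/* Poll callback: process at most 'budget' completions per iteration, so no
 * single CPU gets stuck in completion handling forever.
 */
static int mydrv_irqpoll(struct irq_poll *iop, int budget)
{
	struct mydrv_reply_queue *rpq =
		container_of(iop, struct mydrv_reply_queue, iop);
	int done = mydrv_process_replies(rpq, budget);	/* hypothetical */

	if (done < budget) {
		irq_poll_complete(iop);
		mydrv_enable_queue_intr(rpq);	/* hypothetical helper */
	}

	return done;
}

/* at init time: irq_poll_init(&rpq->iop, MYDRV_IRQ_POLL_WEIGHT, mydrv_irqpoll); */

The open question is then, as you say, whether this can live in common code
rather than being done per driver.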

Thanks,
Ming


