[PATCH V4 1/3] driver core: mark device as irq affinity managed if any irq is managed
John Garry
john.garry at huawei.com
Mon Jul 19 00:51:22 PDT 2021
On 15/07/2021 13:08, Ming Lei wrote:
> irq vector allocation with managed affinity may be used by drivers, and
> blk-mq needs this info because managed irqs will be shut down when all
> CPUs in the affinity mask are offline.
>
> The info of using managed irqs is often produced by drivers (PCI
> subsystem, platform devices, ...), and it is consumed by blk-mq, so
> different subsystems are involved in this info flow.
>
> Address this by adding an .irq_affinity_managed field to
> 'struct device'.
>
> Suggested-by: Christoph Hellwig <hch at lst.de>
> Signed-off-by: Ming Lei <ming.lei at redhat.com>
Did you consider that for PCI devices we effectively have this info already:
bool dev_has_managed_msi_irq(struct device *dev)
{
	struct msi_desc *desc;

	list_for_each_entry(desc, dev_to_msi_list(dev), list) {
		if (desc->affinity && desc->affinity->is_managed)
			return true;
	}

	return false;
}
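For illustration, the scan above can be modeled in plain userspace C. The
types below are simplified stand-ins for the kernel's msi_desc machinery
(a singly linked list instead of list_head / dev_to_msi_list()), so this
is a sketch of the logic only, not the real <linux/msi.h> API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel structures; field names chosen
 * to mirror the real ones, but the layout here is an assumption. */
struct irq_affinity_desc {
	bool is_managed;	/* set by the irq core for managed vectors */
};

struct msi_desc {
	struct irq_affinity_desc *affinity;
	struct msi_desc *next;	/* the kernel chains these via list_head */
};

struct device {
	struct msi_desc *msi_list;
};

/* Same idea as the helper above: the device counts as "managed" as soon
 * as any one of its MSI descriptors carries a managed affinity mask. */
static bool dev_has_managed_msi_irq(struct device *dev)
{
	for (struct msi_desc *desc = dev->msi_list; desc; desc = desc->next) {
		if (desc->affinity && desc->affinity->is_managed)
			return true;
	}
	return false;
}
```

The point of the helper is that the PCI/MSI layer already records
per-vector managed state, so a consumer like blk-mq could derive the
flag on demand instead of the device core caching it in a new bit.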
Thanks,
John
> ---
> drivers/base/platform.c | 7 +++++++
> drivers/pci/msi.c | 3 +++
> include/linux/device.h | 1 +
> 3 files changed, 11 insertions(+)
>
> diff --git a/drivers/base/platform.c b/drivers/base/platform.c
> index 8640578f45e9..d28cb91d5cf9 100644
> --- a/drivers/base/platform.c
> +++ b/drivers/base/platform.c
> @@ -388,6 +388,13 @@ int devm_platform_get_irqs_affinity(struct platform_device *dev,
> ptr->irq[i], ret);
> goto err_free_desc;
> }
> +
> + /*
> + * mark the device as irq affinity managed if any irq affinity
> + * descriptor is managed
> + */
> + if (desc[i].is_managed)
> + dev->dev.irq_affinity_managed = true;
> }
>
> devres_add(&dev->dev, ptr);
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index 3d6db20d1b2b..7ddec90b711d 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -1197,6 +1197,7 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> if (flags & PCI_IRQ_AFFINITY) {
> if (!affd)
> affd = &msi_default_affd;
> + dev->dev.irq_affinity_managed = true;
> } else {
> if (WARN_ON(affd))
> affd = NULL;
> @@ -1215,6 +1216,8 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> return nvecs;
> }
>
> + dev->dev.irq_affinity_managed = false;
> +
> /* use legacy IRQ if allowed */
> if (flags & PCI_IRQ_LEGACY) {
> if (min_vecs == 1 && dev->irq) {
> diff --git a/include/linux/device.h b/include/linux/device.h
> index 59940f1744c1..9ec6e671279e 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -569,6 +569,7 @@ struct device {
> #ifdef CONFIG_DMA_OPS_BYPASS
> bool dma_ops_bypass : 1;
> #endif
> + bool irq_affinity_managed : 1;
> };
>
> /**
>
More information about the Linux-nvme mailing list