[PATCH V4 1/3] driver core: mark device as irq affinity managed if any irq is managed
Christoph Hellwig
hch at lst.de
Wed Jul 21 13:32:59 PDT 2021
On Wed, Jul 21, 2021 at 10:14:25PM +0200, Thomas Gleixner wrote:
> https://lore.kernel.org/r/87o8bxcuxv.ffs@nanos.tec.linutronix.de
>
> TLDR: virtio allocates ONE irq on msix_enable() and then when the guest
> actually unmasks another entry (e.g. request_irq()), it tears down the
> allocated one and sets up two. On the third one this repeats ....
>
> There are only two options:
>
> 1) allocate everything upfront, which is undesired
> 2) append entries, which might need locking, but I'm still trying to
> avoid that
>
> There is another problem vs. vector exhaustion which can't be fixed that
> way, but that's a different story.
FYI, NVMe is similar. We need one IRQ to set up the admin queue,
which is then used to query/set how many I/O queues are supported.
It's just two steps, though, and not unbounded.
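For reference, the two-step pattern looks roughly like this (a sketch, not the actual driver code; nvme_set_queue_count() stands in for the Set Features "Number of Queues" round trip on the admin queue, and error handling is elided):

```c
/* Step 1: one vector is enough to bring up the admin queue. */
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
if (ret < 0)
	return ret;

/* Use the admin queue to ask the controller how many I/O queues
 * it supports (Set Features, Number of Queues). */
nr_io_queues = nvme_set_queue_count(ctrl);	/* assumed helper */

/* Step 2: the one bounded teardown/re-setup, with the real count --
 * unlike the virtio case above, which repeats on every unmask. */
pci_free_irq_vectors(pdev);
ret = pci_alloc_irq_vectors(pdev, 1, nr_io_queues + 1,
			    PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
```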