[PATCH V4 1/3] driver core: mark device as irq affinity managed if any irq is managed
Thomas Gleixner
tglx at linutronix.de
Wed Jul 21 13:14:25 PDT 2021
On Wed, Jul 21 2021 at 09:24, Christoph Hellwig wrote:
> On Wed, Jul 21, 2021 at 09:20:00AM +0200, Thomas Gleixner wrote:
>> > Just walking the list seems fine to me given that this is not a
>> > performance criticial path. But what are the locking implications?
>>
>> At the moment there are none because the list is initialized in the
>> setup path and never modified afterwards. Though that might change
>> sooner than later to fix the virtio wreckage vs. MSI-X.
>
> What is the issue there? Either way, if we keep the helper in the
> IRQ code it should be easy to spot for anyone adding the locking.
https://lore.kernel.org/r/87o8bxcuxv.ffs@nanos.tec.linutronix.de
TLDR: virtio allocates ONE irq on msix_enable() and then, when the guest
actually unmasks another entry (e.g. request_irq()), it tears down the
already allocated one and sets up two. On the third one this repeats ....
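Schematically (not actual virtio code; the struct and helper names below
are made up purely to illustrate the pattern):

	/* Illustration only: stand-ins for the real virtio/MSI-X paths. */
	struct vq_dev {
		unsigned int nr_vectors;	/* vectors currently allocated */
	};

	static void teardown_vectors(struct vq_dev *d)
	{
		d->nr_vectors = 0;		/* free everything allocated so far */
	}

	static int setup_vectors(struct vq_dev *d, unsigned int n)
	{
		d->nr_vectors = n;		/* allocate n vectors from scratch */
		return 0;
	}

	/*
	 * Every unmask of an entry which has no vector behind it yet tears
	 * down all previously allocated vectors and sets up one more than
	 * before - so this repeats for every further entry the guest unmasks.
	 */
	static int on_unmask(struct vq_dev *d, unsigned int entry)
	{
		if (entry < d->nr_vectors)
			return 0;		/* already backed by a vector */

		teardown_vectors(d);
		return setup_vectors(d, entry + 1);
	}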
There are only two options:
1) allocate everything upfront, which is undesired
2) append entries, which might need locking (see the sketch below), but
   I'm still trying to avoid that
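For reference, a minimal sketch of the list walk discussed above. The
helper name is made up; it assumes the msi_list is only written in the
setup path (as noted above), and appending entries per 2) would need
serialization against walkers like this:

	#include <linux/device.h>
	#include <linux/msi.h>

	/* Illustrative helper, not an existing kernel function. */
	static bool device_has_managed_msi_irq(struct device *dev)
	{
		struct msi_desc *desc;
		unsigned int i;

		/*
		 * Lockless only as long as the list is populated once at
		 * setup time and never modified afterwards.
		 */
		for_each_msi_entry(desc, dev) {
			if (!desc->affinity)
				continue;
			for (i = 0; i < desc->nvec_used; i++) {
				if (desc->affinity[i].is_managed)
					return true;
			}
		}
		return false;
	}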
There is another problem vs. vector exhaustion which can't be fixed that
way, but that's a different story.
Thanks,
tglx