[PATCH v12 00/25] Linux RISC-V AIA Support

Björn Töpel bjorn at kernel.org
Wed Feb 7 01:37:00 PST 2024


Anup Patel <apatel at ventanamicro.com> writes:

>> Nope. Same mechanics on x86 -- the cleanup has to be done on the
>> originating core. What I asked was "what about using a timer instead of
>> an IPI?". I think this came up in the last rev as well?
>>
>> Check out commit bdc1dad299bb ("x86/vector: Replace
>> IRQ_MOVE_CLEANUP_VECTOR with a timer callback"). Specifically, see the
>> comment about lost interrupts, and the rationale for keeping the
>> original target active until there's a new interrupt on the new CPU.
>
> Trying a timer is still TBD on my side, because my goal with v12 was
> to implement per-device MSI domains. Let me explore the timer-based
> approach for v13.

OK!
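
For reference, transplanting the x86 approach would look roughly like
the below. This is just a sketch: all the imsic_* names and the
imsic_local_free_id() helper are made up, and the locking is
simplified.

#include <linux/timer.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

extern void imsic_local_free_id(unsigned int id);	/* hypothetical */

struct imsic_vector {
	unsigned int cpu;
	unsigned int local_id;
	struct list_head entry;
};

struct imsic_local_cleanup {
	raw_spinlock_t lock;
	struct list_head head;		/* vectors pending cleanup */
	struct timer_list timer;	/* pinned to this CPU */
};

static DEFINE_PER_CPU(struct imsic_local_cleanup, imsic_cleanup);

static void imsic_cleanup_fn(struct timer_list *t)
{
	struct imsic_local_cleanup *cl = from_timer(cl, t, timer);
	struct imsic_vector *vec, *tmp;

	/*
	 * Runs on the originating core, so no IPI is needed. A vector
	 * is only queued here once the first interrupt has hit the new
	 * CPU, which is how the x86 commit avoids losing in-flight
	 * interrupts.
	 */
	raw_spin_lock(&cl->lock);
	list_for_each_entry_safe(vec, tmp, &cl->head, entry) {
		list_del(&vec->entry);
		imsic_local_free_id(vec->local_id);
	}
	raw_spin_unlock(&cl->lock);
}

/* Called from the affinity-change path instead of sending an IPI. */
static void imsic_queue_cleanup(struct imsic_vector *vec)
{
	struct imsic_local_cleanup *cl = per_cpu_ptr(&imsic_cleanup, vec->cpu);

	raw_spin_lock(&cl->lock);
	list_add_tail(&vec->entry, &cl->head);
	if (!timer_pending(&cl->timer)) {
		cl->timer.expires = jiffies + 1;
		add_timer_on(&cl->timer, vec->cpu);
	}
	raw_spin_unlock(&cl->lock);
}

static void __init imsic_cleanup_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct imsic_local_cleanup *cl = per_cpu_ptr(&imsic_cleanup, cpu);

		raw_spin_lock_init(&cl->lock);
		INIT_LIST_HEAD(&cl->head);
		timer_setup(&cl->timer, imsic_cleanup_fn, TIMER_PINNED);
	}
}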

>> >> I wonder if this cleanup is less intrusive, and you just need to
>> >> process what's on the per-CPU cleanup list instead of dealing with the
>> >> ids_enabled_bitmap? Maybe we can even remove that bitmap as well. The
>> >> chip_data/desc has that information. This would mean that
>> >> imsic_local_priv() would only have the local vectors (chip_data), and
>> >> a cleanup list/timer.
>> >>
>> >> My general comment is that instead of having these global id-tracking
>> >> structures, use the matrix together with some desc/chip_data local
>> >> data, which should be sufficient.
>> >
>> > The "ids_enabled_bitmap", "dummy hwirqs" and private imsic_vectors
>> > are required since the matrix allocator only manages allocation of
>> > per-CPU IDs.
>>
>> The information in ids_enabled_bitmap is/could be inherent in
>> imsic_local_priv.vectors (guess what x86 does... ;-)).
>>
>> Dummy hwirqs could be replaced with the virq.
>>
>> Hmm, it seems like we're talking past each other, or at least I get the
>> feeling I can't get my point across. I'll try to do a quick PoC
>> to show you what I mean. That's probably easier than just talking about
>> it. ...and maybe I'll end up realizing I'm all wrong!
>
> I suggest waiting for my v13 and trying something on top of that;
> otherwise we might duplicate effort.

OK!
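
Until I get to the PoC, this is roughly the shape I have in mind. A
sketch only: the IMSIC_SKETCH_MAX_ID bound and the struct layouts are
made up.

#include <linux/irq.h>

#define IMSIC_SKETCH_MAX_ID	2048	/* made-up bound */

/* One of these per allocated interrupt, stored as chip_data. */
struct imsic_vector {
	unsigned int cpu;
	unsigned int local_id;
};

/*
 * Per-CPU state: a non-NULL slot *is* "enabled", so the global
 * ids_enabled_bitmap becomes redundant.
 */
struct imsic_local_priv {
	struct imsic_vector *vectors[IMSIC_SKETCH_MAX_ID];
};

static struct imsic_vector *imsic_irq_get_vector(struct irq_data *d)
{
	return irq_data_get_irq_chip_data(d);
}

I.e. the matrix hands out the per-CPU ids, and everything else lives
in the vector hanging off the descriptor.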

>> > I did not see any PCIe or platform device requiring this kind of
>> > reservation. Any examples?
>>
>> It's not a requirement. Some devices allocate a gazillion interrupts
>> (NICs with many QoS queues, e.g.), but only activate a subset (via
>> request_irq()). A system using this kind of device might run out of
>> interrupts.
>
> I don't see why this wouldn't be possible currently.

Again, this is something we can improve on later. But this
implementation activates the interrupt at allocation time, no?
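
To be concrete about the pattern that makes this matter, the
driver-side code looks something like this (sketch; the example_*
names are made up and error unwinding is trimmed):

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t example_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_setup_irqs(struct pci_dev *pdev)
{
	int i, ret, nvec;

	/* A NIC might ask for one vector per hardware queue... */
	nvec = pci_alloc_irq_vectors(pdev, 1, 128, PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;

	/*
	 * ...but only request_irq() a handful up front. With
	 * reservation mode, the remaining ~120 vectors would not pin
	 * down per-CPU ids until they are actually activated.
	 */
	for (i = 0; i < min(nvec, 8); i++) {
		ret = request_irq(pci_irq_vector(pdev, i),
				  example_handler, 0, "example", pdev);
		if (ret)
			return ret;
	}
	return 0;
}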

>> Problems you run into once you leave the embedded world, pretty much.
>>
>> >> * Handle managed interrupts
>> >
>> > Any examples of managed interrupts in the RISC-V world?
>>
>> E.g. all NVMe drives: nvme_setup_irqs(), and I'd assume contemporary
>> netdev drivers use it too. Typically devices with per-CPU queues.
>
> We have tested with NVMe devices, e1000e, VirtIO-net, etc., and I did
> not see any issues.
>
> We can always add new features as separate incremental series, as long
> as there is a clear use case backed by real-world devices.

Agreed. Let's not feature creep.
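
For whenever someone does pick it up: the managed pattern I mean is
roughly what nvme_setup_irqs() ends up doing. A sketch, with a
made-up example_* name:

#include <linux/pci.h>
#include <linux/interrupt.h>

static int example_setup_managed(struct pci_dev *pdev, unsigned int nr_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* non-managed admin vector, NVMe-style */
	};

	/*
	 * The core spreads the queue vectors over the CPUs and marks
	 * them managed: their affinity is fixed, and they are shut
	 * down/restarted on CPU hot-unplug/plug, which the irqchip
	 * has to cope with.
	 */
	return pci_alloc_irq_vectors_affinity(pdev, 2, nr_queues + 1,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
}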


Björn


