[RFC] xen/riscv: per-CPU devid setup for Xen event channel IRQ on IMSIC

Jürgen Groß jgross at suse.com
Tue Apr 28 11:47:07 PDT 2026


On 28.04.26 18:40, Baptiste Le Duc wrote:
> Hi,
> 
> Thanks for your quick response.
> 
> I've implemented Xen event channels as a local interrupt, which works correctly 
> for Xen <-> dom0 communication.
> 
> However, this approach hits a limitation when dom0 needs to notify another guest 
> domain: running in VS-mode, dom0 has no access to the hvip CSR and therefore 
> cannot inject an IRQ_S_SOFTWARE interrupt into another guest directly; only the 
> hypervisor can do that from HS-mode.
> 
> Do you have any ideas on how to handle this?

You need to issue an evtchn hypercall to the Xen hypervisor to send an event
to another guest.

Juergen

> 
> Thanks in advance.
> 
> Regards,
> Baptiste
> 
> On 4/24/26 6:55 PM, Anup Patel wrote:
> 
>> On Fri, Apr 24, 2026 at 10:09 PM Baptiste Le Duc
>> <baptiste.leduc at etik.com> wrote:
>>> Hi,
>>>
>>> While adding Xen/RISC-V support, the guest event channel interrupt is
>>> allocated via irq_of_parse_and_map() against the IMSIC domain and, if
>>> we refer to the ARM implementation, it must
>>> be enabled/disabled per vCPU through enable_percpu_irq() /
>>> disable_percpu_irq() in the CPU hotplug callbacks.
>>>
>>> With IMSIC using handle_edge_irq (the upstream default),
>>> enable_percpu_irq() never clears IRQD_IRQ_DISABLED. That flag is set at
>>> irq_desc allocation time (irqdesc.c) and is only cleared by
>>> irq_startup(), which is called from __setup_irq() only when
>>> irq_settings_can_autoenable() returns true.
>>>
>>> irq_set_percpu_devid() sets IRQ_NOAUTOEN (via irq_set_percpu_devid_flags),
>>> so irq_startup() is intentionally skipped for percpu-devid IRQs.
>>> enable_percpu_irq() calls irq_percpu_enable() which does irq_enable/unmask
>>> on the chip but never touches IRQD_IRQ_DISABLED.
>>>
>>> Result: handle_edge_irq() hits irq_can_handle_actions() → checks
>>> irqd_irq_disabled() → returns false → IRQ silently dropped.
>>>
>>> This was confirmed by logs:
>>>
>>>     XEN_TRACE irq=12 percpu_enable cpu=0 IRQD_DISABLED=1
>>>     XEN_TRACE irq=12 handle_edge DROP IRQD_DISABLED=1 action=...
>>>
>>> What we tried
>>> -------------
>>> - request_irq() works correctly with upstream IMSIC, which uses
>>> handle_edge_irq without irq_set_percpu_devid(), but then only one vCPU
>>> can ever handle the IRQ, which is not what we want.
>>>
>>> - Adding irq_set_percpu_devid() + switching to handle_percpu_devid_irq in
>>> imsic_irq_domain_alloc() fixes the Xen case but breaks all other IMSIC
>>> users (PCI MSI, platform devices) that call request_irq(), since
>>> request_threaded_irq() rejects IRQs marked _IRQ_PER_CPU_DEVID:
>>>
>>>     WARNING: CPU: 0 PID: 1 at kernel/irq/manage.c:2101
>>>     request_threaded_irq+0x80/0x12c
>>>
>>> Therefore, do you have any recommendations on how I should handle this
>>> case?
>>>
>> The RISC-V IMSIC driver does not register per-CPU interrupts;
>> rather, it treats IDs across all CPUs as independent vectors and
>> picks the right vector for a device MSI based on availability and affinity.
>>
>> Only the RISC-V intc driver manages per-CPU interrupts, so the
>> Xen guest event channel interrupt should be a local interrupt
>> managed by the RISC-V intc driver.
>>
>> Regards,
>> Anup
> 
> 



More information about the linux-riscv mailing list