Re: [PATCH v2] irqchip/sifive-plic: ensure interrupt is enabled before EOI

Yan Zheng(严政) zhengyan at asrmicro.com
Mon Jul 22 00:36:55 PDT 2024


> On Mon, Jun 24, 2024 at 11:35:23AM +0000, zhengyan wrote:
> > RISC-V PLIC cannot "end-of-interrupt" (EOI) disabled interrupts, as
> > explained in the description of Interrupt Completion in the PLIC spec:
> > "The PLIC signals it has completed executing an interrupt handler by
> > writing the interrupt ID it received from the claim to the
> > claim/complete register. The PLIC does not check whether the
> > completion ID is the same as the last claim ID for that target. If
> > the completion ID does not match an interrupt source that *is
> > currently enabled* for the target, the completion is silently ignored."
> >
> > Commit 9c92006b896c ("irqchip/sifive-plic: Enable interrupt if needed
> > before EOI") ensured that the interrupt is enabled before EOI when
> > IRQD_IRQ_DISABLED is set.
> >
> > Commit 69ea463021be ("irqchip/sifive-plic: Fixup EOI failed when
> > masked") ensured that EOI succeeds by enabling the interrupt first,
> > before EOI.
> >
> > Commit a1706a1c5062 ("irqchip/sifive-plic: Separate the enable and mask
> > operations") removed the interrupt-enabling code from the previous
> > commit, because it assumes that the interrupt should already be
> > enabled at the point of EOI.
> >
> > However, a corner case is still missed when SMP is enabled: if the
> > affinity of an interrupt is moved from one CPU to another while the
> > original CPU is handling it, the original CPU performs the EOI after
> > its IE bit has already been disabled by plic_set_affinity().
> >
> > For example, with the broadcast tick active, cpu0 is about to respond
> > and cpu1 is next:
> > 1. cpu0 responds to the timer irq, reads the claim register and runs
> >    the timer ISR.
> > 2. During the timer ISR the next event is set: tick_broadcast_set_event
> >    -> irq_set_affinity -> ... -> plic_set_affinity -> plic_irq_enable.
> > 3. plic_set_affinity disables cpu0's IE bit and enables cpu1's.
> > 4. cpu0 writes the claim/complete register to finish the irq, but since
> >    cpu0's IE bit is disabled, the write is ignored and the interrupt is
> >    left active in the PLIC.
> >
> > So this patch ensures that cannot happen.
> >
> > Signed-off-by: zhengyan <zhengyan at asrmicro.com>
> > ---
> >  drivers/irqchip/irq-sifive-plic.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > index 9e22f7e378f5..815ce8aa28f1 100644
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -149,8 +149,10 @@ static void plic_irq_mask(struct irq_data *d)
> >  static void plic_irq_eoi(struct irq_data *d)
> >  {
> >  	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
> > +	void __iomem *reg = handler->enable_base + (d->hwirq / 32) * sizeof(u32);
> > +	u32 hwirq_mask = 1 << (d->hwirq % 32);
> >
> > -	if (unlikely(irqd_irq_disabled(d))) {
> > +	if (unlikely((readl(reg) & hwirq_mask) == 0)) {
> >  		plic_toggle(handler, d->hwirq, 1);
> >  		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
> >  		plic_toggle(handler, d->hwirq, 0);
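
To make the race in the commit message concrete, here is a small stand-alone C
model of the claim/complete rule quoted above (struct plic_model, claim() and
complete() are invented names for illustration; this is only a sketch of the
assumed hardware behaviour, not kernel code). Replaying steps 1-4 shows the
completion write from cpu0 being silently ignored, leaving the interrupt stuck
active:

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS   2
#define NR_IRQS   32
#define TIMER_IRQ 10

struct plic_model {
	bool enable[NR_CPUS][NR_IRQS];  /* per-context interrupt enable bits */
	bool active[NR_IRQS];           /* claimed but not yet completed     */
};

/* Simplified claim: mark the source as in service on behalf of 'cpu'. */
static unsigned int claim(struct plic_model *p, int cpu, unsigned int hwirq)
{
	(void)cpu;                      /* the real claim register is per context */
	p->active[hwirq] = true;
	return hwirq;
}

/* Completion is silently ignored if the source is disabled for this target. */
static void complete(struct plic_model *p, int cpu, unsigned int hwirq)
{
	if (p->enable[cpu][hwirq])
		p->active[hwirq] = false;
}

int main(void)
{
	static struct plic_model p;     /* zero-initialized                     */
	unsigned int id;

	p.enable[0][TIMER_IRQ] = true;  /* timer irq initially routed to cpu0   */
	id = claim(&p, 0, TIMER_IRQ);   /* 1. cpu0 claims and runs the ISR      */
	p.enable[0][TIMER_IRQ] = false; /* 2./3. plic_set_affinity moves IE     */
	p.enable[1][TIMER_IRQ] = true;  /*       from cpu0 to cpu1 in the ISR   */
	complete(&p, 0, id);            /* 4. EOI on cpu0: silently ignored     */

	printf("irq %u still active: %s\n", id,
	       p.active[id] ? "yes (stuck)" : "no");
	return 0;
}

Compiled and run, this prints "irq 10 still active: yes (stuck)", which is the
state the patch above avoids by checking the enable bit before the completion
write.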
> 
> I think this patch is fine.
> 
> But I don't like reading hardware registers in the interrupt hot path; it
> may slow things down. Also, this patch doesn't allow moving the if condition
> out of plic_irq_eoi() into the enabling function (I have been thinking about
> doing that for some time, but have been too lazy to get to it).
> 
> I *may* have something better.
> 
> From the specification:
> "The PLIC signals it has completed executing an interrupt handler by writing
> the interrupt ID it received from the claim to the claim/complete register.
> The PLIC **does not check** whether the completion ID is the same as the
> last claim ID for that target. If the completion ID does not match an interrupt
> source that is currently enabled for the target, the completion is silently
> ignored."
> 
> Note what I "highlighted": the irq number written back does not have to
> match the irq number last claimed for the CPU. If I interpret this correctly,
> this means *any* claim/complete register can be used to complete the
> interrupt.
> 
> So, my idea: since setting the irq affinity still leaves at least one CPU
> with the interrupt enabled, the claim/complete register of that enabled CPU
> can be used to complete the interrupt (instead of the one originally used
> for claiming). This would avoid the extra hardware register access in the
> hot path. It also allows the further optimization of moving the if condition
> out of the EOI function.
> 
> Something like the patch below. To apply this one cleanly, another patch
> must be applied first:
> https://lore.kernel.org/linux-riscv/20240703072659.1427616-1-namcao at linutronix.de/
> 
> What I am still a bit unsure about is whether my interpretation of the
> specification is correct. The patch works on my VisionFive 2 board, so the
> question is whether it relies on "undefined behavior" or whether this is
> really what the spec means. Drew Barbier <drew at sifive.com> seems to be
> the one who wrote that part. Do you mind confirming my interpretation?
> 
> Best regards,
> Nam
> 
I confirm it: the patch works well on my platform as well.
It looks like "*any* claim/complete register can be used to complete" is correct.
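
As a sanity check of that reading, here is a tiny stand-alone C model of the
idea (the names complete_write, set_affinity and complete_ctx are invented for
illustration and only mirror the shape of the patch below; this is a sketch of
the assumed hardware behaviour, not kernel code). set_affinity() moves the IE
bit and records which context should receive future completion writes, so the
EOI never targets a context where the source was just disabled:

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 2
#define NR_IRQS 32

static bool enable[NR_CPUS][NR_IRQS]; /* per-context interrupt enable bits */
static bool active[NR_IRQS];          /* claimed but not yet completed     */
static int  complete_ctx[NR_IRQS];    /* models priv->complete[hwirq]      */

/* Completion is silently ignored if the source is disabled for the target. */
static void complete_write(int cpu, unsigned int hwirq)
{
	if (enable[cpu][hwirq])
		active[hwirq] = false;
}

/* Models plic_set_affinity(): move the IE bit and redirect completions. */
static void set_affinity(unsigned int hwirq, int old_cpu, int new_cpu)
{
	enable[old_cpu][hwirq] = false;
	enable[new_cpu][hwirq] = true;
	complete_ctx[hwirq] = new_cpu;
}

int main(void)
{
	unsigned int hwirq = 10;

	enable[0][hwirq] = true;                    /* initially routed to cpu0 */
	complete_ctx[hwirq] = 0;

	active[hwirq] = true;                       /* cpu0 claims the irq      */
	set_affinity(hwirq, 0, 1);                  /* moved to cpu1 in the ISR */
	complete_write(complete_ctx[hwirq], hwirq); /* EOI lands on cpu1's reg  */

	printf("irq %u still active: %s\n", hwirq,
	       active[hwirq] ? "yes" : "no");       /* prints "no"              */
	return 0;
}

With the redirection in place, the completion always lands on a context that
still has the source enabled, which is what the per-hwirq complete pointer in
the patch below achieves.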

> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index f30bdb94ceeb..117ff9f1c982 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -69,6 +69,7 @@ struct plic_priv {
>  	void __iomem *regs;
>  	unsigned long plic_quirks;
>  	unsigned int nr_irqs;
> +	void __iomem **complete;
>  	unsigned long *prio_save;
>  };
> 
> @@ -149,13 +150,14 @@ static void plic_irq_mask(struct irq_data *d)
>  static void plic_irq_eoi(struct irq_data *d)
>  {
>  	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
> +	struct plic_priv *priv = irq_data_get_irq_chip_data(d);
> 
>  	if (unlikely(irqd_irq_disabled(d))) {
>  		plic_toggle(handler, d->hwirq, 1);
> -		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
> +		writel(d->hwirq, priv->complete[d->hwirq]);
>  		plic_toggle(handler, d->hwirq, 0);
>  	} else {
> -		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
> +		writel(d->hwirq, priv->complete[d->hwirq]);
>  	}
>  }
> 
> @@ -164,6 +166,7 @@ static int plic_set_affinity(struct irq_data *d,
>  			     const struct cpumask *mask_val, bool force)
>  {
>  	struct plic_priv *priv = irq_data_get_irq_chip_data(d);
> +	struct plic_handler *handler;
>  	struct cpumask new_mask;
> 
>  	cpumask_and(&new_mask, mask_val, &priv->lmask);
> @@ -180,6 +183,9 @@ static int plic_set_affinity(struct irq_data *d,
>  	if (!irqd_irq_disabled(d))
>  		plic_irq_enable(d);
> 
> +	handler = per_cpu_ptr(&plic_handlers, cpumask_first(&new_mask));
> +	priv->complete[d->hwirq] = handler->hart_base + CONTEXT_CLAIM;
> +
>  	return IRQ_SET_MASK_OK_DONE;
>  }
>  #endif
> @@ -516,6 +522,10 @@ static int plic_probe(struct platform_device *pdev)
>  	priv->prio_save = devm_bitmap_zalloc(dev, nr_irqs, GFP_KERNEL);
>  	if (!priv->prio_save)
>  		return -ENOMEM;
> +
> +	priv->complete = devm_kcalloc(dev, 1 + nr_irqs, sizeof(*priv->complete), GFP_KERNEL);
> +	if (!priv->complete)
> +		return -ENOMEM;
> 
>  	for (i = 0; i < nr_contexts; i++) {
>  		error = plic_parse_context_parent(pdev, i, &parent_hwirq, &cpu);
> @@ -577,6 +587,12 @@ static int plic_probe(struct platform_device *pdev)
>  			writel(1, priv->regs + PRIORITY_BASE +
>  				  hwirq * PRIORITY_PER_ID);
>  		}
> +
> +		if (!nr_handlers) {
> +			for (hwirq = 1; hwirq <= nr_irqs; hwirq++)
> +				priv->complete[hwirq] = handler->hart_base + CONTEXT_CLAIM;
> +		}
> +
>  		nr_handlers++;
>  	}
> 

