Question WRT early IRQ/NMI entry code
Mark Rutland
mark.rutland at arm.com
Tue Nov 30 04:50:30 PST 2021
On Tue, Nov 30, 2021 at 12:28:41PM +0100, Nicolas Saenz Julienne wrote:
> Hi All,
Hi Nicolas,
> while going over the IRQ/NMI entry code I've found a small 'inconsistency':
> in the IRQ entry path we inform RCU of the context change *before*
> incrementing the preempt counter, whereas the opposite happens for the NMI
> entry path. This applies to both arm64 and x86[1].
For arm64, the style was copied from the x86 code, and (AFAIK) I had no
particular reason for following either order other than consistency with x86.
> Actually, rcu_nmi_enter() — which is also the main RCU context switch function
> for the IRQ entry path — uses the preempt counter to verify it's not in NMI
> context. So it would make sense to assume all callers have the same updated
> view of the preempt count, which isn't true ATM.
I agree consistency would be nice, assuming there's no issue preventing us from
moving the IRQ preempt_count logic earlier.
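For context, the coupling you describe exists because in_nmi() is just a mask
over the preempt count, so what rcu_nmi_enter() observes depends entirely on
whether the count has already been bumped. A rough userspace sketch of that
relationship (the constants mirror include/linux/preempt.h, but the code is
purely illustrative, not kernel code):

#include <stdio.h>

/* Simplified model of the preempt_count bit layout (include/linux/preempt.h). */
#define HARDIRQ_OFFSET  (1UL << 16)
#define NMI_OFFSET      (1UL << 20)
#define NMI_MASK        (0xfUL << 20)

static unsigned long preempt_count;

/* in_nmi() reduces to masking the NMI bits of the preempt count. */
#define in_nmi()        (preempt_count & NMI_MASK)

int main(void)
{
        printf("before bump: in_nmi()=%d\n", !!in_nmi());
        preempt_count += NMI_OFFSET + HARDIRQ_OFFSET;   /* as __nmi_enter() does */
        printf("after bump:  in_nmi()=%d\n", !!in_nmi());
        return 0;
}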
It sounds like today the ordering is only *required* when entering an NMI, and
we already do the right thing there. Do you see a case where something would go
wrong (or behave differently with the flipped ordering) for IRQs today?
> I'm sure there's an obscure/non-obvious reason for this, right?
TBH I suspect this is mostly oversight / legacy, and likely something we can
tighten up.
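To make the difference concrete, here's a toy userspace model of the two
orderings from the call chains in your footnote below, showing what a
preempt-count-based check (like the in_nmi() test in rcu_nmi_enter()) sees at
the point RCU is notified. Again, this only sketches the ordering; it isn't
the real entry code:

#include <stdio.h>

/* Simplified preempt_count layout, as in include/linux/preempt.h. */
#define HARDIRQ_OFFSET  (1UL << 16)
#define NMI_OFFSET      (1UL << 20)
#define HARDIRQ_MASK    (0xfUL << 16)
#define NMI_MASK        (0xfUL << 20)

static unsigned long preempt_count;

/* Stand-in for rcu_nmi_enter(): report what a preempt-count check sees. */
static void rcu_notify(const char *path)
{
        printf("%s: RCU sees in_nmi()=%d in_hardirq()=%d\n", path,
               !!(preempt_count & NMI_MASK), !!(preempt_count & HARDIRQ_MASK));
}

int main(void)
{
        /* IRQ path today: RCU is told first, preempt count bumped after. */
        preempt_count = 0;
        rcu_notify("IRQ");                      /* irqentry_enter() -> rcu_irq_enter() */
        preempt_count += HARDIRQ_OFFSET;        /* irq_enter_rcu() */

        /* NMI path today: preempt count bumped first, RCU told after. */
        preempt_count = 0;
        preempt_count += NMI_OFFSET + HARDIRQ_OFFSET;   /* __nmi_enter() */
        rcu_notify("NMI");                      /* rcu_nmi_enter() */

        return 0;
}

With the current IRQ ordering RCU is notified while the count is still zero,
whereas in the NMI path the count already reflects NMI and HARDIRQ context by
the time rcu_nmi_enter() runs.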
Thanks,
Mark.
>
> Thanks!
> Nicolas
>
> [1]
> IRQ path:
> -> x86_64 asm (entry_64.S)
> -> irqentry_enter() -> rcu_irq_enter() -> *rcu_nmi_enter()*
> -> run_irq_on_irqstack_cond() -> irq_enter_rcu() -> *preempt_count_add(HARDIRQ_OFFSET)*
> -> // Run IRQ...
>
> NMI path:
> -> x86_64 asm (entry_64.S)
> -> irqentry_nmi_enter() -> __nmi_enter() -> *__preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET)*
> -> *rcu_nmi_enter()*
>
> For arm64, see 'arch/arm64/kernel/entry-common.c'.
>