[PATCH 1/2] arm64/entry: Fix involuntary preemption exception masking

Thomas Gleixner tglx at kernel.org
Wed Mar 25 08:46:01 PDT 2026


On Wed, Mar 25 2026 at 11:03, Mark Rutland wrote:
> On Sun, Mar 22, 2026 at 12:25:06AM +0100, Thomas Gleixner wrote:
>> The current sequence on entry is:
>> 
>>   // interrupts are disabled by interrupt/exception entry
>>   enter_from_kernel_mode()
>>      irqentry_enter(regs);
>>      mte_check_tfsr_entry();
>>      mte_disable_tco_entry();
>>      daif_inherit(regs);
>>      // interrupts are still disabled
>
> That last comment isn't quite right: we CAN and WILL enable interrupts
> in local_daif_inherit(), if and only if they were enabled in the context
> the exception was taken from.

Ok.

> As mentioned above, when handling an interrupt (rather than a
> synchronous exception), we don't use local_daif_inherit(), and instead
> use a different DAIF function to unmask everything except interrupts.
>
>> which then becomes:
>> 
>>   // interrupts are disabled by interrupt/exception entry
>>   irqentry_enter(regs)
>>      establish_state();
>>      // RCU is watching
>>      arch_irqentry_enter_rcu()
>>         mte_check_tfsr_entry();
>>         mte_disable_tco_entry();
>>         daif_inherit(regs);
>>      // interrupts are still disabled
>>           
>> Which is equivalent versus the MTE/DAIF requirements, no?
>
> As above, we can't use local_daif_inherit() here because we want
> different DAIF masking behavior for entry to interrupts and entry to
> synchronous exceptions. While we could pass some token around to
> determine the behaviour dynamically, that's less clear, more
> complicated, and results in worse code being generated for something we
> know at compile time.

I get it. Duh what a maze.

> If we can leave DAIF masked early on during irqentry_enter(), I don't
> see why we can't leave all DAIF exceptions masked until the end of
> irqentry_enter().

Yes. Entry is not an issue.

> I *think* what would work for us is we could split some of the exit
> handling (including involuntary preemption) into a "prepare" step, as we
> have for return to userspace. That way, arm64 could handle exiting
> something like:
>
> 	local_irq_disable();
> 	irqentry_exit_prepare(); // new, all generic logic
> 	local_daif_mask();
> 	arm64_exit_to_kernel_mode() {
> 		...
> 		irqentry_exit(); // ideally irqentry_exit_to_kernel_mode().
> 		...
> 	}
>
> ... and other architectures can use a combined exit_to_kernel_mode() (or
> whatever we call that), which does both, e.g.
>
> 	// either noinstr, __always_inline, or a macro
> 	void irqentry_prepare_and_exit(void)

That's a bad idea as it would require a tree-wide rename of all
existing irqentry_exit() users.

> 	{
> 		irqentry_exit_prepare();
> 		irqentry_exit();
> 	}

Aside from the naming, that should work.

Thanks,

        tglx
