[PATCH v5 1/3] x86/mm/tlb: Make enter_lazy_tlb() always inline on x86

Thomas Gleixner tglx at linutronix.de
Mon Dec 15 07:42:13 PST 2025


On Mon, Dec 15 2025 at 03:09, Xie Yuanbin wrote:
> enter_lazy_tlb() on x86 is short enough, and is called in context
> switching, which is the hot code path.
>
> Make enter_lazy_tlb() always inline on x86 to optimize performance.
>
> Signed-off-by: Xie Yuanbin <qq570070308 at gmail.com>
> Reviewed-by: Rik van Riel <riel at surriel.com>
> Reported-by: kernel test robot <lkp at intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202511091959.kfmo9kPB-lkp@intel.com/
> Closes: https://lore.kernel.org/oe-kbuild-all/202511092219.73aMMES4-lkp@intel.com/
> Closes: https://lore.kernel.org/oe-kbuild-all/202511100042.ZklpqjOY-lkp@intel.com/

These Reported-by and Closes tags are just wrong. This is a new patch
and the robot reported failures against earlier versions. The robot
report is very clear about that:

  "If you fix the issue in a separate patch/commit (i.e. not just a new version of
   the same patch/commit), kindly add following tags
     Reported-by:...
     Closes:..."

No?

> +/*
> + * Please ignore the name of this function.  It should be called
> + * switch_to_kernel_thread().

And why is it not renamed then?

> + *
> + * enter_lazy_tlb() is a hint from the scheduler that we are entering a

We enter a kernel thread? AFAIK the metaverse has been canceled.

> + * kernel thread or other context without an mm.  Acceptable implementations
> + * include doing nothing whatsoever, switching to init_mm, or various clever
> + * lazy tricks to try to minimize TLB flushes.
> + *
> + * The scheduler reserves the right to call enter_lazy_tlb() several times
> + * in a row.  It will notify us that we're going back to a real mm by

It will notify us by sending email or what?

> + * calling switch_mm_irqs_off().
> + */
>  #define enter_lazy_tlb enter_lazy_tlb
> -extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
> +#ifndef MODULE
> +static __always_inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
> +{
> +	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
> +		return;
> +
> +	this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
> +}

Please move the '#define enter_....' under the inline function. That's
way simpler to read.
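
Something like the below - just a sketch of the ordering based on the
hunk quoted above; whatever sits on the MODULE side of that #ifndef is
not visible in the quote and is simply elided here:

  #ifndef MODULE
  static __always_inline void enter_lazy_tlb(struct mm_struct *mm,
                                             struct task_struct *tsk)
  {
          /* Nothing to do if this CPU is already running on init_mm. */
          if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
                  return;

          /* Go lazy; switch_mm_irqs_off() brings us back to a real mm. */
          this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
  }
  /* Tell the generic code that the architecture provides its own version. */
  #define enter_lazy_tlb enter_lazy_tlb
  ...
  #endif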

Thanks,

        tglx


