[PATCH] arm64: Force the use of CNTVCT_EL0 in __delay()
Ben Horgan
ben.horgan at arm.com
Mon Feb 23 07:14:04 PST 2026
Hi Marc,
On 2/23/26 14:31, Marc Zyngier wrote:
> Hi Ben,
>
> On Mon, 23 Feb 2026 11:16:32 +0000,
> Ben Horgan <ben.horgan at arm.com> wrote:
>>
>> Hi Marc,
>>
>> On Fri, Feb 13, 2026 at 02:16:19PM +0000, Marc Zyngier wrote:
>>> Quentin forwards a report from Hyesoo Yu, describing an interesting
>>> problem with the use of WFxT in __delay() when a vcpu is loaded and
>>> that KVM is *not* in VHE mode (either nVHE or hVHE).
>>>
>>> In this case, CNTVOFF_EL2 is set to a non-zero value to reflect the
>>> state of the guest virtual counter. At the same time, __delay() is
>>> using get_cycles() to read the counter value, which is indirected to
>>> reading CNTPCT_EL0.
>>>
>>> The core of the issue is that WFxT is using the *virtual* counter,
>>> while the kernel is using the physical counter, and that the offset
>>> introduces a really bad discrepancy between the two.
>>>
>>> Fix this by forcing the use of CNTVCT_EL0, making __delay() consistent
>>> irrespective of the value of CNTVOFF_EL2.
>>>
>>> Reported-by: Hyesoo Yu <hyesoo.yu at samsung.com>
>>> Reported-by: Quentin Perret <qperret at google.com>
>>> Reviewed-by: Quentin Perret <qperret at google.com>
>>> Fixes: 7d26b0516a0df ("arm64: Use WFxT for __delay() when possible")
>>> Signed-off-by: Marc Zyngier <maz at kernel.org>
>>> Link: https://lore.kernel.org/r/ktosachvft2cgqd5qkukn275ugmhy6xrhxur4zqpdxlfr3qh5h@o3zrfnsq63od
>>> Cc: stable at vger.kernel.org
>>> ---
>>> arch/arm64/lib/delay.c | 19 +++++++++++++++----
>>> 1 file changed, 15 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/arm64/lib/delay.c b/arch/arm64/lib/delay.c
>>> index cb2062e7e2340..d02341303899e 100644
>>> --- a/arch/arm64/lib/delay.c
>>> +++ b/arch/arm64/lib/delay.c
>>> @@ -23,9 +23,20 @@ static inline unsigned long xloops_to_cycles(unsigned long xloops)
>>> return (xloops * loops_per_jiffy * HZ) >> 32;
>>> }
>>>
>>> +/*
>>> + * Force the use of CNTVCT_EL0 in order to have the same base as WFxT.
>>> + * This avoids some annoying issues when CNTVOFF_EL2 is not reset to 0 on a
>>> + * KVM host running at EL1 until we do a vcpu_put() on the vcpu. When
>>> + * running at EL2, the effective offset is always 0.
>>> + *
>>> + * Note that userspace cannot change the offset behind our back either,
>>> + * as the vcpu mutex is held as long as KVM_RUN is in progress.
>>> + */
>>> +#define __delay_cycles() __arch_counter_get_cntvct_stable()
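For illustration, the mismatch the commit message describes can be modelled in a small userspace sketch (all numeric values invented; architecturally, CNTVCT = CNTPCT - CNTVOFF when the offset applies):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch, not kernel code. Given the current physical count,
 * the guest's virtual offset and a requested delay, return how long a
 * WFxT-based wait would actually last when the deadline was (wrongly)
 * computed from the physical counter, as the pre-fix __delay() did. */
static uint64_t wfxt_actual_wait(uint64_t cntpct, uint64_t cntvoff,
				 uint64_t delay)
{
	uint64_t deadline = cntpct + delay;	/* deadline from CNTPCT */
	uint64_t cntvct = cntpct - cntvoff;	/* WFxT compares with CNTVCT */

	return deadline - cntvct;		/* overshoots by cntvoff */
}
```

With a zero offset the wait is exactly `delay`; with a vcpu loaded and a non-zero CNTVOFF_EL2, the wait overshoots by the whole offset, which is the discrepancy the patch fixes by reading CNTVCT_EL0 directly.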
>>
>> I'm seeing this CONFIG_DEBUG_PREEMPT warning, see below, when running 7.0-rc1 on
>> FVP Base RevC. I haven't tried bisecting but it looks to be introduced by this
>> change.
>>
>> The calls are:
>>
>> __this_cpu_read()
>> erratum_handler()
>> arch_timer_reg_read_stable()
>> __arch_counter_get_cntvct_stable()
>> __delay()
>>
>> This silences the warning:
>>
>> diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
>> index f5794d50f51d..f07e4efa0d2b 100644
>> --- a/arch/arm64/include/asm/arch_timer.h
>> +++ b/arch/arm64/include/asm/arch_timer.h
>> @@ -24,14 +24,14 @@
>> #define has_erratum_handler(h) \
>> ({ \
>> const struct arch_timer_erratum_workaround *__wa; \
>> - __wa = __this_cpu_read(timer_unstable_counter_workaround); \
>> + __wa = raw_cpu_read(timer_unstable_counter_workaround); \
>> (__wa && __wa->h); \
>> })
>>
>> #define erratum_handler(h) \
>> ({ \
>> const struct arch_timer_erratum_workaround *__wa; \
>> - __wa = __this_cpu_read(timer_unstable_counter_workaround); \
>> + __wa = raw_cpu_read(timer_unstable_counter_workaround); \
>> (__wa && __wa->h) ? ({ isb(); __wa->h;}) : arch_timer_##h; \
>> })
>
> It does indeed silence it, but that's IMO the wrong thing to do since
> you can end up calling a workaround helper on the wrong CPU if
Agreed. This just hides the problem.
> preempted. If you look at how things were done before this patch, we
> had:
>
> get_cycles() -> arch_timer_read_counter() -> arch_counter_get_cntvct_stable()
>
> Crucially, arch_counter_get_cntvct_stable() does disable preemption,
> and we should preserve it. Something like this:
>
> diff --git a/arch/arm64/lib/delay.c b/arch/arm64/lib/delay.c
> index d02341303899e..25fb593f95b0c 100644
> --- a/arch/arm64/lib/delay.c
> +++ b/arch/arm64/lib/delay.c
> @@ -32,7 +32,16 @@ static inline unsigned long xloops_to_cycles(unsigned long xloops)
> * Note that userspace cannot change the offset behind our back either,
> * as the vcpu mutex is held as long as KVM_RUN is in progress.
> */
> -#define __delay_cycles() __arch_counter_get_cntvct_stable()
> +static cycles_t __delay_cycles(void)
> +{
> + cycles_t val;
> +
> + preempt_disable();
> + val = __arch_counter_get_cntvct_stable();
> + preenpt_enable();
> +
> + return val;
> +}
Modulo the typo (preenpt), this looks correct and I see no warnings.
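For reference, the shape of that fix can be modelled in a userspace sketch (the toy_* names and the explicit preempt counter are invented for illustration; this is not kernel code):

```c
#include <assert.h>
#include <stdint.h>

static int preempt_count;		/* models the kernel preempt counter */
static uint64_t counter = 1000;		/* models CNTVCT_EL0 */

static void toy_preempt_disable(void) { preempt_count++; }
static void toy_preempt_enable(void)  { preempt_count--; }

/* Models __arch_counter_get_cntvct_stable(): the per-CPU workaround
 * lookup is only valid with preemption off, mirrored by the assert. */
static uint64_t toy_get_cntvct_stable(void)
{
	assert(preempt_count > 0);	/* caller must pin the CPU */
	return counter;
}

/* Shape of the proposed __delay_cycles(): bracket the stable read in a
 * preemption-off critical section so the workaround lookup and the
 * counter read happen on the same CPU. */
static uint64_t toy_delay_cycles(void)
{
	uint64_t val;

	toy_preempt_disable();
	val = toy_get_cntvct_stable();
	toy_preempt_enable();

	return val;
}
```

This is also why the raw_cpu_read() change above only hides the warning: it removes the check without removing the possibility of migrating between the lookup and the read.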
>
> void __delay(unsigned long cycles)
> {
>
> The question is whether there is a material benefit in replicating the
> arch_timer_read_counter() indirection for the virtual counter in order
> to not pay the price of preempt_disable() when we're on a non-broken
> system (hopefully the vast majority of implementations).
I'm unsure of the tradeoffs here.
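The indirection being weighed could look something like this sketch (names invented; a minimal model of choosing the read path once at init, in the style of arch_timer_read_counter, so unaffected systems skip the preempt_disable() cost entirely):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t counter = 42;		/* stands in for CNTVCT_EL0 */

static uint64_t read_cntvct_plain(void)	/* no erratum: direct read */
{
	return counter;
}

static uint64_t read_cntvct_stable(void)/* erratum: workaround path */
{
	/* a real implementation would disable preemption and apply the
	 * per-CPU workaround here */
	return counter;
}

/* Set once during timer init, like arch_timer_read_counter */
static uint64_t (*delay_read_counter)(void) = read_cntvct_plain;

static void toy_timer_init(int has_erratum)
{
	delay_read_counter = has_erratum ? read_cntvct_stable
					 : read_cntvct_plain;
}
```

The tradeoff is an indirect call on every __delay() against an unconditional preempt_disable()/enable() pair, on a path that is rarely hot.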
>
> Thanks,
>
> M.
>
Thanks,
Ben