[PATCH v6 7/7] kselftest/arm64: Add HWCAP test for FEAT_{LS64, LS64_V}

Zhou Wang wangzhou1 at hisilicon.com
Sun Oct 26 19:50:50 PDT 2025


On 2025/10/25 18:06, Zhou Wang wrote:
> On 2025/10/25 0:18, Arnd Bergmann wrote:
>> On Fri, Oct 24, 2025, at 11:08, Zhou Wang wrote:
>>
>>> +static void ls64_sigill(void)
>>> +{
>>> +	struct sigaction ign, old;
>>> +	char src[64] __aligned(64) = { 1 };
>>> +
>>> +	/*
>>> +	 * LS64 and LS64_V require the target memory to be Device or
>>> +	 * Non-cacheable (if FEAT_LS64WB is not supported) and the completer
>>> +	 * to support these instructions; otherwise we'll receive a SIGBUS.
>>> +	 * Since we are only testing the ABI here, just ignore the SIGBUS
>>> +	 * and check whether we can execute the instructions without
>>> +	 * receiving a SIGILL. Restore the SIGBUS handler after this test.
>>> +	 */
>>> +	ign.sa_sigaction = ignore_signal;
>>> +	ign.sa_flags = SA_SIGINFO | SA_RESTART;
>>> +	sigemptyset(&ign.sa_mask);
>>> +	sigaction(SIGBUS, &ign, &old);
>>> +
>>> +	register void *xn asm ("x8") = src;
>>> +	register u64 xt_1 asm ("x0");
>>> +	register u64 __maybe_unused xt_2 asm ("x1");
>>> +	register u64 __maybe_unused xt_3 asm ("x2");
>>> +	register u64 __maybe_unused xt_4 asm ("x3");
>>> +	register u64 __maybe_unused xt_5 asm ("x4");
>>> +	register u64 __maybe_unused xt_6 asm ("x5");
>>> +	register u64 __maybe_unused xt_7 asm ("x6");
>>> +	register u64 __maybe_unused xt_8 asm ("x7");
>>> +
>>> +	/* LD64B x0, [x8] */
>>> +	asm volatile(".inst 0xf83fd100" : "=r" (xt_1) : "r" (xn));
>>
>> Relying on the __maybe_unused register declarations seems a little
>> fragile; can you change this so that the inline asm specifies
>> all of the registers correctly as input/output arguments?
> 
> It seems we can remove xt_2 ... xt_8 and instead add x1 ... x7 to the
> asm clobber list, something like:
> 
>     asm volatile(".inst 0xf83fd100" : "=r" (xt_1) : "r" (xn)
>                  : "x1", "x2", "x3", "x4", "x5", "x6", "x7");
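
The ST64BV side has the same fragility, so the plan is to drop the
__maybe_unused declarations there as well and list all eight data
registers as explicit inputs. Roughly (untested sketch; the "memory"
clobber is an addition here, since the instruction stores 64 bytes to
dst behind the compiler's back):

	register void *xn asm ("x8") = dst;
	register u64 xt_1 asm ("x0") = 1;
	register u64 xt_2 asm ("x1") = 2;
	register u64 xt_3 asm ("x2") = 3;
	register u64 xt_4 asm ("x3") = 4;
	register u64 xt_5 asm ("x4") = 5;
	register u64 xt_6 asm ("x5") = 6;
	register u64 xt_7 asm ("x6") = 7;
	register u64 xt_8 asm ("x7") = 8;
	register u64 st   asm ("x9");

	/* ST64BV x9, x0, [x8] */
	asm volatile(".inst 0xf829b100"
		     : "=r" (st)
		     : "r" (xt_1), "r" (xt_2), "r" (xt_3), "r" (xt_4),
		       "r" (xt_5), "r" (xt_6), "r" (xt_7), "r" (xt_8),
		       "r" (xn)
		     : "memory");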
> 
>>> +static void ls64_v_sigill(void)
>>> +{
>>> +	struct sigaction ign, old;
>>> +	char dst[64] __aligned(64);
>>> +
>>> +	/* See comment in ls64_sigill() */
>>> +	ign.sa_sigaction = ignore_signal;
>>> +	ign.sa_flags = SA_SIGINFO | SA_RESTART;
>>> +	sigemptyset(&ign.sa_mask);
>>> +	sigaction(SIGBUS, &ign, &old);
>>> +
>>> +	register void *xn asm ("x8") = dst;
>>> +	register u64 xt_1 asm ("x0") = 1;
>>> +	register u64 __maybe_unused xt_2 asm ("x1") = 2;
>>> +	register u64 __maybe_unused xt_3 asm ("x2") = 3;
>>> +	register u64 __maybe_unused xt_4 asm ("x3") = 4;
>>> +	register u64 __maybe_unused xt_5 asm ("x4") = 5;
>>> +	register u64 __maybe_unused xt_6 asm ("x5") = 6;
>>> +	register u64 __maybe_unused xt_7 asm ("x6") = 7;
>>> +	register u64 __maybe_unused xt_8 asm ("x7") = 8;
>>> +	register u64 st   asm ("x9");
>>> +
>>> +	/* ST64BV x9, x0, [x8] */
>>> +	asm volatile(".inst 0xf829b100" : "=r" (st) : "r" (xt_1), "r" (xn));
>>> +
>>> +	sigaction(SIGBUS, &old, NULL);
>>
>> Is ST64BV expected to cause SIGBUS here, or should it return the
>> 0xffffffffffffffff output to indicate an unsupported memory area?
> 
> I think it should return 0xffffffffffffffff without an exception,

My understanding above is wrong.

As mentioned in section C3.2.6 of the Arm ARM, issue L.b:

1. "When the instructions access a memory type that is not one of the following,
   a data abort for unsupported Exclusive or atomic access is generated"

2. "If the target memory location does not support the ST64BV or ST64BV0
   instructions, then the register specified by <Xs> is set to 0xFFFFFFFF_FFFFFFFF"

Here the test code hits the first case (src/dst are Normal stack memory,
not Device or Non-cacheable), so a fault should be triggered.
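
Note the SIGBUS handler has to step over the faulting instruction: if it
simply returned, execution would resume at the same instruction and fault
again. Presumably ignore_signal does something like the following (a
sketch, not necessarily the exact helper in hwcap.c):

	static void ignore_signal(int sig, siginfo_t *info, void *context)
	{
		ucontext_t *uc = context;

		/* Skip the faulting 4-byte instruction and carry on */
		uc->uc_mcontext.pc += 4;
	}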

Best,
Zhou

> will modify the above test code in the next version.
> 
> Best,
> Zhou
> 
>>
>>      Arnd
