[PATCH v2 03/10] riscv: Implement cmpxchg8/16() using Zabha
Alexandre Ghiti
alex at ghiti.fr
Thu Jul 4 09:36:26 PDT 2024
On 27/06/2024 13:53, Andrea Parri wrote:
>> -#define __arch_cmpxchg_masked(sc_sfx, prepend, append, r, p, o, n) \
>> +#define __arch_cmpxchg_masked(sc_sfx, cas_sfx, prepend, append, r, p, o, n) \
>> ({ \
>> + __label__ no_zacas, zabha, end; \
>> + \
>> + if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA)) { \
>> + asm goto(ALTERNATIVE("j %[no_zacas]", "nop", 0, \
>> + RISCV_ISA_EXT_ZACAS, 1) \
>> + : : : : no_zacas); \
>> + asm goto(ALTERNATIVE("nop", "j %[zabha]", 0, \
>> + RISCV_ISA_EXT_ZABHA, 1) \
>> + : : : : zabha); \
>> + } \
>> + \
>> +no_zacas:; \
>> u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \
>> ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
>> ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
>> @@ -133,6 +145,19 @@
>> : "memory"); \
>> \
>> r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
>> + goto end; \
>> + \
>> +zabha: \
>> + if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA)) { \
>> + __asm__ __volatile__ ( \
>> + prepend \
>> + " amocas" cas_sfx " %0, %z2, %1\n" \
>> + append \
>> + : "+&r" (r), "+A" (*(p)) \
>> + : "rJ" (n) \
>> + : "memory"); \
>> + } \
>> +end:; \
>> })
> I admit that I found this all quite difficult to read; IIUC, this is
> missing an IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) check.
I'm not sure we need the Zacas check here: we could build with a
toolchain that supports Zabha but not Zacas, run the resulting kernel
on a platform that implements both Zabha and Zacas, and it would work.
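
To illustrate the intended control flow, here is the same dispatch
written as plain C. This is a hypothetical userspace sketch: the two
booleans stand in for the runtime-patched ALTERNATIVE branches, and
the puts() calls stand in for the lr/sc and amocas asm blocks:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins: in the kernel these are Kconfig and boot-time patching. */
#define ZABHA_CONFIG_ENABLED 1     /* IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) */
static bool cpu_has_zacas = true;  /* patched on RISCV_ISA_EXT_ZACAS */
static bool cpu_has_zabha = true;  /* patched on RISCV_ISA_EXT_ZABHA */

static void cmpxchg_dispatch(void)
{
	if (ZABHA_CONFIG_ENABLED) {
		if (!cpu_has_zacas)
			goto no_zacas;  /* first asm goto: "j %[no_zacas]" */
		if (cpu_has_zabha)
			goto zabha;     /* second asm goto: "j %[zabha]" */
	}

no_zacas:
	puts("masked 32-bit lr/sc fallback");   /* the lr.w/sc.w loop */
	return;                                 /* "goto end" in the macro */

zabha:
	puts("amocas.b/amocas.h fast path");
}

int main(void)
{
	cmpxchg_dispatch();
	return 0;
}

Flipping either boolean shows that the fallback is taken both when
Zacas is absent and when only Zabha is absent.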
> How about adding
> such a check under the zabha: label (replacing/in place of the second
> IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) check) and moving the corresponding
> asm goto statement there, perhaps as follows? (on top of this patch)
If that makes things clearer for you, sure, I can do that.
>
> Also, the patch presents the first occurrence of RISCV_ISA_EXT_ZABHA;
> perhaps worth moving the hwcap/cpufeature changes from patch #6 here?
>
> Andrea
>
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> index b9a3fdcec919..3c913afec150 100644
> --- a/arch/riscv/include/asm/cmpxchg.h
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -110,15 +110,12 @@
> __label__ no_zacas, zabha, end; \
> \
> if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA)) { \
> - asm goto(ALTERNATIVE("j %[no_zacas]", "nop", 0, \
> - RISCV_ISA_EXT_ZACAS, 1) \
> - : : : : no_zacas); \
> asm goto(ALTERNATIVE("nop", "j %[zabha]", 0, \
> RISCV_ISA_EXT_ZABHA, 1) \
> : : : : zabha); \
> } \
> \
> -no_zacas:; \
> +no_zacas: \
> u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \
> ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
> ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
> @@ -148,16 +145,20 @@ no_zacas:; \
> goto end; \
> \
> zabha: \
> - if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA)) { \
But we need to keep this IS_ENABLED() check, otherwise it would fail
to compile with toolchains that lack Zabha support; a quick sketch of
why is below.
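
Here is a minimal sketch of the pattern (a hypothetical cas16()
helper, not the kernel macro), assuming, as in the rest of the series,
that the Kconfig symbol can only be enabled when the toolchain
supports the extension. The guard turns the branch into dead code when
the option is off, so the amocas.h mnemonic never reaches the
assembler and the file still builds:

#ifdef CONFIG_RISCV_ISA_ZABHA
#define ZABHA_ENABLED 1
#else
#define ZABHA_ENABLED 0
#endif

static inline unsigned short cas16(unsigned short *p,
				   unsigned short old, unsigned short new)
{
	unsigned short r = old;

	if (ZABHA_ENABLED) {
		/*
		 * Dead code when CONFIG_RISCV_ISA_ZABHA is off: the
		 * compiler discards the whole branch, so this string is
		 * only ever handed to an assembler that accepts Zabha.
		 */
		__asm__ __volatile__("	amocas.h %0, %z2, %1\n"
				     : "+&r" (r), "+A" (*p)
				     : "rJ" (new)
				     : "memory");
	}

	return r;
}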
> - __asm__ __volatile__ ( \
> - prepend \
> - " amocas" cas_sfx " %0, %z2, %1\n" \
> - append \
> - : "+&r" (r), "+A" (*(p)) \
> - : "rJ" (n) \
> - : "memory"); \
> + if (IS_ENABLED(CONFIG_RISCV_ISA_ZACAS)) { \
> + asm goto(ALTERNATIVE("j %[no_zacas]", "nop", 0, \
> + RISCV_ISA_EXT_ZACAS, 1) \
> + : : : : no_zacas); \
> } \
> -end:; \
> + \
> + __asm__ __volatile__ ( \
> + prepend \
> + " amocas" cas_sfx " %0, %z2, %1\n" \
> + append \
> + : "+&r" (r), "+A" (*(p)) \
> + : "rJ" (n) \
> + : "memory"); \
> +end: \
> })
>
> #define __arch_cmpxchg(lr_sfx, sc_cas_sfx, prepend, append, r, p, co, o, n) \
>
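As a closing aside for readers following along: the __ptr32b/__s/__mask
arithmetic in the masked fallback quoted above is easy to check in
plain C. This is an illustrative userspace sketch with hypothetical
names, not kernel code:

#include <stdint.h>
#include <stdio.h>

#define BITS_PER_BYTE 8

static void show_masking(uintptr_t p, size_t size)
{
	uintptr_t word = p & ~(uintptr_t)0x3;                   /* __ptr32b */
	unsigned long s = (p & (0x4 - size)) * BITS_PER_BYTE;   /* __s */
	unsigned long mask =
		((1UL << (size * BITS_PER_BYTE)) - 1) << s;     /* __mask */

	printf("p=%#lx size=%zu -> word=%#lx shift=%lu mask=%#lx\n",
	       (unsigned long)p, size, (unsigned long)word, s, mask);
}

int main(void)
{
	show_masking(0x1003, 1);  /* u8 at offset 3 in its word */
	show_masking(0x1002, 2);  /* u16 at offset 2 in its word */
	return 0;
}

For a u8 at offset 3 this yields shift 24 and mask 0xff000000; for a
u16 at offset 2, shift 16 and mask 0xffff0000, i.e. exactly the lanes
of the aligned word that hold the target value and that the lr.w/sc.w
loop compares and updates.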