[PATCH] lib: fix atomic_add_return

Anup Patel Anup.Patel at wdc.com
Tue Apr 6 06:55:41 BST 2021



> -----Original Message-----
> From: Xiang W <wxjstz at 126.com>
> Sent: 06 April 2021 09:05
> To: opensbi at lists.infradead.org
> Cc: Anup Patel <Anup.Patel at wdc.com>; Xiang W <wxjstz at 126.com>
> Subject: [PATCH] lib: fix atomic_add_return
> 
> The width of a long may be 4 bytes or 8 bytes, while amoadd.w only
> operates on 4 bytes, so use amoadd.d when long is 8 bytes wide.
> 
> Signed-off-by: Xiang W <wxjstz at 126.com>
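
(Side note, just a sketch and not something this patch needs to change:
the width assumption behind the new #if/#elif could also be made explicit
with a compile-time guard along these lines, so an unexpected long width
fails the build instead of leaving ret uninitialized.)

/* Hypothetical guard, mirroring the patch's use of __SIZEOF_LONG__ */
#if __SIZEOF_LONG__ != 4 && __SIZEOF_LONG__ != 8
#error "unsupported sizeof(long) for AMO width selection"
#endif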

Good catch. This patch looks good to me.

Reviewed-by: Anup Patel <anup.patel at wdc.com>

Regards,
Anup

> ---
>  lib/sbi/riscv_atomic.c | 18 ++++++++----------
>  1 file changed, 8 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/sbi/riscv_atomic.c b/lib/sbi/riscv_atomic.c
> index 558bca8..528686f 100644
> --- a/lib/sbi/riscv_atomic.c
> +++ b/lib/sbi/riscv_atomic.c
> @@ -28,25 +28,23 @@ void atomic_write(atomic_t *atom, long value)
>  long atomic_add_return(atomic_t *atom, long value)
>  {
>  	long ret;
> -
> +#if __SIZEOF_LONG__ == 4
>  	__asm__ __volatile__("	amoadd.w.aqrl  %1, %2, %0"
>  			     : "+A"(atom->counter), "=r"(ret)
>  			     : "r"(value)
>  			     : "memory");
> -
> +#elif __SIZEOF_LONG__ == 8
> +	__asm__ __volatile__("	amoadd.d.aqrl  %1, %2, %0"
> +			     : "+A"(atom->counter), "=r"(ret)
> +			     : "r"(value)
> +			     : "memory");
> +#endif
>  	return ret + value;
>  }
> 
>  long atomic_sub_return(atomic_t *atom, long value)
>  {
> -	long ret;
> -
> -	__asm__ __volatile__("	amoadd.w.aqrl  %1, %2, %0"
> -			     : "+A"(atom->counter), "=r"(ret)
> -			     : "r"(-value)
> -			     : "memory");
> -
> -	return ret - value;
> +	return atomic_add_return(atom, -value);
>  }
> 
>  #define __axchg(ptr, new, size)					\
> --
> 2.20.1
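
For reference, a small host-side sketch (not OpenSBI code) of the
return-value contract the patched helpers keep: atomic_add_return()
returns the new value, and atomic_sub_return() is now just
atomic_add_return() with a negated operand. It uses the GCC/Clang
__atomic builtins instead of RISC-V AMO assembly so it runs anywhere;
the demo_* names and struct are made up for this illustration.

#include <stdio.h>

typedef struct {
	volatile long counter;
} demo_atomic_t;

static long demo_add_return(demo_atomic_t *atom, long value)
{
	/* fetch-add returns the old value, like amoadd.w/amoadd.d */
	long old = __atomic_fetch_add(&atom->counter, value,
				      __ATOMIC_SEQ_CST);
	return old + value;		/* return the new value */
}

static long demo_sub_return(demo_atomic_t *atom, long value)
{
	return demo_add_return(atom, -value);
}

int main(void)
{
	demo_atomic_t a = { .counter = 10 };

	printf("sizeof(long) = %zu\n", sizeof(long)); /* 4 on RV32, 8 on RV64 */
	printf("add_return(5) -> %ld\n", demo_add_return(&a, 5)); /* 15 */
	printf("sub_return(3) -> %ld\n", demo_sub_return(&a, 3)); /* 12 */
	return 0;
}

On RV64 with the A extension enabled, the compiler typically lowers the
8-byte fetch-add above to amoadd.d, which matches the instruction the
patch selects via __SIZEOF_LONG__.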



