[PATCH v2] arm64: kernel: implement fast refcount checking

Li Kun hw.likun at huawei.com
Tue Jul 25 21:11:52 PDT 2017


Hi Ard,


on 2017/7/26 2:15, Ard Biesheuvel wrote:
> +#define REFCOUNT_OP(op, asm_op, cond, l, clobber...)			\
> +__LL_SC_INLINE int							\
> +__LL_SC_PREFIX(__refcount_##op(int i, atomic_t *r))			\
> +{									\
> +	unsigned long tmp;						\
> +	int result;							\
> +									\
> +	asm volatile("// refcount_" #op "\n"				\
> +"	prfm		pstl1strm, %2\n"				\
> +"1:	ldxr		%w0, %2\n"					\
> +"	" #asm_op "	%w0, %w0, %w[i]\n"				\
> +"	st" #l "xr	%w1, %w0, %2\n"					\
> +"	cbnz		%w1, 1b\n"					\
> +	REFCOUNT_CHECK(cond)						\
> +	: "=&r" (result), "=&r" (tmp), "+Q" (r->counter)		\
> +	: REFCOUNT_INPUTS(r) [i] "Ir" (i)				\
> +	clobber);							\
> +									\
> +	return result;							\
> +}									\
> +__LL_SC_EXPORT(__refcount_##op);
> +
> +REFCOUNT_OP(add_lt,     adds, lt,  , REFCOUNT_CLOBBERS);
> +REFCOUNT_OP(sub_lt_neg, adds, lt, l, REFCOUNT_CLOBBERS);
> +REFCOUNT_OP(sub_le_neg, adds, le, l, REFCOUNT_CLOBBERS);
> +REFCOUNT_OP(sub_lt,     subs, lt, l, REFCOUNT_CLOBBERS);
> +REFCOUNT_OP(sub_le,     subs, le, l, REFCOUNT_CLOBBERS);
> +
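For reference, with cond = lt the REFCOUNT_CHECK() macro (not quoted above) emits a b.lt after the store-exclusive loop, so __refcount_sub_lt() expands to roughly the following. This is only a sketch of mine: since REFCOUNT_CHECK() is not quoted here, the branch-to-brk shape, the register assignment and the REFCOUNT_BRK_IMM name are my assumptions, not the patch's exact output.

// refcount_sub_lt (illustrative expansion)
	prfm	pstl1strm, [x1]		// prefetch r->counter for store
1:	ldxr	w0, [x1]		// load-exclusive the counter
	subs	w0, w0, w2		// new value, sets the NZCV flags
	stlxr	w3, w0, [x1]		// store-release-exclusive
	cbnz	w3, 1b			// retry if the exclusive failed
	b.lt	2f			// REFCOUNT_CHECK(lt): taken when N != V
	...
2:	brk	#REFCOUNT_BRK_IMM	// report the refcount error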
I'm not quite sure it is correct to use b.lt to judge whether the result of 
the adds is less than zero. b.lt means N != V, i.e. a signed less-than that 
takes overflow into account, not simply "the stored result is negative". 
Take an extreme example: if we operate like below, the decrement overflows 
(0x80000000 - 1 = 0x7fffffff, so N = 0 and V = 1) and the b.lt will also be 
true, even though the value written back to the counter is positive.

refcount_set(&ref_c, 0x80000000);
refcount_dec_and_test(&ref_c);
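
To see the flags concretely, here is a quick host-side sketch (my own 
illustration, not part of the patch) that computes N and V the way an 
AArch64 subs would for this decrement:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int32_t a = INT32_MIN;	/* 0x80000000, the refcount value set above */
	int32_t b = 1;		/* refcount_dec_and_test subtracts 1 */
	int32_t res = (int32_t)((uint32_t)a - (uint32_t)b);

	int n = res < 0;			/* N: result is negative */
	int v = ((a ^ b) & (a ^ res)) < 0;	/* V: signed overflow on a - b */

	printf("res = 0x%08x, N = %d, V = %d\n", (unsigned)res, n, v);
	printf("b.lt taken (N != V): %d\n", n != v);	/* prints 1 */
	printf("b.mi taken (N == 1): %d\n", n);		/* prints 0 */
	return 0;
}

So on this path lt reports "less than" purely because of the overflow, 
while mi would look only at the sign bit of the stored result.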

Maybe we should use PL/NE/MI/EQ to judge the LT_ZERO or LE_ZERO conditions 
instead, i.e. test the N and Z flags of the stored result directly?
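
In other words, something along these lines (a hypothetical sketch of mine, 
reusing the patch's macro style; the REFCOUNT_CHECK_* names and the 99 label 
are placeholders, with 99 standing for the out-of-line brk that the patch's 
REFCOUNT_CHECK sets up; LE_ZERO needs two branches because no single AArch64 
condition encodes N || Z):

/* hypothetical: test the stored result's flags directly */
#define REFCOUNT_CHECK_LT_ZERO						\
"	b.mi	99f\n"			/* N set: result < 0	*/

#define REFCOUNT_CHECK_LE_ZERO						\
"	b.mi	99f\n"			/* N set: result < 0	*/	\
"	b.eq	99f\n"			/* Z set: result == 0	*/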

-- 
Best Regards
Li Kun



