[PATCH v3] arm64: kernel: implement fast refcount checking

Ard Biesheuvel ard.biesheuvel at linaro.org
Mon Jul 31 06:59:32 PDT 2017


On 31 July 2017 at 14:17, Li Kun <hw.likun at huawei.com> wrote:
>
>
> On 2017/7/31 20:01, Ard Biesheuvel wrote:
>>
>> +static __always_inline __must_check bool refcount_add_not_zero(unsigned int i,
>> +                                                               refcount_t *r)
>> +{
>> +       unsigned long tmp;
>> +       int result;
>> +
>> +       asm volatile("// refcount_add_not_zero \n"
>> +"      prfm            pstl1strm, %2\n"
>> +"1:    ldxr            %w[val], %2\n"
>> +"      cbz             %w[val], 2f\n"
>> +"      adds            %w[val], %w[val], %w[i]\n"
>> +"      stxr            %w1, %w[val], %2\n"
>> +"      cbnz            %w1, 1b\n"
>> +       REFCOUNT_POST_CHECK_NEG
>> +"2:"
>> +       : [val] "=&r" (result), "=&r" (tmp), "+Q" (r->refs.counter)
>> +       : REFCOUNT_INPUTS(&r->refs) [i] "Ir" (i)
>> +       : REFCOUNT_CLOBBERS);
>> +
>> +       return result != 0;
>> +}
>> +
>
> Could we use "cas" here instead of ll/sc?
>

I don't see how, to be honest.

Compare-and-swap only performs the store if the expected value is in
the memory location, so the caller has to know the old value up front.
In this case, we don't know the old value; we only know we need to do
something special if it is 0.
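To illustrate the point, here is a hypothetical userspace sketch (C11
atomics, not the kernel code) of what a CAS-based add-not-zero would
have to look like. Note that it needs a separate initial load to learn
the old value, and still retries in a loop on contention, so it buys
nothing over the LL/SC sequence in the patch:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical illustration only: emulating add-not-zero with a
 * compare-and-swap loop. The CAS must be told what value to expect,
 * so we have to load the counter first -- the extra step that a
 * single CAS instruction cannot avoid. */
static bool cas_add_not_zero(atomic_int *r, int i)
{
	int old = atomic_load(r);

	do {
		if (old == 0)
			return false;	/* counter hit zero: leave it alone */
		/* on failure, 'old' is updated with the current value */
	} while (!atomic_compare_exchange_weak(r, &old, old + i));

	return true;
}
```

With LL/SC, the `ldxr` itself yields the old value, so the zero check
falls out of the load/store pair for free.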


