[PATCH v5 04/45] percpu_rwlock: Implement the core design of Per-CPU Reader-Writer Locks
Namhyung Kim
namhyung at kernel.org
Tue Jan 29 06:12:37 EST 2013
On Thu, 24 Jan 2013 10:00:04 +0530, Srivatsa S. Bhat wrote:
> On 01/24/2013 01:27 AM, Tejun Heo wrote:
>> On Thu, Jan 24, 2013 at 01:03:52AM +0530, Srivatsa S. Bhat wrote:
>>>        CPU 0                              CPU 1
>>>
>>> read_lock(&rwlock)
>>>
>>>                                   write_lock(&rwlock) //spins, because CPU 0
>>>                                                       //has acquired the lock for read
>>>
>>> read_lock(&rwlock)
>>>    ^^^^^
>>> What happens here? Does CPU 0 start spinning (and hence deadlock) or will
>>> it continue realizing that it already holds the rwlock for read?
>>
>> I don't think rwlock allows nesting write lock inside read lock.
>> read_lock(); write_lock() will always deadlock.
>>
>
> Sure, I understand that :-) My question was, what happens when *two* CPUs
> are involved, as in, the read_lock() is invoked only on CPU 0 whereas the
> write_lock() is invoked on CPU 1.
>
> For example, the same scenario shown above, but with slightly different
> timing, will NOT result in a deadlock:
>
> Scenario 2:
>        CPU 0                              CPU 1
>
> read_lock(&rwlock)
>
>
> read_lock(&rwlock) //doesn't spin
>
>                                   write_lock(&rwlock) //spins, because CPU 0
>                                                       //has acquired the lock for read
>
>
> So I was wondering whether the "fairness" logic of rwlocks would cause
> the second read_lock() to spin (in the first scenario shown above) because
> a writer is already waiting (and hence new readers should spin) and thus
> cause a deadlock.
In my understanding, the current x86 rwlock basically does this (of
course, in an atomic fashion):
#define RW_LOCK_BIAS 0x10000

rwlock_init(rwlock)
{
        rwlock->lock = RW_LOCK_BIAS;
}

arch_read_lock(rwlock)
{
retry:
        /* Each reader takes one unit; the count stays non-negative
         * as long as no writer holds the lock. */
        if (--rwlock->lock >= 0)
                return;
        /* A writer holds it: undo, wait until the count goes
         * positive again, then retry. */
        rwlock->lock++;
        while (rwlock->lock < 1)
                continue;
        goto retry;
}

arch_write_lock(rwlock)
{
retry:
        /* A writer takes the whole bias; the result is 0 only if
         * no readers and no other writer were in. */
        if ((rwlock->lock -= RW_LOCK_BIAS) == 0)
                return;
        /* Contended: undo, wait for a completely free lock, retry. */
        rwlock->lock += RW_LOCK_BIAS;
        while (rwlock->lock != RW_LOCK_BIAS)
                continue;
        goto retry;
}
So I can't find where the 'fairness' logic comes from...
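
For what it's worth, here is a quick single-threaded trace of that
counter through scenario 1 above (just a sketch: the try_* helpers
are made up for this trace and report failure instead of spinning,
and a plain int stands in for the atomic operations):

#include <stdio.h>

#define RW_LOCK_BIAS 0x10000

/* Single-threaded, so a plain int is enough to trace the counter. */
static int lock = RW_LOCK_BIAS;

/* Non-spinning variant of arch_read_lock() above. */
static int try_read_lock(void)
{
        if (--lock >= 0)
                return 1;
        lock++;                 /* undo; the real code would spin and retry */
        return 0;
}

/* Non-spinning variant of arch_write_lock() above. */
static int try_write_lock(void)
{
        if ((lock -= RW_LOCK_BIAS) == 0)
                return 1;
        lock += RW_LOCK_BIAS;   /* undo; the real code would spin and retry */
        return 0;
}

static void step(const char *what, int got)
{
        printf("%s: %s (lock=%#x)\n", what,
               got ? "acquired" : "would spin", lock);
}

int main(void)
{
        step("CPU0 read_lock ", try_read_lock());   /* lock = 0xffff */
        step("CPU1 write_lock", try_write_lock());  /* fails, restored to 0xffff */
        step("CPU0 read_lock ", try_read_lock());   /* lock = 0xfffe */
        return 0;
}

The second read_lock() still succeeds (the count is 0xfffe, i.e.
non-negative) even though a writer is already waiting, so with this
implementation scenario 1 would not deadlock: a waiting writer does
not block new readers.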
Thanks,
Namhyung