[RFC 4/6] sched: secure access to other CPU statistics
Vincent Guittot
vincent.guittot at linaro.org
Mon Oct 29 09:18:42 EDT 2012
On 24 October 2012 17:21, Santosh Shilimkar <santosh.shilimkar at ti.com> wrote:
> $subject is a bit confusing here.
>
>
> On Sunday 07 October 2012 01:13 PM, Vincent Guittot wrote:
>>
>> The atomic update of runnable_avg_sum and runnable_avg_period is ensured
>> by their size and the toolchain. But we must ensure that we don't read an
>> old value for one field and a newly updated value for the other. As we
>> don't want to lock the other CPU while reading these fields, we read each
>> field twice and check that no change has occurred in between.
>>
>> Signed-off-by: Vincent Guittot <vincent.guittot at linaro.org>
>> ---
>> kernel/sched/fair.c | 19 +++++++++++++++++--
>> 1 file changed, 17 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 8c9d3ed..6df53b5 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3133,13 +3133,28 @@ static int select_idle_sibling(struct task_struct *p, int target)
>> static inline bool is_buddy_busy(int cpu)
>> {
>> struct rq *rq = cpu_rq(cpu);
>> + volatile u32 *psum = &rq->avg.runnable_avg_sum;
>> + volatile u32 *pperiod = &rq->avg.runnable_avg_period;
>> + u32 sum, new_sum, period, new_period;
>> + int timeout = 10;
>
> So the fields can end up being read two times or more.
>
>> +
>> + while (timeout) {
>> + sum = *psum;
>> + period = *pperiod;
>> + new_sum = *psum;
>> + new_period = *pperiod;
>> +
>> + if ((sum == new_sum) && (period == new_period))
>> + break;
>> +
>> + timeout--;
>> + }
>>
> It seems you did notice an inconsistent pair being read
> for rq runnable_avg_sum and runnable_avg_period. The fix
> would seem to be to update them together under some lock
> to avoid such issues.
My goal is to have a lock-free mechanism because I don't want to lock
another CPU while reading its statistics.
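
For illustration, here is a minimal user-space sketch of that double-read
idea, using C11 atomics instead of the volatile accesses in the patch; the
struct and the helper read_avg_pair() are made up for this example and are
not part of the patch:

/*
 * Illustrative sketch only, not the kernel patch: the reader reads both
 * fields, re-reads them, and retries if either changed in between,
 * giving up after a bounded number of attempts.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct avg_stats {
	_Atomic uint32_t sum;
	_Atomic uint32_t period;
};

/* Returns true if a consistent (sum, period) pair was read. */
static bool read_avg_pair(struct avg_stats *s, uint32_t *sum,
			  uint32_t *period)
{
	int timeout = 10;

	while (timeout--) {
		uint32_t s1 = atomic_load(&s->sum);
		uint32_t p1 = atomic_load(&s->period);
		uint32_t s2 = atomic_load(&s->sum);
		uint32_t p2 = atomic_load(&s->period);

		if (s1 == s2 && p1 == p2) {
			*sum = s1;
			*period = p1;
			return true;
		}
	}
	return false;
}

The structure mirrors the retry loop in the patch, only expressed with
explicit atomic loads instead of volatile accesses.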
>
> Regards
> Santosh
>
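
As an aside, the in-kernel alternative that Santosh's suggestion points
toward would typically be a seqcount rather than a full lock: the writer
(which already runs on the owning CPU) brackets its updates, and remote
readers retry instead of blocking. A rough sketch, assuming a seqcount_t
added next to the two fields, which is not part of the actual patch:

#include <linux/seqlock.h>

/* Hypothetical layout: a seqcount added alongside the two fields. */
struct sched_avg_stats {
	seqcount_t seq;
	u32 runnable_avg_sum;
	u32 runnable_avg_period;
};

/* Writer side, on the owning CPU. */
static void update_avg_stats(struct sched_avg_stats *a, u32 sum, u32 period)
{
	write_seqcount_begin(&a->seq);
	a->runnable_avg_sum = sum;
	a->runnable_avg_period = period;
	write_seqcount_end(&a->seq);
}

/* Reader side, from any CPU: retries instead of locking the owner out. */
static void read_avg_stats(struct sched_avg_stats *a, u32 *sum, u32 *period)
{
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&a->seq);
		*sum = a->runnable_avg_sum;
		*period = a->runnable_avg_period;
	} while (read_seqcount_retry(&a->seq, seq));
}

Readers never block the updating CPU here; they only spin briefly if an
update races with the read, which seems compatible with the lock-free goal
stated above.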