[RFC] arm/mm: add protection when flushing anonymous pages
Andrew Yan-Pai Chen
yanpai.chen at gmail.com
Tue Oct 9 02:01:54 EDT 2012
On Fri, Oct 5, 2012 at 6:09 PM, Andrew Yan-Pai Chen
<yanpai.chen at gmail.com> wrote:
> On Fri, Oct 5, 2012 at 5:10 PM, Russell King - ARM Linux
> <linux at arm.linux.org.uk> wrote:
>> On Tue, Oct 02, 2012 at 03:07:33PM +0800, Andrew Yan-Pai Chen wrote:
>>> [RFC PATCH] make flush_pfn_alias() nonpreemptible
>>>
>>> Since flush_pfn_alias() runs with preemption enabled, the caller can
>>> be preempted just after set_top_pte() has been done. If the task that
>>> preempts it happens to call flush_pfn_alias() with a vaddr of the same
>>> cache colour, the same top pte gets overwritten. When the first task
>>> is scheduled back in, it flushes through a mapping that no longer
>>> points at its page, so the wrong lines (or none at all) end up being
>>> flushed, due to the nature of VIPT aliasing caches.
>>>
>>> diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
>>> index 40ca11e..bd07918 100644
>>> --- a/arch/arm/mm/flush.c
>>> +++ b/arch/arm/mm/flush.c
>>> @@ -27,6 +27,8 @@ static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
>>>  	unsigned long to = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) << PAGE_SHIFT);
>>>  	const int zero = 0;
>>>
>>> +	preempt_disable();
>>> +
>>>  	set_top_pte(to, pfn_pte(pfn, PAGE_KERNEL));
>>>
>>>  	asm(	"mcrr	p15, 0, %1, %0, c14\n"
>>> @@ -34,6 +36,8 @@ static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
>>>  	    :
>>>  	    : "r" (to), "r" (to + PAGE_SIZE - L1_CACHE_BYTES), "r" (zero)
>>>  	    : "cc");
>>> +
>>> +	preempt_enable();
>>>  }
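Putting the two hunks together, the function would end up looking roughly
like this (the unchanged asm line that falls between the hunks is not part
of the diff above; it is only paraphrased here from flush.c and may not
match your tree exactly):

static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
{
	unsigned long to = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) << PAGE_SHIFT);
	const int zero = 0;

	/*
	 * Close the window in which another task on this CPU could
	 * rewrite the same top pte before the flush below has run.
	 */
	preempt_disable();

	set_top_pte(to, pfn_pte(pfn, PAGE_KERNEL));

	/* Clean+invalidate the alias range, then drain the write buffer. */
	asm(	"mcrr	p15, 0, %1, %0, c14\n"
	"	mcr	p15, 0, %2, c7, c10, 4"
	    :
	    : "r" (to), "r" (to + PAGE_SIZE - L1_CACHE_BYTES), "r" (zero)
	    : "cc");

	preempt_enable();
}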
>>
>> This looks like it's quite correct, but if you look at the file a little
>> deeper, you'll notice that flush_icache_alias() potentially suffers the
>> same issue.
>>
>> They should probably both also have a comment indicating that they're not
>> to be called for SMP setups (because there's no locking for that.)
>
> OK. I will resend the patch later.
> Thanks!
>
> --
> Regards,
> Andrew
Hi Russell,
Is flush_icache_alias() really not supposed to be called on SMP setups?
It seems that processors with a VIPT non-aliasing D-cache but a VIPT
aliasing I-cache (such as the Cortex-A9) will still call this function
on SMP. So perhaps we should take a lock in this case?
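Something like the following is what I have in mind (untested and only to
illustrate the idea: the lock is new and its name is made up, and the body
merely paraphrases what flush_icache_alias() does today):

static DEFINE_RAW_SPINLOCK(icache_alias_lock);	/* hypothetical */

static void flush_icache_alias(unsigned long pfn, unsigned long vaddr,
			       unsigned long len)
{
	unsigned long va = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) << PAGE_SHIFT);
	unsigned long offset = vaddr & ~PAGE_MASK;
	unsigned long to;

	/*
	 * Serialise all users of the I-cache alias window so that two
	 * CPUs cannot reprogram the same top pte at the same time; the
	 * spinlock also disables preemption on the local CPU.
	 */
	raw_spin_lock(&icache_alias_lock);

	set_top_pte(va, pfn_pte(pfn, PAGE_KERNEL));
	to = va + offset;
	flush_icache_range(to, to + len);

	raw_spin_unlock(&icache_alias_lock);
}

If both functions can ever run on the same system, flush_pfn_alias() would
presumably need to take the same lock, since they share the alias window.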
--
Regards,
Andrew