[PATCH] arm64: Implement support for read-mostly sections
Jungseok Lee
jungseoklee85 at gmail.com
Tue Dec 2 09:35:49 PST 2014
On Dec 2, 2014, at 8:42 PM, Catalin Marinas wrote:
> On Mon, Dec 01, 2014 at 10:01:06PM +0000, Jungseok Lee wrote:
>> By grouping data that is mostly read together, we can avoid
>> unnecessary cache line bouncing.
>>
>> Other architectures, such as ARM and x86, have adopted the same idea.
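
For context, a minimal sketch of how such an annotation is typically used
in kernel C code; the variable names below are hypothetical and only
illustrate the grouping:

	#include <linux/cache.h>

	/*
	 * Hypothetical tunable: written once at boot, read on every
	 * fast path.  Placing it in .data..read_mostly keeps it away
	 * from cache lines holding frequently-written data, so writers
	 * elsewhere do not bounce the line this value lives on.
	 */
	static unsigned long fast_path_threshold __read_mostly = 128;

	/* Frequently-written counter; stays in the normal .data section. */
	static unsigned long event_count;

Readers on other CPUs can then keep the cache line holding
fast_path_threshold in the shared state while event_count is being
updated elsewhere.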
>>
>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>> Cc: Will Deacon <will.deacon at arm.com>
>> Signed-off-by: Jungseok Lee <jungseoklee85 at gmail.com>
>
> It looks fine to me, with a nitpick below:
>
>> ---
>> arch/arm64/include/asm/cache.h | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
>> index 88cc05b..c1a2a9f 100644
>> --- a/arch/arm64/include/asm/cache.h
>> +++ b/arch/arm64/include/asm/cache.h
>> @@ -30,6 +30,8 @@
>> */
>> #define ARCH_DMA_MINALIGN L1_CACHE_BYTES
>>
>> +#define __read_mostly __attribute__((__section__(".data..read_mostly")))
>> +
>> #ifndef __ASSEMBLY__
>
> I think we can move this #define below #ifndef as it doesn't make sense
> in .S files anyway.
Okay, I will move it below #ifndef.
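Something like the following sketch, assuming the define simply moves just
inside the #ifndef __ASSEMBLY__ guard (surrounding lines abbreviated):

	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES

	#ifndef __ASSEMBLY__

	#define __read_mostly __attribute__((__section__(".data..read_mostly")))

	/* ... remaining C-only declarations ... */

	#endif	/* !__ASSEMBLY__ */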
> Acked-by: Catalin Marinas <catalin.marinas at arm.com>
Thanks!
Jungseok Lee