[PATCH v2] arm64: Implement support for read-mostly sections
Jungseok Lee
jungseoklee85 at gmail.com
Tue Dec 2 09:49:24 PST 2014
By grouping data that is mostly read into one place, we can avoid
unnecessary cache line bouncing with frequently written data.
Other architectures, such as ARM and x86, have adopted the same idea.
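As an illustration only (not part of this patch, and the variable name
is hypothetical), a value that is written once at boot and then only
read on hot paths would be annotated like this:

	/* Hypothetical example: set once during init, read constantly
	 * afterwards, so keep it off cache lines that see frequent writes. */
	static unsigned int max_queue_depth __read_mostly = 64;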
Cc: Catalin Marinas <catalin.marinas at arm.com>
Cc: Will Deacon <will.deacon at arm.com>
Acked-by: Catalin Marinas <catalin.marinas at arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85 at gmail.com>
---
Changes since v1:
- move __read_mostly macro below #ifndef __ASSEMBLY__
arch/arm64/include/asm/cache.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 88cc05b..bde4499 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -32,6 +32,8 @@
#ifndef __ASSEMBLY__
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))
+
static inline int cache_line_size(void)
{
u32 cwg = cache_type_cwg();
--
1.9.1
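For context, the .data..read_mostly section name matters because the
generic kernel linker script collects all such input sections together.
A paraphrased sketch of the relevant macro from
include/asm-generic/vmlinux.lds.h (pulled in via the shared
RW_DATA_SECTION helper) looks like:

	/* Group all .data..read_mostly input sections together, aligned
	 * to the cache line size, so read-mostly data shares cache lines
	 * only with other read-mostly data. */
	#define READ_MOSTLY_DATA(align)					\
		. = ALIGN(align);					\
		*(.data..read_mostly)					\
		. = ALIGN(align);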