[PATCH 1/2] ARM: implement support for read-mostly sections
Catalin Marinas
catalin.marinas at arm.com
Sun Dec 5 17:18:27 EST 2010
On 5 December 2010 11:43, Russell King - ARM Linux
<linux at arm.linux.org.uk> wrote:
> As our SMP implementation uses MESI protocols, grouping together data
> which is mostly only read means that we avoid unnecessary cache line
> bouncing when this data shares a cache line with other, frequently
> written data.
>
> In other words, cache lines associated with read-mostly data are
> expected to spend most of their time in shared state.
>
> Signed-off-by: Russell King <rmk+kernel at arm.linux.org.uk>
> ---
> arch/arm/include/asm/cache.h | 2 ++
> arch/arm/kernel/vmlinux.lds.S | 1 +
> 2 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/arch/arm/include/asm/cache.h b/arch/arm/include/asm/cache.h
> index 9d61220..75fe66b 100644
> --- a/arch/arm/include/asm/cache.h
> +++ b/arch/arm/include/asm/cache.h
> @@ -23,4 +23,6 @@
> #define ARCH_SLAB_MINALIGN 8
> #endif
>
> +#define __read_mostly __attribute__((__section__(".data..read_mostly")))
> +
> #endif
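
As a side note for readers, here is a minimal usage sketch (not part of the
patch; the variable name is made up): a value written once at init but read
on every fast path is tagged __read_mostly so it lands in .data..read_mostly,
away from write-heavy data, and its cache line can stay in the MESI Shared
state on all CPUs.

    #include <linux/cache.h>	/* provides __read_mostly via asm/cache.h */

    /* Hypothetical example: set once during driver init, read on hot paths.
     * Grouping it with other read-mostly data avoids false sharing with
     * frequently written variables. */
    static bool fastpath_enabled __read_mostly = true;
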
> diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
> index cead889..1581f6d 100644
> --- a/arch/arm/kernel/vmlinux.lds.S
> +++ b/arch/arm/kernel/vmlinux.lds.S
> @@ -167,6 +167,7 @@ SECTIONS
>
> NOSAVE_DATA
> CACHELINE_ALIGNED_DATA(32)
> + READ_MOSTLY_DATA(32)
Should we change these alignments to 64 bytes?
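
For illustration only, one way to avoid hard-coding either value would be to
take the alignment from the configured cache geometry; this is a sketch, not
something the patch above does:

    #include <asm/cache.h>		/* for L1_CACHE_BYTES */
    	...
    	NOSAVE_DATA
    	CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
    	READ_MOSTLY_DATA(L1_CACHE_BYTES)

That way a CPU with 64-byte cache lines would get the wider alignment without
another edit to the linker script.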
--
Catalin