[PATCH] arm: align shared memory unconditionally to the SHMLBA boundary

Russell King - ARM Linux <linux at arm.linux.org.uk>
Tue Jul 16 07:07:21 EDT 2013


On Tue, Jul 16, 2013 at 02:47:45PM +0400, Alexander Kartashov wrote:
> On 07/16/2013 02:36 PM, Russell King - ARM Linux wrote:
>> shmget() doesn't allocate space in the process for the SHM region.  It
>> merely creates the shm memory and returns an identifier for it which can
>> later be used by shmat() to map it.
> Thank you for the correction, I meant shmat() in that comment indeed.
> I'm sorry for the inconvenience.
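
To spell out the distinction (a minimal sketch, no real error
handling):

#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	/* shmget() only creates the segment and returns an id;
	 * nothing is mapped into this process yet. */
	int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

	/* shmat() is what actually maps it, so the alignment
	 * question only arises here. */
	void *addr = shmat(id, NULL, 0);

	return addr == (void *) -1;
}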

Right, so it appears that there's a difference between shmat() with an
explicit address and shmat() with a NULL address, because the SHMLBA
alignment enforcement is done by two completely different bits of code
in unrelated parts of the kernel.
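
To illustrate - assuming my reading of the two paths is right, and
with a made-up fixed address:

#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

	/* NULL address: the kernel picks where to map the segment
	 * via the usual get_unmapped_area() path in the mm code,
	 * which on ARM only cache-colours the result on aliasing
	 * VIPT caches. */
	void *a = shmat(id, NULL, 0);

	/* Explicit address: do_shmat() in ipc/shm.c checks this
	 * against SHMLBA, since we define __ARCH_FORCE_SHMLBA, and
	 * SHMLBA is currently 4 * PAGE_SIZE unconditionally.  The
	 * address here is hypothetical: page aligned, but not
	 * 16K aligned. */
	void *b = shmat(id, (void *) 0x40001000, 0);

	return a == (void *) -1 || b == (void *) -1;
}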

I notice Sparc32 seems to have a fix for this along the lines of the
(untested) patch below, so let's do the same on ARM.  Please test this
and let me know if this solves your problem.

Thanks.

 arch/arm/include/asm/shmparam.h |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/shmparam.h b/arch/arm/include/asm/shmparam.h
index a5223b3..843db59 100644
--- a/arch/arm/include/asm/shmparam.h
+++ b/arch/arm/include/asm/shmparam.h
@@ -1,16 +1,16 @@
 #ifndef _ASMARM_SHMPARAM_H
 #define _ASMARM_SHMPARAM_H
 
+#include <asm/cachetype.h>
+
 /*
  * This should be the size of the virtually indexed cache/ways,
  * or page size, whichever is greater since the cache aliases
  * every size/ways bytes.
  */
-#define	SHMLBA	(4 * PAGE_SIZE)		 /* attach addr a multiple of this */
+#define	SHMLBA	(cache_is_vipt_aliasing() ? (4 * PAGE_SIZE) : PAGE_SIZE)
 
-/*
- * Enforce SHMLBA in shmat
- */
+/* Enforce SHMLBA in shmat */
 #define __ARCH_FORCE_SHMLBA
 
 #endif /* _ASMARM_SHMPARAM_H */
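
Something along these lines (equally untested) should exercise the
failing case: attach at a kernel-chosen address, detach, then
re-attach at exactly that address.  On a non-aliasing cache the
re-attach can fail with EINVAL without the patch and should succeed
with it:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
	void *addr, *again;

	if (id == -1) {
		perror("shmget");
		return 1;
	}

	/* Let the kernel choose the address. */
	addr = shmat(id, NULL, 0);
	if (addr == (void *) -1) {
		perror("shmat (NULL)");
		return 1;
	}
	printf("kernel placed segment at %p\n", addr);
	shmdt(addr);

	/* Re-attach at exactly the address the kernel gave us. */
	again = shmat(id, addr, 0);
	if (again == (void *) -1)
		perror("shmat (fixed address)");
	else
		printf("re-attached at %p\n", again);

	shmctl(id, IPC_RMID, NULL);
	return 0;
}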
