[PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000
Anshuman Khandual
anshuman.khandual at arm.com
Thu Jun 23 21:43:14 PDT 2022
Build protect the generic protection_map[] array with #ifdef __P000, so that it
can be moved into the platforms one after the other. Otherwise there would be
build failures during that transition. CONFIG_ARCH_HAS_VM_GET_PAGE_PROT cannot
be used as the guard for this purpose, because only certain platforms enable
that config at this point.
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: linux-mm at kvack.org
Cc: linux-kernel at vger.kernel.org
Suggested-by: Christophe Leroy <christophe.leroy at csgroup.eu>
Signed-off-by: Anshuman Khandual <anshuman.khandual at arm.com>
---
include/linux/mm.h | 2 ++
mm/mmap.c | 2 ++
2 files changed, 4 insertions(+)
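
For reference, here is a minimal sketch of the kind of per-platform conversion
this guard is meant to enable. It is not part of this patch: the architecture
name "xyz" and the PAGE_* protection names are only illustrative placeholders,
and the real conversions follow later in this series. The platform selects
ARCH_HAS_VM_GET_PAGE_PROT, stops defining __P000..__S111 so that both the
extern declaration and the generic table guarded below compile out, and then
owns the mapping itself:

/* arch/xyz/Kconfig: select ARCH_HAS_VM_GET_PAGE_PROT */

/*
 * arch/xyz/include/asm/pgtable.h: drop the __P000 .. __S111 defines, so the
 * generic declaration in <linux/mm.h> and the generic table in mm/mmap.c
 * both compile out via the new #ifdef __P000 guards.
 */

/* arch/xyz/mm/mmap.c: the platform carries its own table and accessor */
static pgprot_t protection_map[16] __ro_after_init = {
	[VM_NONE]					= PAGE_NONE,
	[VM_READ]					= PAGE_READONLY,
	/* ... remaining vm_flags combinations ... */
	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_SHARED_EXEC,
};

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	return protection_map[vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);
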
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8f326be0ce..47bfe038d46e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -424,7 +424,9 @@ extern unsigned int kobjsize(const void *objp);
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
+#ifdef __P000
extern pgprot_t protection_map[16];
+#endif
/*
* The default fault flags that should be used by most of the
diff --git a/mm/mmap.c b/mm/mmap.c
index 61e6135c54ef..b01f0280bda2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
+#ifdef __P000
pgprot_t protection_map[16] __ro_after_init = {
[VM_NONE] = __P000,
[VM_READ] = __P001,
@@ -119,6 +120,7 @@ pgprot_t protection_map[16] __ro_after_init = {
[VM_SHARED | VM_EXEC | VM_WRITE] = __S110,
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
};
+#endif
#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
pgprot_t vm_get_page_prot(unsigned long vm_flags)
--
2.25.1