[PATCH v2 1/2] ARM: ioremap: Sync PGDs for VMALLOC shadow
Mark Rutland
mark.rutland at arm.com
Thu Oct 17 03:30:23 PDT 2024
On Wed, Oct 16, 2024 at 09:15:21PM +0200, Linus Walleij wrote:
> When syncing the VMALLOC area to other CPUs, make sure to also
> sync the KASAN shadow memory for the VMALLOC area, so that we
> don't get stale entries for the shadow memory in the top level PGD.
>
> Since we are now copying PGDs in two places, create a helper
> function named memcpy_pgd() to do the actual copying, and a
> second helper that translates VMALLOC_START and VMALLOC_END
> into the corresponding shadow memory addresses.
>
> Cc: stable at vger.kernel.org
> Fixes: 565cbaad83d8 ("ARM: 9202/1: kasan: support CONFIG_KASAN_VMALLOC")
> Link: https://lore.kernel.org/linux-arm-kernel/a1a1d062-f3a2-4d05-9836-3b098de9db6d@foss.st.com/
> Reported-by: Clement LE GOFFIC <clement.legoffic at foss.st.com>
> Suggested-by: Mark Rutland <mark.rutland at arm.com>
> Suggested-by: Russell King (Oracle) <linux at armlinux.org.uk>
> Signed-off-by: Linus Walleij <linus.walleij at linaro.org>
> ---
> arch/arm/mm/ioremap.c | 25 +++++++++++++++++++++----
> 1 file changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
> index 794cfea9f9d4..94586015feed 100644
> --- a/arch/arm/mm/ioremap.c
> +++ b/arch/arm/mm/ioremap.c
> @@ -23,6 +23,7 @@
> */
> #include <linux/module.h>
> #include <linux/errno.h>
> +#include <linux/kasan.h>
> #include <linux/mm.h>
> #include <linux/vmalloc.h>
> #include <linux/io.h>
> @@ -115,16 +116,32 @@ int ioremap_page(unsigned long virt, unsigned long phys,
> }
> EXPORT_SYMBOL(ioremap_page);
>
> +static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
> +{
> +	return (unsigned long)kasan_mem_to_shadow((void *)addr);
> +}
> +
> +static void memcpy_pgd(struct mm_struct *mm, unsigned long start,
> +		       unsigned long end)
> +{
> +	memcpy(pgd_offset(mm, start), pgd_offset_k(start),
> +	       sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
> +}
> +
> void __check_vmalloc_seq(struct mm_struct *mm)
> {
> 	int seq;
>
> 	do {
> 		seq = atomic_read(&init_mm.context.vmalloc_seq);
> -		memcpy(pgd_offset(mm, VMALLOC_START),
> -		       pgd_offset_k(VMALLOC_START),
> -		       sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
> -					pgd_index(VMALLOC_START)));
> +		memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
> +		if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> +			unsigned long start =
> +				arm_kasan_mem_to_shadow(VMALLOC_START);
> +			unsigned long end =
> +				arm_kasan_mem_to_shadow(VMALLOC_END);
> +			memcpy_pgd(mm, start, end);
> +		}
This looks good; FWIW:
Acked-by: Mark Rutland <mark.rutland at arm.com>
As a separate thing, I believe we also need to use atomic_read_acquire()
for the reads of vmalloc_seq to pair with the atomic_*_release() on each
update.
Otherwise, this can be reordered, e.g.
	do {
		memcpy_pgd(...);
		seq = atomic_read(&init_mm.context.vmalloc_seq);
		atomic_set_release(&mm->context.vmalloc_seq, seq);
	} while (seq != atomic_read(&init_mm.context.vmalloc_seq));
... and we might fail to copy the relevant table entries from init_mm,
but still think we're up-to-date and update mm's vmalloc_seq.
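For the avoidance of doubt, a rough (untested) sketch of what I mean,
with the body of the loop otherwise as in this patch:

	void __check_vmalloc_seq(struct mm_struct *mm)
	{
		int seq;

		do {
			/*
			 * Untested sketch: the acquire pairs with the
			 * atomic_*_release() on the update side, so the copies
			 * below cannot be hoisted above the read of vmalloc_seq.
			 */
			seq = atomic_read_acquire(&init_mm.context.vmalloc_seq);
			memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
			if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
				unsigned long start =
					arm_kasan_mem_to_shadow(VMALLOC_START);
				unsigned long end =
					arm_kasan_mem_to_shadow(VMALLOC_END);
				memcpy_pgd(mm, start, end);
			}
			/* Publish the copies before exposing the new seq. */
			atomic_set_release(&mm->context.vmalloc_seq, seq);
		} while (seq != atomic_read(&init_mm.context.vmalloc_seq));
	}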
Ard, does that sound right to you, or am I missing something?
Mark.