[PATCH master 4/4] ARM: mmu: invalidate when mapping range uncached

Ahmad Fatoum a.fatoum at pengutronix.de
Fri May 26 05:49:19 PDT 2023


On 26.05.23 08:33, Ahmad Fatoum wrote:
> memtest can call remap_range to map regions being tested as uncached,
> but remap_range did not take care to evict any stale cache lines.
> Do this now.
> 
> This fixes an issue of SELFTEST_MMU failing on an i.MX8MN, when running
> memtest on an uncached region that was previously memtested while being
> cached.
> 
> Fixes: 3100ea146688 ("ARM: rework MMU support")
> Fixes: 7cc98fbb6128 ("arm: cpu: add basic arm64 mmu support")
> Signed-off-by: Ahmad Fatoum <a.fatoum at pengutronix.de>
> ---
>  arch/arm/cpu/mmu_32.c | 4 ++++
>  arch/arm/cpu/mmu_64.c | 4 ++++
>  2 files changed, 8 insertions(+)
> 
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index a324ebf71a55..c4e5a3bb0ab2 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -327,6 +327,10 @@ static void early_remap_range(u32 addr, size_t size, unsigned map_type)
>  int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
>  {
>  	__arch_remap_range(virt_addr, phys_addr, size, map_type);
> +
> +	if (map_type == MAP_UNCACHED)
> +		dma_inv_range(virt_addr, size);
> +
>  	return 0;
>  }
>  
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index 940e0e914c43..63e70963224a 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -191,6 +191,10 @@ int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsign
>  		return -EINVAL;
>  
>  	create_sections((uint64_t)virt_addr, phys_addr, (uint64_t)size, attrs);
> +
> +	if (flags == MAP_UNCACHED)
> +		dma_inv_range(virt_addr, size);

Lucas mentioned that it may be surprising for a remap to result in data loss
and suggested flushing the range before remapping. This turns out to be
non-trivial: dma_flush_range (which works on virtual addresses) causes a
DataAbort when called on the null page or on an OP-TEE region:

DABT (current EL) exception (ESR 0x9600014a) at 0x000000007e000000
elr: 000000007dd96054 lr : 000000007dd95c74
x0 : 000000007e000000 x1 : 000000007fbfffff
x2 : 0000000000000040 x3 : 000000000000003f
x4 : 0000000000000020 x5 : 000000007dff7908
x6 : 0000000000000030 x7 : 000000007dff7a4a
x8 : 0000000000000000 x9 : 0000000000000000
x10: 0000000000000000 x11: 00000000fffffffd
x12: 00000000fffffffd x13: 0000000000000020
x14: 0000000000000000 x15: 0000000000000001
x16: 000000007dff7908 x17: 0000000000000001
x18: 000000007dff7e90 x19: 000000007e000000
x20: 0000000001c00000 x21: 0000000000000000
x22: 0040000000000600 x23: 000000007e000000
x24: 000000004002b360 x25: 000000007dfe0000
x26: 000000000008ae15 x27: 000000005f0f8de0
x28: 0000000000000000 x29: 000000007dff7e80

Call trace:
[<7dd96054>] (v8_flush_dcache_range+0x1c/0x34) from [<7dd95ce0>] (arch_remap_range+0x64/0xa8)
[<7dd95ce0>] (arch_remap_range+0x64/0xa8) from [<7dd95e04>] (__mmu_init+0xe0/0x10c)
[<7dd95e04>] (__mmu_init+0xe0/0x10c) from [<7dd94bc0>] (mmu_init+0x7c/0xa4)
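
For reference, the flush-before-remap variant that produces the trace above has
roughly this shape (a sketch against mmu_64.c, not the exact diff; the attribute
lookup at the top is paraphrased):

int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
		     unsigned flags)
{
	unsigned long attrs = get_pte_attrs(flags);	/* existing MAP_* -> attrs lookup */

	if (attrs == ~0UL)
		return -EINVAL;

	/*
	 * Write back the old, possibly cached, contents before the mapping
	 * changes, so no data is silently dropped. This is where the DABT
	 * above hits: __mmu_init() also remaps regions that must never be
	 * accessed, e.g. the null page and the OP-TEE reservation.
	 */
	dma_flush_range(virt_addr, size);

	create_sections((uint64_t)virt_addr, phys_addr, (uint64_t)size, attrs);

	return 0;
}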

The alternatives would be to flush everything by set/way (perhaps too costly),
to figure out why the flush aborts on these regions, or to start keeping track
of which ranges are mapped how, so we don't need to iterate over all pages.
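
If we ever go the accounting route, a minimal sketch could look like the
following (the struct and function names are made up for illustration, nothing
like this exists in barebox today):

#include <linux/types.h>
#include <linux/list.h>
#include <mmu.h>

/*
 * Hypothetical bookkeeping of how ranges are currently mapped, so a remap
 * to uncached would only flush ranges that really were mapped cached and
 * would skip MAP_FAULT regions like the null page or OP-TEE.
 */
struct mapping {
	unsigned long start;
	size_t size;
	unsigned map_type;	/* MAP_CACHED, MAP_UNCACHED, MAP_FAULT */
	struct list_head list;
};

static LIST_HEAD(mappings);

static bool range_was_cached(unsigned long start, size_t size)
{
	struct mapping *m;

	list_for_each_entry(m, &mappings, list)
		if (m->map_type == MAP_CACHED &&
		    start < m->start + m->size && m->start < start + size)
			return true;

	return false;
}

arch_remap_range() would then record every call in such a list and only
dma_flush_range() the overlapping parts that range_was_cached() reports,
which is considerably more machinery than this fix needs.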

Existing users don't mind a remap discarding data:

  - either they manually flush beforehand, like dma_alloc_coherent does,
  - or they always write before reading back (memtest, socfpga fncpy; see the
    sketch below),
  - or they are legacy, non-DT-enabled framebuffer drivers that don't
    zero-initialize their framebuffer anyway,
  - or the region isn't writable to begin with (HAB ROM).
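
To illustrate the write-before-read case: a memtest-style user behaves roughly
like this (simplified, not the actual barebox memtest code), so any cache lines
dropped by the new invalidate never held data the caller still needed:

#include <linux/types.h>
#include <errno.h>
#include <mmu.h>

static int test_region_uncached(void *buf, size_t size, u32 pattern)
{
	volatile u32 *p = buf;
	size_t i, n = size / sizeof(u32);
	int ret = 0;

	remap_range(buf, size, MAP_UNCACHED);	/* may now invalidate stale lines */

	for (i = 0; i < n; i++)			/* the region is written in full ... */
		p[i] = pattern;

	for (i = 0; i < n; i++)			/* ... before anything is read back */
		if (p[i] != pattern)
			ret = -EIO;

	remap_range(buf, size, MAP_CACHED);

	return ret;
}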

Given the above, I'd suggest we take the patch as-is: it fixes a bug and no
upstream user is affected by the side effect.

Cheers,
Ahmad
  

> +
>  	return 0;
>  }
>  

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |



