[PATCH v3] arm64: mm: fix linear mapping mem access performance degradation
Catalin Marinas
catalin.marinas at arm.com
Fri Jul 1 10:24:28 PDT 2022
On Thu, Jun 30, 2022 at 06:50:22PM +0800, Guanghui Feng wrote:
> +static void init_pmd_remap(pud_t *pudp, unsigned long addr, unsigned long end,
> +			   phys_addr_t phys, pgprot_t prot,
> +			   phys_addr_t (*pgtable_alloc)(int), int flags)
> +{
> +	unsigned long next;
> +	pmd_t *pmdp;
> +	phys_addr_t map_offset;
> +	pmdval_t pmdval;
> +
> +	pmdp = pmd_offset(pudp, addr);
> +	do {
> +		next = pmd_addr_end(addr, end);
> +
> +		if (!pmd_none(*pmdp) && pmd_sect(*pmdp)) {
> +			phys_addr_t pte_phys = pgtable_alloc(PAGE_SHIFT);
> +			pmd_clear(pmdp);
> +			pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
> +			if (flags & NO_EXEC_MAPPINGS)
> +				pmdval |= PMD_TABLE_PXN;
> +			__pmd_populate(pmdp, pte_phys, pmdval);
> +			flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

This doesn't follow the architecture requirements for "break before
make" when changing live page tables. While it may work, it risks
triggering a TLB conflict abort. The correct sequence normally is:
	pmd_clear();
	flush_tlb_kernel_range();
	__pmd_populate();
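
For illustration, a minimal sketch of that ordering applied to the hunk
above, reusing the patch's own variables. Flushing the whole PMD range
is my assumption here; the patch itself only flushes a single page:

	if (!pmd_none(*pmdp) && pmd_sect(*pmdp)) {
		phys_addr_t pte_phys = pgtable_alloc(PAGE_SHIFT);

		pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
		if (flags & NO_EXEC_MAPPINGS)
			pmdval |= PMD_TABLE_PXN;

		/* Break: invalidate the live block entry first ... */
		pmd_clear(pmdp);
		/*
		 * ... and flush before installing the new entry.
		 * Assumption: flush the whole PMD range, not just one page.
		 */
		flush_tlb_kernel_range(addr, addr + PMD_SIZE);
		/* Make: only now publish the replacement table entry. */
		__pmd_populate(pmdp, pte_phys, pmdval);
	}
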
However, do we have any guarantees that the kernel doesn't access the
pmd range while it is temporarily unmapped? The page tables themselves
may live in one of these sections, so set_pmd() etc. could take a
translation fault.
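
Concretely (purely illustrative), if pmdp itself happened to lie inside
the [addr, addr + PMD_SIZE) range being split, even the correct ordering
above would fault on the final write:

	/* Illustrative: assumes pmdp points into the range being split. */
	pmd_clear(pmdp);				/* break */
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);	/* translation gone */
	__pmd_populate(pmdp, pte_phys, pmdval);		/* the store to *pmdp
							 * now walks the
							 * unmapped range
							 * -> translation
							 * fault */
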
--
Catalin