[PATCH v2 1/3] arm64/mm: Refactor PMD_PRESENT_INVALID and PTE_PROT_NONE bits

David Hildenbrand david at redhat.com
Mon Apr 29 09:12:57 PDT 2024


On 29.04.24 16:02, Ryan Roberts wrote:
> Currently the PMD_PRESENT_INVALID and PTE_PROT_NONE functionality
> explicitly occupy 2 bits in the PTE when PTE_VALID/PMD_SECT_VALID is
> clear. This has 2 significant consequences:
> 
>    - PTE_PROT_NONE consumes a precious SW PTE bit that could be used for
>      other things.
>    - The swap pte layout must reserve those same 2 bits and ensure they
>      are both always zero for a swap pte. It would be nice to reclaim at
>      least one of those bits.
> 
> Note that while PMD_PRESENT_INVALID technically only applies to pmds,
> the swap pte layout is common to ptes and pmds so we are currently
> effectively reserving that bit at both levels.
> 
> Let's replace PMD_PRESENT_INVALID with a more generic PTE_INVALID bit,
> which occupies the same position (bit 59) but applies uniformly to
> page/block descriptors at any level. This bit is only interpretted when

s/interpretted/interpreted/

> PTE_VALID is clear. If it is set, then the pte is still considered
> present; pte_present() returns true and all the fields in the pte follow
> the HW interpretation (e.g. SW can safely call pte_pfn(), etc). But
> crucially, the HW treats the pte as invalid and will fault if it hits.
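
For anyone following along, I'd expect the new helpers to end up looking
roughly like this -- a sketch reconstructed from the description above,
not the actual patch hunks (the PTE_INVALID name follows the commit
message; the helper bodies are my guess):

#define PTE_INVALID		(_AT(pteval_t, 1) << 59)

static inline int pte_invalid(pte_t pte)
{
	/*
	 * Per the description, bit 59 is only interpreted by SW when
	 * PTE_VALID is clear: present to SW, but faults in HW.
	 */
	return (pte_val(pte) & (PTE_VALID | PTE_INVALID)) == PTE_INVALID;
}

static inline int pte_present(pte_t pte)
{
	/* present == valid in HW, or marked invalid-but-present by SW */
	return !!(pte_val(pte) & (PTE_VALID | PTE_INVALID));
}

which would keep pte_pfn() and friends working unchanged on the invalid
form, since all the other fields retain their HW layout.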
> 
> With this in place, we can remove PTE_PROT_NONE entirely and instead
> represent PROT_NONE as a present but invalid pte (PTE_VALID=0,
> PTE_INVALID=1) with PTE_USER=0 and PTE_UXN=1. This is a unique
> combination that is not used anywhere else.
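
And if I read the encoding right, pte_protnone() then reduces to
checking for that combination directly -- again just my sketch of what
the description implies, not quoted from the patch:

static inline int pte_protnone(pte_t pte)
{
	/*
	 * PROT_NONE is present-but-invalid (PTE_VALID clear, PTE_INVALID
	 * set) with PTE_USER clear and PTE_UXN set; per the description
	 * above, no other present pte uses this combination.
	 */
	pteval_t mask = PTE_VALID | PTE_INVALID | PTE_USER | PTE_UXN;

	return (pte_val(pte) & mask) == (PTE_INVALID | PTE_UXN);
}

Nice that this no longer burns a dedicated SW bit just for PROT_NONE.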
> 
> The net result is a clearer, simpler, more generic encoding scheme that
> applies uniformly to all levels. Additionally we free up a PTE SW bit
> and a swap pte bit (bit 58 in both cases).
> 
> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>

Not an expert on all the details, but nothing jumped out at me.

-- 
Cheers,

David / dhildenb
