[PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings
Ryan Roberts
ryan.roberts at arm.com
Tue Nov 28 03:58:25 PST 2023
On 28/11/2023 03:13, Yang Shi wrote:
> On Mon, Nov 27, 2023 at 1:15 AM Ryan Roberts <ryan.roberts at arm.com> wrote:
>>
>> On 27/11/2023 03:18, Barry Song wrote:
>>>> Ryan Roberts (14):
>>>> mm: Batch-copy PTE ranges during fork()
>>>> arm64/mm: set_pte(): New layer to manage contig bit
>>>> arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
>>>> arm64/mm: pte_clear(): New layer to manage contig bit
>>>> arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
>>>> arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
>>>> arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
>>>> arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
>>>> arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
>>>> arm64/mm: ptep_get(): New layer to manage contig bit
>>>> arm64/mm: Split __flush_tlb_range() to elide trailing DSB
>>>> arm64/mm: Wire up PTE_CONT for user mappings
>>>> arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
>>>> arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
>>>
>>> Hi Ryan,
>>> Not quite sure if I missed something - are we splitting/unfolding CONTPTEs
>>> in the cases below?
>>
>> The general idea is that the core-mm sets the individual ptes (one at a time if
>> it likes with set_pte_at(), or in a block with set_ptes()), modifies their
>> permissions (ptep_set_wrprotect(), ptep_set_access_flags()) and clears them
>> (ptep_clear(), etc.). This is exactly the same interface as before.
>>
>> BUT, the arm64 implementation of those interfaces will now detect when a set of
>> adjacent PTEs (a contpte block - so 16 naturally aligned entries when using 4K
>> base pages) are all appropriate for having the PTE_CONT bit set; in this case
>> the block is "folded". And it will detect when the first PTE in the block
>> changes such that the PTE_CONT bit must now be unset ("unfolded"). One of the
>> requirements for folding a contpte block is that all the pages must belong to
>> the *same* folio (that means it's safe to only track access/dirty for the contpte
>> block as a whole rather than for each individual pte).
>>
>> (there are a couple of optimizations that make the reality slightly more
>> complicated than what I've just explained, but you get the idea).
>>
>> On that basis, I believe all the specific cases you describe below are
>> covered and safe - please let me know if you think there is a hole here!
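To make the eligibility check concrete, here is a minimal standalone C sketch
of the rule described above. This is not the actual arm64 code: the toy_pte
layout, CONTPTE_NR and the helpers are simplified stand-ins, and the real
contpte_try_fold() additionally checks that all pages belong to the same folio.

#include <stdbool.h>
#include <stdint.h>

#define CONTPTE_NR      16              /* entries per contpte block with 4K base pages */
#define PTE_VALID       (1ULL << 0)     /* "valid/present" bit in this toy format */
#define PTE_PFN_SHIFT   12
#define PTE_ATTR_MASK   0xfffULL        /* low attribute bits in this toy format */

struct toy_pte { uint64_t val; };

static bool toy_pte_valid(struct toy_pte pte)     { return pte.val & PTE_VALID; }
static uint64_t toy_pte_pfn(struct toy_pte pte)   { return pte.val >> PTE_PFN_SHIFT; }
static uint64_t toy_pte_attrs(struct toy_pte pte) { return pte.val & PTE_ATTR_MASK; }

/*
 * A block of CONTPTE_NR ptes is foldable if every entry is valid, the
 * physical pages are consecutive and naturally aligned to the block size,
 * and the attributes (permissions, memory type, ...) are identical across
 * the block.
 */
static bool contpte_block_foldable(const struct toy_pte *ptes)
{
        uint64_t pfn0, attrs0;
        int i;

        if (!toy_pte_valid(ptes[0]))
                return false;

        pfn0 = toy_pte_pfn(ptes[0]);
        attrs0 = toy_pte_attrs(ptes[0]);

        /* The first pfn must be naturally aligned to the block size. */
        if (pfn0 % CONTPTE_NR)
                return false;

        for (i = 1; i < CONTPTE_NR; i++) {
                if (!toy_pte_valid(ptes[i]) ||
                    toy_pte_pfn(ptes[i]) != pfn0 + i ||
                    toy_pte_attrs(ptes[i]) != attrs0)
                        return false;
        }

        return true;
}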
>>
>>>
>>> 1. madvise(MADV_DONTNEED) on a part of basepages on a CONTPTE large folio
>>
>> The page will first be unmapped (e.g. with ptep_clear() or ptep_get_and_clear(),
>> or whatever). The implementation of that will cause an unfold, and the PTE_CONT
>> bit is removed from the whole contpte block. If there is then a subsequent
>> set_pte_at() to set a swap entry, the implementation will see that it's not
>> appropriate to re-fold, so the range will remain unfolded.
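As a rough sketch of that flow (same toy types as above; toy_contpte_unfold()
and toy_ptep_get_and_clear() are illustrative stand-ins, not the real helpers):
clearing an entry of a folded block unfolds the block first, and a swap entry
written afterwards is not valid, so contpte_block_foldable() above rejects it
and no re-fold happens.

#define PTE_CONT        (1ULL << 52)    /* toy stand-in for the contiguous hint bit */

/*
 * Unfold: rewrite the whole block without the contiguous bit. The real code
 * sequences this carefully to avoid misprogramming (see the convert sketch
 * further down); a plain loop is enough to show the idea here.
 */
static void toy_contpte_unfold(struct toy_pte *block)
{
        int i;

        for (i = 0; i < CONTPTE_NR; i++)
                block[i].val &= ~PTE_CONT;
}

/*
 * Clearing one entry of a folded block unfolds the whole block first, then
 * clears just that entry. A swap entry written back later is not valid, so
 * contpte_block_foldable() rejects it and the block stays unfolded.
 */
static struct toy_pte toy_ptep_get_and_clear(struct toy_pte *block,
                                             unsigned int idx)
{
        struct toy_pte old;

        if (block[0].val & PTE_CONT)
                toy_contpte_unfold(block);

        old = block[idx];
        block[idx].val = 0;

        return old;
}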
>>
>>>
>>> 2. vma split in a large folio due to various reasons such as mprotect,
>>> munmap, mlock etc.
>>
>> I'm not sure whether PTEs are explicitly unmapped/remapped when splitting a VMA;
>> I suspect not, so if the VMA is split in the middle of a currently folded contpte
>> block, it will remain folded. But this is safe and continues to work correctly.
>> The VMA arrangement is not important; it is just important that a single folio
>> is mapped contiguously across the whole block.
>
> Even with different permissions, for example, read-only vs read-write?
> The mprotect() may change the permissions. That would be misprogramming
> per the Arm ARM.
If the permissions are changed, then mprotect() must have called the pgtable
helpers to modify the page table (e.g. ptep_set_wrprotect(),
ptep_set_access_flags() or whatever). These functions will notice that the
contpte block is currently folded and unfold it before applying the permissions
change. The unfolding process is done in a way that intentionally avoids
misprogramming as defined by the Arm ARM. See contpte_fold() in contpte.c.
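As a very rough sketch of that sequencing (again with the toy types above;
toy_contpte_convert() and toy_flush_tlb_range() are made-up stand-ins, and the
real code in contpte.c also has to gather access/dirty state from the old
entries, which is omitted here): the whole block is invalidated and the TLB
flushed before the new entries are written, so the hardware never observes
valid entries that disagree on the contiguous bit.

static void toy_flush_tlb_range(unsigned long addr, unsigned long nr_pages)
{
        /* Stand-in: the real code issues TLBI instructions for the range. */
        (void)addr;
        (void)nr_pages;
}

/*
 * Fold or unfold a block without the walker ever seeing a mix of valid
 * entries that disagree on the contiguous bit: invalidate everything,
 * flush the TLB, then write the new entries back consistently.
 */
static void toy_contpte_convert(struct toy_pte *block, unsigned long addr,
                                bool set_cont)
{
        struct toy_pte old[CONTPTE_NR];
        int i;

        /* 1. Clear every entry so none of the block is live. */
        for (i = 0; i < CONTPTE_NR; i++) {
                old[i] = block[i];
                block[i].val = 0;
        }

        /* 2. Flush the TLB for the whole block before writing new entries. */
        toy_flush_tlb_range(addr, CONTPTE_NR);

        /* 3. Rewrite each entry with the contiguous bit consistently set/clear. */
        for (i = 0; i < CONTPTE_NR; i++)
                block[i].val = set_cont ? (old[i].val | PTE_CONT)
                                        : (old[i].val & ~PTE_CONT);
}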
>
>>
>>>
>>> 3. try_to_unmap_one() to reclaim a folio, ptes are scanned one by one
>>> rather than being as a whole.
>>
>> Yes, as per 1; the arm64 implementation will notice when the first entry is
>> cleared and unfold the contpte block.
>>
>>>
>>> In hardware, we need to make sure CONTPTEs follow the rule - always 16
>>> contiguous physical addresses with CONTPTE set. If one of them breaks away
>>> from the group of 16 PTEs and the PTEs become inconsistent, terrible
>>> errors/faults can happen in HW. For example:
>>
>> Yes, the implementation obeys all these rules; see contpte_try_fold() and
>> contpte_try_unfold(). The fold/unfold operation is only done when all
>> requirements are met, and we perform it in a manner that is conformant to the
>> architecture requirements (see contpte_fold() - being renamed to
>> contpte_convert() in the next version).
>>
>> Thanks for the review!
>>
>> Thanks,
>> Ryan
>>
>>>
>>> case0:
>>> addr0 PTE - has no CONTPTE
>>> addr0+4kb PTE - has CONTPTE
>>> ....
>>> addr0+60kb PTE - has CONTPTE
>>>
>>> case 1:
>>> addr0 PTE - has no CONTPTE
>>> addr0+4kb PTE - has CONTPTE
>>> ....
>>> addr0+60kb PTE - has swap
>>>
>>> Inconsistent 16 PTEs will lead to crashes even in the firmware, based on
>>> our observation.
>>>
>>> Thanks
>>> Barry
>>>
>>>
>>
>>