[RFC PATCH v1 0/4] Reduce cost of ptep_get_lockless on arm64

Ryan Roberts ryan.roberts at arm.com
Mon Apr 15 02:28:51 PDT 2024


On 12/04/2024 21:16, David Hildenbrand wrote:
>>
>> Yes agreed - 2 types: "lockless walkers that later recheck under PTL" and
>> "lockless walkers that never take the PTL".
>>
>> Detail: the part about disabling interrupts and TLB flush syncing is
>> arch-specific. That's not how arm64 does it (the hw broadcasts the TLBIs). But
>> you make that clear further down.
> 
> Yes, but disabling interrupts is also required for RCU-freeing of page tables
> such that they can be walked safely. The TLB flush IPI is arch-specific and is
> indeed there to sync against PTE invalidation (before generic GUP-fast).
> [...]
> 
>>>>
>>>> Could it be this easy? My head is hurting...
>>>
>>> I think what has to happen is:
>>>
>>> (1) ptep_get_lockless() must return the same value as ptep_get() as long as there
>>> are no races. No removal/addition of access/dirty bits etc.
>>
>> Today's arm64 ptep_get() guarantees this.
>>
>>>
>>> (2) Lockless page table walkers that later verify under the PTL can handle
>>> serious "garbage PTEs". This is our page fault handler.
>>
>> This isn't really a property of ptep_get_lockless(); it's a statement about a
>> class of users. I agree with the statement.
> 
> Yes. That's a requirement for the user of ptep_get_lockless(), such as page
> fault handlers. Well, mostly "not GUP".
> 
>>
>>>
>>> (3) Lockless page table walkers that cannot verify under PTL cannot handle
>>> arbitrary garbage PTEs. This is GUP-fast. Two options:
>>>
>>> (3a) ptep_get_lockless() can atomically read the PTE: We re-check later if the
>>> atomically-read PTE is still unchanged (without PTL). No IPI for TLB flushes
>>> required. This is the common case. HW might concurrently set access/dirty bits,
>>> so we can race with that. But we don't read garbage.
>>
>> Today's arm64 ptep_get() cannot guarantee that the access/dirty bits are
>> consistent for contpte ptes. That's the bit that complicates the current
>> ptep_get_lockless() implementation.
>>
>> But the point I was trying to make is that GUP-fast does not actually care about
>> *all* the fields being consistent (e.g. access/dirty). So we could spec
>> ptep_get_lockless() to say that "all fields in the returned pte are guaranteed
>> to be self-consistent except for access and dirty information, which may be
>> inconsistent if a racing modification occurred".
> 
> We *might* have KVM in the future want to check that a PTE is dirty, such that
> we can only allow dirty PTEs to be writable in a secondary MMU. That's not there
> yet, but it's something I was discussing on the list recently. Buried in:
> 
> https://lkml.kernel.org/r/20240320005024.3216282-1-seanjc@google.com
> 
> We wouldn't care about racing modifications, as long as MMU notifiers will
> properly notify us when the PTE would lose its dirty bits.
> 
> But getting false-positive dirty bits would be problematic.
> 
>>
>> This could mean that the access/dirty state *does* change for a given page while
>> GUP-fast is walking it, but GUP-fast *doesn't* detect that change. I *think*
>> that failing to detect this is benign.
> 
> I mean, HW could just set the dirty/access bit immediately after the check. So
> if HW concurrently sets the bit and we don't observe that change when we
> recheck, I think that would be perfectly fine.

Yes indeed; that's my point - GUP-fast doesn't care about access/dirty (or
soft-dirty or uffd-wp).
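
To make that concrete: under the relaxed spec, "equal enough" for this class of
user could be expressed by masking out exactly that information before
comparing. A minimal sketch (pte_same_ignoring_young_dirty() is a made-up name,
not an existing helper; it only builds on the existing pte_mkold(),
pte_mkclean() and pte_same() helpers, and soft-dirty/uffd-wp would want the
same treatment):

/*
 * Hypothetical helper, purely illustrative: two present PTEs are
 * "equal enough" for a GUP-fast style user if they differ only in
 * access/dirty information.
 */
static inline bool pte_same_ignoring_young_dirty(pte_t a, pte_t b)
{
        return pte_same(pte_mkclean(pte_mkold(a)),
                        pte_mkclean(pte_mkold(b)));
}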

But if you don't want to change the ptep_get_lockless() spec to explicitly allow
this (because you have the KVM use case where false-positive dirty is
problematic), then I think we are stuck with ptep_get_lockless() as implemented
for arm64 today.
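
For context, the cost being referred to is roughly this: for a contpte mapping
the access/dirty bits can live in any of the CONT_PTES hardware entries, so a
fully self-consistent lockless read has to look at all of them and retry if
things change underneath us. A very simplified sketch, NOT the actual arm64
implementation (it assumes "first" already points at the start of the
contiguous block, and the real code has to be more careful about what triggers
a retry):

static pte_t contpte_get_lockless_sketch(pte_t *first, pte_t *ptep)
{
        pte_t orig, pte;
        int i;

retry:
        orig = pte = __ptep_get(ptep);

        /* Gather access/dirty from every entry in the contiguous block. */
        for (i = 0; i < CONT_PTES; i++) {
                pte_t entry = __ptep_get(first + i);

                if (pte_dirty(entry))
                        pte = pte_mkdirty(pte);
                if (pte_young(entry))
                        pte = pte_mkyoung(pte);
        }

        /* If the entry we care about changed meanwhile, start over. */
        if (!pte_same(orig, __ptep_get(ptep)))
                goto retry;

        return pte;
}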

> 
>>
>> Aside: GUP-fast currently rechecks the pte originally obtained with
>> ptep_get_lockless(), using ptep_get(). Is that correct? ptep_get() must conform
>> to (1), so either it returns the same pte or it returns a different pte or
>> garbage. But that garbage could just happen to be the same as the originally
>> obtained pte. So in that case, it would have a false match. I think this needs
>> to be changed to ptep_get_lockless()?
> 
> I *think* it's fine, because the case where it would make a difference (x86-PAE)
> still requires the TLB flush IPI to sync against PTE changes, and that check
> would likely be wrong in one way or the other. So for x86-PAE, that check is
> just moot either way.
> 
> That's my theory, at least.
> 
> (but this "let's fake-read atomically although we don't, but let's act like we
> could in some specific circumstances" is really hard to get right)
> 
> I was wondering a while ago if we are missing a memory barrier before the check,
> but I think the one from obtaining the page reference gets the job done (at
> least that's what I remember).
> 
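
For reference, the recheck pattern we're discussing looks roughly like this in
a GUP-fast style walker (a condensed sketch of the pattern, not mm/gup.c
verbatim; the surrounding loop, interrupt disabling and the pte validity
checks are omitted):

/*
 * Sketch only: read the PTE locklessly, take a reference on the page,
 * then re-read and compare to detect a concurrent change. This runs
 * with interrupts disabled so the page tables can't be freed under us.
 */
static int gup_fast_one_pte_sketch(pte_t *ptep)
{
        pte_t pte = ptep_get_lockless(ptep);
        struct page *page = pte_page(pte);

        /* Try to take a reference before rechecking the PTE. */
        if (!try_get_page(page))
                return 0;

        /*
         * Re-read and compare: if the PTE changed while we were taking
         * the reference, drop it and let the caller fall back to the
         * slow path.
         */
        if (pte_val(pte) != pte_val(ptep_get(ptep))) {
                put_page(page);
                return 0;
        }

        return 1;
}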
