[PATCH v1 11/14] arm64/mm: Wire up PTE_CONT for user mappings

Catalin Marinas catalin.marinas at arm.com
Sun Jul 16 08:09:52 PDT 2023


On Tue, Jul 04, 2023 at 12:09:31PM +0100, Ryan Roberts wrote:
> On 03/07/2023 16:17, Catalin Marinas wrote:
> > Hi Ryan,
> > 
> > Some comments below. I did not have time to trim down the quoted text,
> > so you may need to scroll through it.
> 
> Thanks for the review!
> 
> Looking at the comments, I think they all relate to implementation. Does that
> imply that you are happy with the shape/approach?

I can't really tell yet as there are a few dependencies and I haven't
applied them to look at the bigger picture. My preference would be to
handle the large folio breaking/making in the core code via APIs like
set_ptes() and eliminate the loop heuristics in the arm64
code for folding/unfolding. Maybe that's not entirely possible; I need
to look at the bigger picture with the whole series applied (and on a
bigger screen, I'm writing this reply on a laptop in flight).
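
Roughly what I have in mind, as a sketch only (the function name and
the exact hook are mine, not taken from your series): if the core code
always hands the arm64 side the full range via set_ptes(), the fold
decision can be made purely from the arguments, without scanning the
neighbouring ptes:

#include <linux/mm.h>
#include <linux/pgtable.h>

static void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
                             pte_t *ptep, pte_t pte, unsigned int nr)
{
        unsigned long pfn = pte_pfn(pte);
        pgprot_t prot;
        unsigned int i;

        /*
         * Sketch only: fold when the caller covers a whole, naturally
         * aligned contiguous block in a single call (a larger range
         * would want a per-block loop), rather than inferring it from
         * the surrounding ptes.
         */
        if (nr == CONT_PTES &&
            IS_ALIGNED(addr, CONT_PTE_SIZE) &&
            IS_ALIGNED(pfn, CONT_PTES))
                pte = pte_mkcont(pte);

        prot = pte_pgprot(pte);

        for (i = 0; i < nr; i++, addr += PAGE_SIZE, ptep++)
                set_pte_at(mm, addr, ptep, pfn_pte(pfn + i, prot));
}

That only works if the core mm never writes a partial block through
some other helper, which is exactly the bigger picture I still need to
look at.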

> Talking with Anshuman yesterday, he suggested putting this behind a new Kconfig
> option that defaults to disabled and also adding a command line option to
> disable it when compiled in. I think that makes sense for now at least to reduce
> risk of performance regression?

I'm fine with a Kconfig option (maybe behind EXPERT) but default
enabled, otherwise it won't get enough coverage. AFAICT, the biggest
risk of regression is the heuristics for folding/unfolding. In general,
the overhead should be offset by the reduced TLB pressure, but we may
find some pathological case where this gets in the way.
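
For the command line side, something as simple as this would do (a
sketch only; the parameter and variable names are made up, pick
whatever fits the rest of the series), with the Kconfig option just
selecting the default:

#include <linux/init.h>
#include <linux/cache.h>

/* Sketch: boot-time opt-out, default on; names are made up. */
static bool contpte_enabled __ro_after_init = true;

static int __init parse_nocontpte(char *unused)
{
        contpte_enabled = false;
        return 0;
}
early_param("arm64.nocontpte", parse_nocontpte);

If the fold/unfold checks end up on a hot path, a static key would be a
better fit than a plain bool.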

> > On Thu, Jun 22, 2023 at 03:42:06PM +0100, Ryan Roberts wrote:
> >> +		/*
> >> +		 * No need to flush here; This is always "more permissive" so we
> >> +		 * can only be _adding_ the access or dirty bit. And since the
> >> +		 * tlb can't cache an entry without the AF set and the dirty bit
> >> +		 * is a SW bit, there can be no confusion. For HW access
> >> +		 * management, we technically only need to update the flag on a
> >> +		 * single pte in the range. But for SW access management, we
> >> +		 * need to update all the ptes to prevent extra faults.
> >> +		 */
> > 
> > On pre-DBM hardware, a PTE_RDONLY entry (writable from the kernel
> > perspective but clean) may be cached in the TLB and we do need flushing.
> 
> I don't follow; The Arm ARM says:
> 
>   IPNQBP When an Access flag fault is generated, the translation table entry
>          causing the fault is not cached in a TLB.
> 
> So the entry can only be in the TLB if AF is already 1. And given the dirty bit
> is SW, it shouldn't affect the TLB state. And this function promises to only
> change the bits so they are more permissive (so AF=0 -> AF=1, D=0 -> D=1).
> 
> So I'm not sure what case you are describing here?

The comment for this function states that it sets the access/dirty
flags as well as the write permission. Prior to DBM, a clean page is
mapped with PTE_RDONLY set, and we take a permission fault on write.
This function marks the page dirty by setting the software PTE_DIRTY
bit (no TLB impact there), but it also clears PTE_RDONLY so that a
subsequent access won't fault again. We do need the TLBI here since a
PTE_RDONLY entry is allowed to be cached in the TLB.
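
For reference, this is essentially what the per-pte
ptep_set_access_flags() has to do. A simplified sketch below (the name
is made up and the real arm64 code uses a cmpxchg loop so it doesn't
race with the hardware AF/DBM updates):

#include <linux/mm.h>
#include <asm/tlbflush.h>

static int sketch_set_access_flags(struct vm_area_struct *vma,
                                   unsigned long addr, pte_t *ptep,
                                   int dirty)
{
        pte_t pte = READ_ONCE(*ptep);

        /* only ever make the entry more permissive */
        pte = pte_mkyoung(pte);         /* AF=0 entries aren't TLB-cached */
        if (dirty) {
                pte = pte_mkdirty(pte); /* software PTE_DIRTY bit */
                pte = pte_mkwrite(pte); /* sets PTE_WRITE, clears PTE_RDONLY */
        }
        set_pte_at(vma->vm_mm, addr, ptep, pte);

        /*
         * The old clean entry, with PTE_RDONLY set, may still be cached
         * in the TLB, so invalidate it before the write is retried,
         * otherwise we keep taking the same fault.
         */
        if (dirty)
                flush_tlb_page(vma, addr);
        return 1;
}

With DBM the hardware clears PTE_RDONLY itself, so neither the extra
fault nor the TLBI is needed for the dirty tracking.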

Sorry, I did not reply to your other comments (we can talk in person
in about a week's time). I also noticed you figured out the above
yourself, but I had already written this.

-- 
Catalin


