[PATCH v2] arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary
David Hildenbrand
david at redhat.com
Thu Jan 25 10:05:13 PST 2024
On 25.01.24 18:35, Ryan Roberts wrote:
> Since the high bits [51:48] of an OA are not stored contiguously in the
> PTE, there is a theoretical bug in set_ptes(), which just adds PAGE_SIZE
> to the pte to get the pte with the next pfn. This works until the pfn
> crosses the 48-bit boundary, at which point we overflow into the upper
> attributes.
>
> Of course one could argue (and Matthew Wilcox has :) that we will never
> see a folio cross this boundary because we only allow naturally aligned
> power-of-2 allocations, so this would require a half-petabyte folio. So
> it's only a theoretical bug. But it's better for the code to be robust
> regardless.
>
> I've implemented pte_next_pfn() as part of the fix. It is an opt-in
> core-mm interface, so it is now available to the core-mm, where it will
> shortly be needed to support forthcoming fork()-batching optimizations.
>
> Fixes: 4a169d61c2ed ("arm64: implement the new page table range API")
> Closes: https://lore.kernel.org/linux-mm/fdaeb9a5-d890-499a-92c8-d171df43ad01@arm.com/
> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
> ---
>
> Hi All,
>
> This applies on top of v6.8-rc1. It's a dependency for David's fork-batch
> work, so once Catalin has acked it, it will go through mm-unstable attached to
> David's series.
>
Reviewed-by: David Hildenbrand <david at redhat.com>
I'll include that in my fork() batching series v2, which I'll probably
send out tomorrow.
--
Cheers,
David / dhildenb