[PATCH v2] arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary

Catalin Marinas catalin.marinas at arm.com
Thu Jan 25 09:43:59 PST 2024


On Thu, Jan 25, 2024 at 05:35:34PM +0000, Ryan Roberts wrote:
> Since the high bits [51:48] of an OA are not stored contiguously in the
> PTE, there is a theoretical bug in set_ptes(), which just adds PAGE_SIZE
> to the pte to get the pte with the next pfn. This works until the pfn
> crosses the 48-bit boundary, at which point we overflow into the upper
> attributes.
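
As a sketch of the failure mode (illustrative, not the exact kernel
source): the pre-fix loop body advances the pte roughly as

	pte_val(pte) += PAGE_SIZE;

which only works while the carry stays within the contiguous OA field of
the descriptor; once the OA crosses the 48-bit boundary, the carry spills
into the upper attribute bits instead of updating OA bits [51:48], which
are stored elsewhere in the PTE.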
> 
> Of course one could argue (and Matthew Wilcox has :) that we will never
> see a folio cross this boundary because we only allow naturally aligned
> power-of-2 allocation, so this would require a half-petabyte folio. So
> it's only a theoretical bug. But it's better that the code is robust
> regardless.
> 
> I've implemented pte_next_pfn() as part of the fix, which is an opt-in
> core-mm interface. So that is now available to the core-mm, which will
> be needed shortly to support forthcoming fork()-batching optimizations.
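
The shape of the fix, as described above (a sketch, not necessarily the
exact hunk in the patch): rebuild the pte from the incremented pfn so the
non-contiguous high OA bits are re-encoded correctly, e.g.

	#define pte_next_pfn pte_next_pfn
	static inline pte_t pte_next_pfn(pte_t pte)
	{
		/* pfn_pte() re-encodes OA bits [51:48] in their proper place */
		return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
	}

and have the set_ptes() loop advance with pte = pte_next_pfn(pte) instead
of adding PAGE_SIZE to the raw pte value.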
> 
> Fixes: 4a169d61c2ed ("arm64: implement the new page table range API")
> Closes: https://lore.kernel.org/linux-mm/fdaeb9a5-d890-499a-92c8-d171df43ad01@arm.com/
> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
> ---
> 
> Hi All,
> 
> This applies on top of v6.8-rc1. It's a dependency for David's fork-batch
> work, so once Catalin has acked it, it will go through mm-unstable attached to
> David's series.

Reviewed-by: Catalin Marinas <catalin.marinas at arm.com>
