[BUG 5.14] arm64/mm: dma memory mapping fails (in some cases)

Will Deacon will at kernel.org
Wed Aug 25 03:32:23 PDT 2021


On Wed, Aug 25, 2021 at 11:28:56AM +0100, Will Deacon wrote:
> On Wed, Aug 25, 2021 at 11:20:46AM +0100, Catalin Marinas wrote:
> > Given how late we are in the -rc cycle, I suggest we revert Anshuman's
> > commit 16c9afc77660 ("arm64/mm: drop HAVE_ARCH_PFN_VALID") and try to
> > assess the implications in 5.15 (the patch doesn't seem to have the
> > arm64 maintainers' ack anyway ;)).
> 
> I'll stick the revert (below) into kernelci now so we can get some coverage
> in case it breaks something else.

Bah, having said that...

> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index fcb535560028..ee70f21a79d5 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1463,15 +1463,6 @@ static inline int pfn_valid(unsigned long pfn)
>  {
>  	struct mem_section *ms;
>  
> -	/*
> -	 * Ensure the upper PAGE_SHIFT bits are clear in the
> -	 * pfn. Else it might lead to false positives when
> -	 * some of the upper bits are set, but the lower bits
> -	 * match a valid pfn.
> -	 */
> -	if (PHYS_PFN(PFN_PHYS(pfn)) != pfn)
> -		return 0;
> -

I suppose we should leave this bit as-is, since the whole point here is
trying to minimise the impact on other architectures.
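
For context, a minimal userspace sketch (not kernel code; it assumes
PAGE_SHIFT == 12 and a 64-bit phys_addr_t, with the two macros copied
from include/linux/pfn.h) of why the round-trip check catches a pfn
with any of the upper PAGE_SHIFT bits set:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
typedef uint64_t phys_addr_t;

/* As defined in include/linux/pfn.h */
#define PFN_PHYS(x)	((phys_addr_t)(x) << PAGE_SHIFT)
#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))

int main(void)
{
	/* A well-formed pfn survives the round trip unchanged. */
	unsigned long good = 0x80000UL;

	/*
	 * Set bits in the top PAGE_SHIFT positions: PFN_PHYS() shifts
	 * them out of the 64-bit phys_addr_t, so the round trip yields
	 * a smaller pfn that happens to match a valid one.
	 */
	unsigned long bad = good | (0xfffUL << 52);

	printf("good round-trips: %d\n", PHYS_PFN(PFN_PHYS(good)) == good); /* 1 */
	printf("bad round-trips:  %d\n", PHYS_PFN(PFN_PHYS(bad)) == bad);   /* 0 */
	return 0;
}

The second case is exactly the false positive the deleted comment
describes: without the early return, pfn_valid() would go on to look
up a mem_section for the truncated value and could report it valid.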

Will


