[PATCH] arm64: mm: Add pgd_page to support RCU fast_gup

Jungseok Lee jungseoklee85 at gmail.com
Tue Dec 23 07:45:41 PST 2014

On Dec 24, 2014, at 12:25 AM, Catalin Marinas wrote:
> On Sun, Dec 21, 2014 at 10:56:33AM +0000, Will Deacon wrote:
>> On Sat, Dec 20, 2014 at 12:49:40AM +0000, Jungseok Lee wrote:
>>> This patch adds pgd_page definition in order to keep supporting
>>> HAVE_GENERIC_RCU_GUP configuration. In addition, it changes pud_page
>>> expression to align with pmd_page for readability.
>>> An introduction of pgd_page resolves the following build breakage
>>> under 4KB + 4Level memory management combo.
>>> mm/gup.c: In function 'gup_huge_pgd':
>>> mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
>>>  head = pgd_page(orig);
>>>  ^
>>> mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
>>>  head = pgd_page(orig);
>>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>>> Cc: Will Deacon <will.deacon at arm.com>
>>> Cc: Steve Capper <steve.capper at linaro.org>
>>> Signed-off-by: Jungseok Lee <jungseoklee85 at gmail.com>
>>> ---
>>> arch/arm64/include/asm/pgtable.h | 4 +++-
>>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index df22314..a1fe927 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -401,7 +401,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
>>> 	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
>>> }
>>> -#define pud_page(pud)           pmd_page(pud_pmd(pud))
>>> +#define pud_page(pud)		pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK))
>> It would be cleaner to use pud_pfn here, otherwise we end up passing a
>> physical address with the lower attributes present to __phys_to_pfn, which
>> "knows" to shift them away.
> OK, I tried this, together with aligning pmd_page to use pmd_pfn, and
> after debugging I realised why it is a bad idea. I had a feeling that I
> tried this before but didn't remember the details.
> I'll take the pmd_page() example, it is used in two scenarios:
> a) getting the first page structure of a huge page
> b) getting the page structure of the pte page pointed at by the pmd
> In the first case, we know that the pmd value is aligned to PMD_SIZE as
> it points to the physical address of a huge page, so pmd_pfn() masking
> is fine. In the second case, such masking is not fine: a pte page may be
> only PAGE_SIZE aligned. So using pmd_pfn to mask out the bits above bit
> 12 in the pmd val breaks case (b) above.
> While we don't have such use-case (b) for pud_page(), I would rather
> keep them consistent with pmd_page.
> A better fix would be to change such pmd_page() usage to pmd_pgtable()
> in the core mm code and have a different implementation for the latter.
> I guess we can leave this exercise to Steve ;).
> In the meantime, I'll merge Jungseok's original patch.

After reading this mail, I can see what happens to pmd_page().

Jungseok Lee
