[PATCH v2 2/4] ioremap: Implement TLB_INV before huge mapping

Chintan Pandya cpandya at codeaurora.org
Sun Mar 18 21:26:21 PDT 2018



On 3/16/2018 8:20 PM, Kani, Toshi wrote:
> On Fri, 2018-03-16 at 13:10 +0530, Chintan Pandya wrote:
>>
>> On 3/15/2018 9:42 PM, Kani, Toshi wrote:
>>> On Thu, 2018-03-15 at 18:15 +0530, Chintan Pandya wrote:
>>>> Huge mapping changes PMD/PUD which could have
>>>> valid previous entries. This requires proper
>>>> TLB maintenance on some architectures, like
>>>> ARM64.
>>>>
>>>> Implement BBM (break-before-make) safe TLB
>>>> invalidation.
>>>>
>>>> Here, I've used flush_tlb_pgtable() instead
>>>> of flush_tlb_kernel_range() because invalidating
>>>> intermediate page-table entries can be
>>>> optimized by a specific arch. That's the
>>>> case with ARM64 at least.
>>>>
>>>> Signed-off-by: Chintan Pandya <cpandya at codeaurora.org>
>>>> ---
>>>>    lib/ioremap.c | 25 +++++++++++++++++++------
>>>>    1 file changed, 19 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/lib/ioremap.c b/lib/ioremap.c
>>>> index 54e5bba..55f8648 100644
>>>> --- a/lib/ioremap.c
>>>> +++ b/lib/ioremap.c
>>>> @@ -13,6 +13,7 @@
>>>>    #include <linux/export.h>
>>>>    #include <asm/cacheflush.h>
>>>>    #include <asm/pgtable.h>
>>>> +#include <asm-generic/tlb.h>
>>>>    
>>>>    #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>>>>    static int __read_mostly ioremap_p4d_capable;
>>>> @@ -80,6 +81,7 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
>>>>    		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
>>>>    {
>>>>    	pmd_t *pmd;
>>>> +	pmd_t old_pmd;
>>>>    	unsigned long next;
>>>>    
>>>>    	phys_addr -= addr;
>>>> @@ -91,10 +93,15 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
>>>>    
>>>>    		if (ioremap_pmd_enabled() &&
>>>>    		    ((next - addr) == PMD_SIZE) &&
>>>> -		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
>>>> -		    pmd_free_pte_page(pmd)) {
>>>> -			if (pmd_set_huge(pmd, phys_addr + addr, prot))
>>>> +		    IS_ALIGNED(phys_addr + addr, PMD_SIZE)) {
>>>> +			old_pmd = *pmd;
>>>> +			pmd_clear(pmd);
>>>
>>> pmd_clear() is one of the operations pmd_free_pte_page() needs to do.
>>> See the x86 version.
>>>
>>>> +			flush_tlb_pgtable(&init_mm, addr);
>>>
>>> You can call it in pmd_free_pte_page() on arm64 as well.
>>>
>>>> +			if (pmd_set_huge(pmd, phys_addr + addr, prot)) {
>>>> +				pmd_free_pte_page(&old_pmd);
>>>>    				continue;
>>>> +			} else
>>>> +				set_pmd(pmd, old_pmd);
>>>
>>> I do not understand why you needed to make this change.
>>> pmd_free_pte_page() is defined as an arch-specific function so that you
>>> can additionally perform TLB purges on arm64.  Please try to make a proper
>>> arm64 implementation of this interface.  And if you find any issue in
>>> this interface, please let me know.
>>
>> TLB ops require at least a VA, and this interface passes just the PMD/PUD.
> 
> You can add 'addr' as the 2nd arg.  Such a minor tweak is expected when
> implementing on multiple arches.
> 
>> Second, if we clear the previous table entry inside the arch-specific
>> code and then pmd/pud_set_huge() fails, we can't fall back (the x86 case).
>>
>> So, we could do something like this (following Mark's suggestion):
>>
>> 	if (ioremap_pmd_enabled() &&
>> 	    ((next - addr) == PMD_SIZE) &&
>> 	    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
>> 	    pmd_can_set_huge(pmd, phys_addr + addr, prot)) {
>> 		/*
>> 		 * Clear the existing table entry,
>> 		 * invalidate, and
>> 		 * free the page table,
>> 		 * all inside this call.
>> 		 */
>> 		pmd_free_pte_page(pmd, addr, addr + PMD_SIZE);
>> 		pmd_set_huge(...);	/* without fail */
>> 		continue;
>> 	}
> 
> That's not necessary.  pmd being none is a legitimate state.  In fact,
> it is the case when pmd_alloc() allocated and populated a new pmd.

Alright. I'll send v3 today.
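
For the arm64 side, I'm thinking of something roughly along these lines
(just a sketch, not the final v3; it assumes pmd_free_pte_page() grows
an 'addr' argument as you suggest, and reuses the flush_tlb_pgtable()
helper from patch 1/4 for the invalidation):

	/* Sketch only: the 'addr' argument follows the suggestion above. */
	int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
	{
		pte_t *table;

		if (pmd_none(READ_ONCE(*pmdp)))
			return 1;	/* nothing mapped, nothing to free */

		table = pte_offset_kernel(pmdp, addr);

		/*
		 * BBM: clear the old table entry and purge the stale
		 * walk before the caller installs the huge mapping.
		 */
		pmd_clear(pmdp);
		flush_tlb_pgtable(&init_mm, addr);

		pte_free_kernel(&init_mm, table);
		return 1;
	}

With that, the caller in lib/ioremap.c can keep pmd_free_pte_page(pmd,
addr) in the if-condition ahead of pmd_set_huge(), and the clear/restore
dance in this patch goes away.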

> 
> Thanks,
> -Toshi
> 

Chintan
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center,
Inc. is a member of the Code Aurora Forum, a Linux Foundation
Collaborative Project


