[PATCH v3 12/15] arm64/mm: Split __flush_tlb_range() to elide trailing DSB

Will Deacon <will@kernel.org>
Thu Dec 14 04:13:36 PST 2023


On Thu, Dec 14, 2023 at 11:53:52AM +0000, Ryan Roberts wrote:
> On 12/12/2023 11:47, Ryan Roberts wrote:
> > On 12/12/2023 11:35, Will Deacon wrote:
> >> On Mon, Dec 04, 2023 at 10:54:37AM +0000, Ryan Roberts wrote:
> >>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> >>> index bb2c2833a987..925ef3bdf9ed 100644
> >>> --- a/arch/arm64/include/asm/tlbflush.h
> >>> +++ b/arch/arm64/include/asm/tlbflush.h
> >>> @@ -399,7 +399,7 @@ do {									\
> >>>  #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
> >>>  	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false)
> >>>  
> >>> -static inline void __flush_tlb_range(struct vm_area_struct *vma,
> >>> +static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
> >>>  				     unsigned long start, unsigned long end,
> >>>  				     unsigned long stride, bool last_level,
> >>>  				     int tlb_level)
> >>> @@ -431,10 +431,19 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
> >>>  	else
> >>>  		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
> >>>  
> >>> -	dsb(ish);
> >>>  	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
> >>>  }
> >>>  
> >>> +static inline void __flush_tlb_range(struct vm_area_struct *vma,
> >>> +				     unsigned long start, unsigned long end,
> >>> +				     unsigned long stride, bool last_level,
> >>> +				     int tlb_level)
> >>> +{
> >>> +	__flush_tlb_range_nosync(vma, start, end, stride,
> >>> +				 last_level, tlb_level);
> >>> +	dsb(ish);
> >>> +}
> >>
> >> Hmm, are you sure it's safe to defer the DSB until after the secondary TLB
> >> invalidation? It will have a subtle effect on e.g. an SMMU participating
> >> in broadcast TLB maintenance, because now the ATC will be invalidated
> >> before completion of the TLB invalidation and it's not obviously safe to me.
> > 
> > I'll be honest; I don't know that it's safe. The notifier calls turned up
> > during a rebase, and I stared at them for a while before eventually
> > concluding that I should just follow the existing pattern in
> > __flush_tlb_page_nosync() (sketched below): that one calls the mmu notifier
> > without the dsb, and flush_tlb_page() does the dsb afterwards. So I assumed
> > it was safe.
> > 
> > If you think it's not safe, I guess there is a bug to fix in
> > __flush_tlb_page_nosync()?
> 
> Did you have an opinion on this? I'm just putting together a v4 of this series,
> and I'll remove this optimization if you think it's unsound. But in that case, I
> guess we have an existing bug to fix too?

Sorry, Ryan, I've not had a chance to look into it in more detail. But as
you rightly point out, you're not introducing the issue (assuming it is
one), so I don't think it needs to hold you up. Your code just makes the
thing more "obvious" to me.

Robin, Jean-Philippe -- do we need to make sure that the SMMU has completed
its TLB invalidation before issuing an ATC invalidate? My half-baked worry
is whether an ATS request could refill the ATC before the TLBI
has completed, thereby rendering the ATC invalidation useless.
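
To spell out the ordering in question: with the hunk above applied, the range
path ends up doing roughly the following. This is just an annotated
restatement of the patch, and the comments are my reading of it rather than
anything the architecture guarantees:

static inline void __flush_tlb_range(struct vm_area_struct *vma,
				     unsigned long start, unsigned long end,
				     unsigned long stride, bool last_level,
				     int tlb_level)
{
	/*
	 * __flush_tlb_range_nosync() issues the broadcast TLBIs and then
	 * calls mmu_notifier_arch_invalidate_secondary_tlbs(), so an SMMU
	 * notifier may send its ATC invalidation while the TLBIs are still
	 * in flight.
	 */
	__flush_tlb_range_nosync(vma, start, end, stride,
				 last_level, tlb_level);
	dsb(ish);	/* TLBI completion is only guaranteed here */
}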

Will


