[PATCH] arm64: Implement clear_pages()
Catalin Marinas
catalin.marinas at arm.com
Tue Mar 3 07:45:15 PST 2026
On Tue, Mar 03, 2026 at 02:46:34PM +0000, Will Deacon wrote:
> On Tue, Mar 03, 2026 at 11:06:13AM +0100, Linus Walleij wrote:
> > On QEMU:
> >
> > Before this patch: After this patch:
> > 2.38 GB/s 2.41 GB/s
>
> I really don't think we should pay attention to performance under QEMU
> as it doesn't necessarily have any correlation with real hardware.
I agree.
> > diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> > index b39cc1127e1f..916a3e7c9a19 100644
> > --- a/arch/arm64/include/asm/page.h
> > +++ b/arch/arm64/include/asm/page.h
> > @@ -20,7 +20,18 @@ struct page;
> > struct vm_area_struct;
> >
> > extern void copy_page(void *to, const void *from);
> > -extern void clear_page(void *to);
> > +extern void clear_pages_asm(void *addr, unsigned int nbytes);
> > +
> > +static inline void clear_pages(void *addr, unsigned int npages)
> > +{
> > + clear_pages_asm(addr, npages * PAGE_SIZE);
> > +}
> > +#define clear_pages clear_pages
>
> Hmm. From what I can tell, this just turns a branch in C code into a
> branch in assembly, so it's hard to correlate that meaningfully with
> the performance improvement you see.
>
> If we have CPUs that are this sensitive to branches, perhaps we'd be
> better off taking the opposite approach and moving more code into C
> so that the compiler can optimise the control flow for us?
I think it's more than the loop branch - there's also the DCZID_EL0 read
on every call to decide whether to use DC ZVA or STNP. I wonder why we
didn't do that with an alternative rather than always reading the sysreg.
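To illustrate the idea (this is not kernel code): the per-call sysreg
read can be replaced by a decision cached once, much as an alternative
would patch the code path at boot. Every name below (probe_dc_zva(),
ZVA_BLOCK, zero_range()) is a hypothetical stand-in:

```c
#include <stddef.h>
#include <string.h>

#define ZVA_BLOCK 64	/* typical DC ZVA block size; illustrative only */

static int have_dc_zva = -1;	/* -1 = not probed yet */

static int probe_dc_zva(void)
{
	/* Stand-in for reading DCZID_EL0.DZP once, not on every call. */
	return 1;
}

static void zero_range(void *addr, size_t nbytes)
{
	char *p = addr;

	if (have_dc_zva < 0)
		have_dc_zva = probe_dc_zva();	/* cached decision */

	if (have_dc_zva) {
		size_t off;

		/* Each memset() stands in for one "dc zva" instruction;
		 * nbytes is assumed to be a multiple of ZVA_BLOCK. */
		for (off = 0; off < nbytes; off += ZVA_BLOCK)
			memset(p + off, 0, ZVA_BLOCK);
	} else {
		memset(p, 0, nbytes);	/* fallback: the STNP loop on arm64 */
	}
}
```

In the kernel the branch itself would disappear too - an alternative (or
static key) patches in one path at boot, so neither the sysreg read nor
the runtime check is paid per call.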
That said, I wouldn't mind rewriting this in C if the numbers don't get
worse. It is a bit more involved if we keep the DC ZVA use, though with
alternatives it's maybe not that bad (mte_set_mem_tag_range() is an
example of doing something similar in C, though for clearing pages we
don't need to deal with unaligned boundaries).
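A rough sketch of what the C shape could look like (names like
clear_page_one() and clear_pages_c() are illustrative, not proposed
kernel interfaces): the loop branch lives in C where the compiler can
optimise it, and only the innermost zeroing primitive would remain asm
or inline asm behind an alternative.

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096	/* illustrative; arm64 also supports 16K/64K */

/* Stand-in for the per-page primitive (a DC ZVA or STNP loop in asm). */
static void clear_page_one(void *addr)
{
	memset(addr, 0, PAGE_SIZE);
}

/*
 * The outer loop in C: the compiler sees the control flow and can
 * unroll or reorganise it. Page alignment is assumed, so there is no
 * unaligned-boundary handling to worry about.
 */
static void clear_pages_c(void *addr, unsigned int npages)
{
	char *p = addr;

	while (npages--) {
		clear_page_one(p);
		p += PAGE_SIZE;
	}
}
```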
--
Catalin