[PATCH v1] mm/contpte: Optimize loop to reduce redundant operations
Dev Jain
dev.jain at arm.com
Fri Apr 11 10:30:41 PDT 2025
+others
On 11/04/25 2:55 am, Barry Song wrote:
> On Mon, Apr 7, 2025 at 9:23 PM Xavier <xavier_qy at 163.com> wrote:
>>
>> This commit optimizes the contpte_ptep_get function by adding early
>> termination logic. It checks if the dirty and young bits of orig_pte
>> are already set and skips redundant bit-setting operations during
>> the loop. This reduces unnecessary iterations and improves performance.
>>
>> Signed-off-by: Xavier <xavier_qy at 163.com>
>> ---
>> arch/arm64/mm/contpte.c | 13 +++++++++++--
>> 1 file changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>> index bcac4f55f9c1..ca15d8f52d14 100644
>> --- a/arch/arm64/mm/contpte.c
>> +++ b/arch/arm64/mm/contpte.c
>> @@ -163,17 +163,26 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
>>
>> pte_t pte;
>> int i;
>> + bool dirty = false;
>> + bool young = false;
>>
>> ptep = contpte_align_down(ptep);
>>
>> for (i = 0; i < CONT_PTES; i++, ptep++) {
>> pte = __ptep_get(ptep);
>>
>> - if (pte_dirty(pte))
>> + if (!dirty && pte_dirty(pte)) {
>> + dirty = true;
>> orig_pte = pte_mkdirty(orig_pte);
>> + }
>>
>> - if (pte_young(pte))
>> + if (!young && pte_young(pte)) {
>> + young = true;
>> orig_pte = pte_mkyoung(orig_pte);
>> + }
>> +
>> + if (dirty && young)
>> + break;
>
> This kind of optimization is always tricky. Dev previously tried a similar
> approach to reduce the loop count, but it ended up causing performance
> degradation:
> https://lore.kernel.org/linux-mm/20240913091902.1160520-1-dev.jain@arm.com/
>
> So we may need actual data to validate this idea.
The original v2 patch does not work, so I changed it to the following:
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index bcac4f55f9c1..db0ad38601db 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -152,6 +152,16 @@ void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(__contpte_try_unfold);
+#define CHECK_CONTPTE_FLAG(start, ptep, orig_pte, flag) \
+ int _start; \
+ pte_t *_ptep = ptep; \
+ for (_start = start; _start < CONT_PTES; _start++, _ptep++) { \
+ if (pte_##flag(__ptep_get(_ptep))) { \
+ orig_pte = pte_mk##flag(orig_pte); \
+ break; \
+ } \
+ }
+
pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
{
/*
@@ -169,11 +179,17 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
for (i = 0; i < CONT_PTES; i++, ptep++) {
pte = __ptep_get(ptep);
- if (pte_dirty(pte))
+ if (pte_dirty(pte)) {
orig_pte = pte_mkdirty(orig_pte);
+ CHECK_CONTPTE_FLAG(i, ptep, orig_pte, young);
+ break;
+ }
- if (pte_young(pte))
+ if (pte_young(pte)) {
orig_pte = pte_mkyoung(orig_pte);
+ CHECK_CONTPTE_FLAG(i, ptep, orig_pte, dirty);
+ break;
+ }
}
return orig_pte;
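
To make the control flow easier to follow, here is roughly what the
dirty-found path looks like with CHECK_CONTPTE_FLAG expanded by hand
(a sketch, not the literal preprocessor output):

	if (pte_dirty(pte)) {
		orig_pte = pte_mkdirty(orig_pte);
		/* scan the rest of the contig block for the young bit */
		{
			int _start;
			pte_t *_ptep = ptep;

			for (_start = i; _start < CONT_PTES; _start++, _ptep++) {
				if (pte_young(__ptep_get(_ptep))) {
					orig_pte = pte_mkyoung(orig_pte);
					break;
				}
			}
		}
		break;	/* dirty set, young resolved: stop the outer loop */
	}

The young-found path is symmetric, scanning the remainder for the
dirty bit instead.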
Some rudimentary testing with micromm reveals that this may be
*slightly* faster. I cannot say for sure yet.
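
If it helps anyone reproduce the idea outside the kernel, a crude
userspace approximation of the early-exit scan (hypothetical stand-in
code: pte_dirty()/pte_young() are stubbed as bit tests on a plain
array, so this says nothing about real pagetable walk or cache
behaviour) could look like:

#include <stdint.h>
#include <stdio.h>

#define CONT_PTES	16
#define PTE_DIRTY	(1u << 0)
#define PTE_YOUNG	(1u << 1)

/* Collect dirty/young across the block, stopping once both are seen. */
static unsigned int scan_early_exit(const uint32_t *ptes)
{
	unsigned int flags = 0;
	int i;

	for (i = 0; i < CONT_PTES; i++) {
		flags |= ptes[i] & (PTE_DIRTY | PTE_YOUNG);
		if (flags == (PTE_DIRTY | PTE_YOUNG))
			break;	/* early exit: both bits found */
	}
	return flags;
}

int main(void)
{
	uint32_t ptes[CONT_PTES] = { 0 };

	ptes[0] = PTE_DIRTY;	/* dirty at entry 0 ... */
	ptes[1] = PTE_YOUNG;	/* ... young at entry 1: loop exits at i == 1 */
	printf("flags: %#x\n", scan_early_exit(ptes));
	return 0;
}

Timing a loop around that only measures the branch cost, not the
__ptep_get() loads, so numbers from something like micromm on real
hardware are still what matters.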
>
>> }
>>
>> return orig_pte;
>> --
>> 2.34.1
>>
>
> Thanks
> Barry
>