[PATCH] KVM: arm64: Move data barrier to end of split walk
Will Deacon
will at kernel.org
Fri Jul 19 07:58:51 PDT 2024
[+Ricardo, as he wrote the original split walker]
On Thu, Jul 18, 2024 at 10:35:19PM +0000, Colton Lewis wrote:
> Moving the data barrier from stage2_split_walker to after the walk is
> finished in kvm_pgtable_stage2_split results in a roughly 70%
> reduction in Clear Dirty Log Time in dirty_log_perf_test (modified to
> use eager page splitting) when using huge pages. This gain holds
> steady across the tested range of vCPU counts (1-64) and memory
> sizes (1-64GB).
>
> This is safe because nothing else uses the page tables while they
> are still being mapped, and it matches how the other page table
> walkers already behave: none of them issue a data barrier inside
> the walker itself.
>
> Signed-off-by: Colton Lewis <coltonlewis at google.com>
> ---
> arch/arm64/kvm/hyp/pgtable.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 9e2bbee77491..9788af2ca8c0 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -1547,7 +1547,6 @@ static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
> */
> new = kvm_init_table_pte(childp, mm_ops);
> stage2_make_pte(ctx, new);
> - dsb(ishst);
> return 0;
> }
>
> @@ -1559,8 +1558,11 @@ int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
> .flags = KVM_PGTABLE_WALK_LEAF,
> .arg = mc,
> };
> + int ret;
>
> - return kvm_pgtable_walk(pgt, addr, size, &walker);
> + ret = kvm_pgtable_walk(pgt, addr, size, &walker);
> + dsb(ishst);
> + return ret;
> }
This looks ok to me, but it would be great if Ricardo could have a look
as well.
Will