[PATCH] KVM: arm64: Fix unaligned addr case in mmu walking

Justin He Justin.He at arm.com
Thu Mar 4 00:46:55 GMT 2021


Hi Marc

> -----Original Message-----
> From: Will Deacon <will at kernel.org>
> Sent: Thursday, March 4, 2021 5:13 AM
> To: Marc Zyngier <maz at kernel.org>
> Cc: Justin He <Justin.He at arm.com>; kvmarm at lists.cs.columbia.edu; James
> Morse <James.Morse at arm.com>; Julien Thierry <julien.thierry.kdev at gmail.com>;
> Suzuki Poulose <Suzuki.Poulose at arm.com>; Catalin Marinas
> <Catalin.Marinas at arm.com>; Gavin Shan <gshan at redhat.com>; Yanan Wang
> <wangyanan55 at huawei.com>; Quentin Perret <qperret at google.com>; linux-arm-
> kernel at lists.infradead.org; linux-kernel at vger.kernel.org
> Subject: Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
> 
> On Wed, Mar 03, 2021 at 07:07:37PM +0000, Marc Zyngier wrote:
> > From e0524b41a71e0f17d6dc8f197e421e677d584e72 Mon Sep 17 00:00:00 2001
> > From: Jia He <justin.he at arm.com>
> > Date: Wed, 3 Mar 2021 10:42:25 +0800
> > Subject: [PATCH] KVM: arm64: Fix range alignment when walking page tables
> >
> > When walking the page tables at a given level, and if the start
> > address for the range isn't aligned for that level, we propagate
> > the misalignment on each iteration at that level.
> >
> > This results in the walker ignoring a number of entries (depending
> > on the original misalignment) on each subsequent iteration.
> >
> > Properly aligning the address at the before the next iteration
> 
> "at the before the next" ???
> 
> > addresses the issue.
> >
> > Cc: stable at vger.kernel.org
> > Reported-by: Howard Zhang <Howard.Zhang at arm.com>
> > Signed-off-by: Jia He <justin.he at arm.com>
> > Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker infrastructure")
> > [maz: rewrite commit message]
> > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
> > ---
> >  arch/arm64/kvm/hyp/pgtable.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 4d177ce1d536..124cd2f93020 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
> >  		goto out;
> >
> >  	if (!table) {
> > -		data->addr += kvm_granule_size(level);
> > +		data->addr = ALIGN(data->addr, kvm_granule_size(level));

What if the previous data->addr is already aligned to kvm_granule_size(level)? In that
case ALIGN() returns the same address, the walker never advances, and we end up in an
infinite loop. Am I missing anything?
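
If that is indeed a problem, a minimal sketch of one way to keep forward progress
(untested, just to illustrate the concern) would be to align the address down to the
boundary of the current level and then step exactly one granule, so the walker advances
even when data->addr is already aligned:

	if (!table) {
		/* Snap back to the start of this level's granule... */
		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
		/* ...then advance by exactly one granule, guaranteeing progress */
		data->addr += kvm_granule_size(level);
		goto out;
	}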

--
Cheers,
Justin (Jia He)

> >  		goto out;
> >  	}
> 
> If Jia is happy with it, please feel free to add:
> 
> Acked-by: Will Deacon <will at kernel.org>
> 
> Will


