[PATCH 2/4] KVM: arm64: Simplify the sanitise_mte_tags() logic

Catalin Marinas catalin.marinas at arm.com
Thu Sep 1 03:42:00 PDT 2022


On Fri, Jul 08, 2022 at 04:00:01PM -0700, Peter Collingbourne wrote:
> On Tue, Jul 5, 2022 at 7:26 AM Catalin Marinas <catalin.marinas at arm.com> wrote:
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 9cfa516452e1..35850f17ae08 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
> >   * - mmap_lock protects between a VM faulting a page in and the VMM performing
> >   *   an mprotect() to add VM_MTE
> >   */
> > -static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> > -                            unsigned long size)
> > +static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> > +                             unsigned long size)
> >  {
> >         unsigned long i, nr_pages = size >> PAGE_SHIFT;
> >         struct page *page;
> 
> Did you intend to change this to "struct page *page =
> pfn_to_page(pfn);"? As things are, I get a kernel panic if I try to
> start a VM with MTE enabled. The VM boots after making my suggested
> change though.

Yes, indeed. I think you fixed it when reposting it together with the
other patches.
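
For reference, the missing piece is just the initialiser Peter points
out, i.e. something like this on top of the quoted hunk (a sketch; the
actual repost may differ slightly):

 	unsigned long i, nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(pfn);	/* was left uninitialised */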

Sorry for the delay, too much holiday this summer ;).

-- 
Catalin


