[PATCH 1/2] arm64: mm: detect bad __create_mapping uses

Mark Rutland mark.rutland at arm.com
Fri Nov 20 09:05:34 PST 2015


On Fri, Nov 20, 2015 at 04:13:28PM +0000, Steve Capper wrote:
> On 20 November 2015 at 15:35, Mark Rutland <mark.rutland at arm.com> wrote:
> > If a caller of __create_mapping provides a PA and VA which have
> > different sub-page offsets, it is not clear which offset they expect to
> > apply to the mapping, which is indicative of a bad caller.
> >
> > Disallow calls with differing sub-page offsets, and WARN when they are
> > encountered, so that we can detect and fix such cases.
> >
> > Signed-off-by: Mark Rutland <mark.rutland at arm.com>
> > Cc: Ard Biesheuvel <ard.biesheuvel at linaro.org>
> > Cc: Catalin Marinas <catalin.marinas at arm.com>
> > Cc: Laura Abbott <labbott at fedoraproject.org>
> > Cc: Steve Capper <steve.capper at linaro.org>
> > Cc: Will Deacon <will.deacon at arm.com>
> > ---
> >  arch/arm64/mm/mmu.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index e3f563c..3b06afa 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -300,6 +300,13 @@ static void  __create_mapping(struct mm_struct *mm, pgd_t *pgd,
> >  {
> >         unsigned long addr, length, end, next;
> >
> > +       /*
> > +        * If the virtual and physical addresses don't have the same offset
> > +        * within a page, we cannot map the region as the caller expects.
> > +        */
> > +       if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
> > +               return;
> > +
> 
> Do we want any subpage offsets in our virt and phys?

Yes.

For instance, when mapping EFI regions (which exist at 4K granularity)
on a 64K page kernel, per [1]. The spec requires that any regions in the
same 64K page can be mapped using the same attributes, so it's fine to
map at 64K granularity.
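
As a quick illustration (a minimal user-space sketch, not kernel code; the
addresses and the 64K page constants are made up for the example), equal
sub-page offsets are exactly what keeps the translation correct when we
map the enclosing 64K page:

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT	16				/* 64K pages */
	#define PAGE_SIZE	((uint64_t)1 << PAGE_SHIFT)
	#define PAGE_MASK	(~(PAGE_SIZE - 1))

	int main(void)
	{
		/* Hypothetical 4K EFI region, same 0x3000 offset in VA and PA. */
		uint64_t virt = 0xffff000008d43000ULL;
		uint64_t phys = 0x000000009f243000ULL;

		/*
		 * __create_mapping maps the whole 64K page, i.e. the tables
		 * translate (virt & PAGE_MASK) to (phys & PAGE_MASK). As the
		 * sub-page offsets match, an access to virt still hits phys.
		 */
		uint64_t translated = (phys & PAGE_MASK) + (virt & ~PAGE_MASK);

		printf("virt 0x%llx -> phys 0x%llx (expected 0x%llx)\n",
		       (unsigned long long)virt,
		       (unsigned long long)translated,
		       (unsigned long long)phys);

		return 0;
	}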

We already try to cater for this in __create_mapping where we fix up the
base and size:

	addr = virt & PAGE_MASK;
	length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
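
As a worked example with 64K pages and the hypothetical addresses above
(a 4K region with a 0x3000 sub-page offset):

	virt   = 0xffff000008d43000;	/* sub-page offset 0x3000 */
	size   = 0x1000;

	addr   = virt & PAGE_MASK;	/* 0xffff000008d40000 */
	length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
		/* = PAGE_ALIGN(0x1000 + 0x3000) = 0x10000 */

i.e. the whole region ends up covered by a single 64K page mapping.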

I will add a middle paragraph to the commit message:

	In some cases, the region we wish to map may validly have a
	sub-page offset in the physical and virtual addresses. For
	example, EFI runtime regions have 4K granularity, yet may be
	mapped by a 64K page kernel. So long as the physical and virtual
	offsets are the same, the region will be mapped at the expected
	VAs.

Does that sound OK?

> If we have sub-page bits in phys, won't that start flipping on bits in
> the page table entries?

In all cases where we use phys, we either shift it right by PAGE_SHIFT
(in __populate_init_pte via __phys_to_pfn), which clears those bits, or we
first check that it is sufficiently aligned for that level of table (and
hence the low bits are all zero). Patch 2 fixes a bug related to the
latter case.
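
For instance (a rough sketch, assuming the usual definition of
__phys_to_pfn as a right shift by PAGE_SHIFT; the address is made up):

	phys = 0x9f243000;		/* sub-page offset 0x3000 */

	/* The shift drops the sub-page bits before they reach the PTE. */
	pfn = __phys_to_pfn(phys);	/* phys >> PAGE_SHIFT == 0x9f24 with 64K pages */
	pte = pfn_pte(pfn, prot);	/* no stray low bits end up in the entry */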

> What about something like:
> if (WARN_ON((phys | virt) & ~PAGE_MASK))

Per the above, that would break the EFI code in [1].
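
To make that concrete with the hypothetical addresses from the sketch
above (64K pages, both addresses with a 0x3000 sub-page offset):

	phys = 0x000000009f243000;
	virt = 0xffff000008d43000;

	(phys ^ virt) & ~PAGE_MASK	/* == 0      -> mapping proceeds    */
	(phys | virt) & ~PAGE_MASK	/* == 0x3000 -> WARN_ON and return  */

i.e. the OR variant would reject a mapping that we need to allow.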

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-November/386908.html


