[PATCH 2/3] arm64: efi: Ensure efi_create_mapping() does not map overlapping regions

Catalin Marinas catalin.marinas at arm.com
Wed Jun 29 02:39:38 PDT 2016


On Tue, Jun 28, 2016 at 06:12:22PM +0200, Ard Biesheuvel wrote:
> On 28 June 2016 at 18:05, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > (Restarting the thread before I forget the entire discussion)
> >
> > On Mon, Jun 06, 2016 at 11:18:14PM +0200, Ard Biesheuvel wrote:
> >> >> > On 31 May 2016 at 17:14, Catalin Marinas <catalin.marinas at arm.com> wrote:
> >> >> > > diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
> >> >> > > index 78f52488f9ff..0d5753c31c7f 100644
> >> >> > > --- a/arch/arm64/kernel/efi.c
> >> >> > > +++ b/arch/arm64/kernel/efi.c
> >> >> > > @@ -62,10 +62,26 @@ struct screen_info screen_info __section(.data);
> >> >> > >  int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
> >> >> > >  {
> >> >> > >         pteval_t prot_val = create_mapping_protection(md);
> >> >> > > +       phys_addr_t length = md->num_pages << EFI_PAGE_SHIFT;
> >> >> > > +       efi_memory_desc_t *next = md;
> >> >> > >
> >> >> > > -       create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
> >> >> > > -                          md->num_pages << EFI_PAGE_SHIFT,
> >> >> > > -                          __pgprot(prot_val | PTE_NG));
> >> >> > > +       /*
> >> >> > > +        * Search for the next EFI runtime map and check for any overlap with
> >> >> > > +        * the current map when aligned to PAGE_SIZE. In such a case, defer
> >> >> > > +        * mapping the end of the current range until the next
> >> >> > > +        * efi_create_mapping() call.
> >> >> > > +        */
> >> >> > > +       for_each_efi_memory_desc_continue(next) {
> >> >> > > +               if (!(next->attribute & EFI_MEMORY_RUNTIME))
> >> >> > > +                       continue;
> >> >> > > +               if (next->phys_addr < PAGE_ALIGN(md->phys_addr + length))
> >> >> > > +                       length -= (md->phys_addr + length) & ~PAGE_MASK;
> > [...]
> >> Another thing I failed to mention is that the new Memory Attributes
> >> table support may map all of the RuntimeServicesCode regions a second
> >> time, but with a higher granularity, using RO for .text and .rodata
> >> and NX for .data and .bss (and the PE/COFF header).
> >
> > Can this not be done in a single go without multiple passes? That's what
> > we did for the core arm64 code, the only one left being EFI run-time
> > mappings.
> 
> Well, we probably could, but it is far from trivial.
> 
> >> Due to the higher
> >> granularity, regions that were mapped using the contiguous bit the
> >> first time around may be split into smaller regions. Your current code
> >> does not address that case.
> >
> > If the above doesn't work, the only solution would be to permanently map
> > these ranges as individual pages, no large blocks.
> 
> That is not unreasonable, since regions >2MB are unusual.

We'll have the contiguous bit supported at some point, and we won't be
able to use it for EFI run-time mappings. But I don't think that's
essential; it's a minor improvement on a non-critical path.

I'll post some patches to always use PAGE_SIZE granularity for EFI
run-time mappings.

-- 
Catalin