[PATCH 2/3] arm64: efi: Ensure efi_create_mapping() does not map overlapping regions

Ard Biesheuvel ard.biesheuvel at linaro.org
Mon Jun 6 14:18:14 PDT 2016


On 6 June 2016 at 19:42, Catalin Marinas <catalin.marinas at arm.com> wrote:
> On Mon, Jun 06, 2016 at 06:09:50PM +0100, Catalin Marinas wrote:
>> On Mon, Jun 06, 2016 at 11:43:01AM +0200, Ard Biesheuvel wrote:
>> > On 31 May 2016 at 17:14, Catalin Marinas <catalin.marinas at arm.com> wrote:
>> > > diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
>> > > index 78f52488f9ff..0d5753c31c7f 100644
>> > > --- a/arch/arm64/kernel/efi.c
>> > > +++ b/arch/arm64/kernel/efi.c
>> > > @@ -62,10 +62,26 @@ struct screen_info screen_info __section(.data);
>> > >  int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
>> > >  {
>> > >         pteval_t prot_val = create_mapping_protection(md);
>> > > +       phys_addr_t length = md->num_pages << EFI_PAGE_SHIFT;
>> > > +       efi_memory_desc_t *next = md;
>> > >
>> > > -       create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
>> > > -                          md->num_pages << EFI_PAGE_SHIFT,
>> > > -                          __pgprot(prot_val | PTE_NG));
>> > > +       /*
>> > > +        * Search for the next EFI runtime map and check for any overlap with
>> > > +        * the current map when aligned to PAGE_SIZE. In such case, defer
>> > > +        * mapping the end of the current range until the next
>> > > +        * efi_create_mapping() call.
>> > > +        */
>> > > +       for_each_efi_memory_desc_continue(next) {
>> > > +               if (!(next->attribute & EFI_MEMORY_RUNTIME))
>> > > +                       continue;
>> > > +               if (next->phys_addr < PAGE_ALIGN(md->phys_addr + length))
>> > > +                       length -= (md->phys_addr + length) & ~PAGE_MASK;
>> >
>> > This looks fishy. We could have more than two runtime regions sharing
>> > a 64 KB page frame, for instance, and the middle regions will have
>> > unaligned start and end addresses, and sizes smaller than 64 KB. This
>> > subtraction may wrap in that case, producing a bogus length.
>>
>> I don't think I get it. This subtraction is meant to reduce the length
>> of the current md so that it is page aligned, deferring the mapping of
>> the last page to the next efi_create_mapping() call. Note that there is
>> a "break" just after the hunk you quoted, so we only care about the next
>> runtime map.
>
> I think I get it now. If the md we are currently mapping is within a
> 64KB page *and* phys_addr unaligned, length can wrap through 0. I will
> update the patch.
>

Indeed.

Another thing I failed to mention is that the new Memory Attributes
table support may map all of the RuntimeServicesCode regions a second
time, but at a finer granularity, using RO for .text and .rodata and
NX for .data and .bss (and the PE/COFF header). Due to this finer
granularity, regions that were mapped using the contiguous bit the
first time around may need to be split into smaller regions. Your
current code does not address that case.

I wonder how the page table debug code deals with this, and whether
there is anything we can reuse to inhibit contiguous and block mappings.


