[PATCH 2/3] arm64: efi: Ensure efi_create_mapping() does not map overlapping regions
Catalin Marinas
catalin.marinas at arm.com
Tue Jun 28 09:05:28 PDT 2016
(Restarting the thread before I forget the entire discussion)
On Mon, Jun 06, 2016 at 11:18:14PM +0200, Ard Biesheuvel wrote:
> >> > On 31 May 2016 at 17:14, Catalin Marinas <catalin.marinas at arm.com> wrote:
> >> > > diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
> >> > > index 78f52488f9ff..0d5753c31c7f 100644
> >> > > --- a/arch/arm64/kernel/efi.c
> >> > > +++ b/arch/arm64/kernel/efi.c
> >> > > @@ -62,10 +62,26 @@ struct screen_info screen_info __section(.data);
> >> > > int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
> >> > > {
> >> > > pteval_t prot_val = create_mapping_protection(md);
> >> > > + phys_addr_t length = md->num_pages << EFI_PAGE_SHIFT;
> >> > > + efi_memory_desc_t *next = md;
> >> > >
> >> > > - create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
> >> > > - md->num_pages << EFI_PAGE_SHIFT,
> >> > > - __pgprot(prot_val | PTE_NG));
> >> > > + /*
> >> > > + * Search for the next EFI runtime map and check for any overlap with
> >> > > + * the current map when aligned to PAGE_SIZE. In such case, defer
> >> > > + * mapping the end of the current range until the next
> >> > > + * efi_create_mapping() call.
> >> > > + */
> >> > > + for_each_efi_memory_desc_continue(next) {
> >> > > + if (!(next->attribute & EFI_MEMORY_RUNTIME))
> >> > > + continue;
> >> > > + if (next->phys_addr < PAGE_ALIGN(md->phys_addr + length))
> >> > > + length -= (md->phys_addr + length) & ~PAGE_MASK;
[...]
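For reference, a minimal sketch of how I expect the truncated length to be
used; the hunk is cut off above, so the exact placement of the call is my
assumption, but it mirrors the create_pgd_mapping() invocation the patch
removes:

	/*
	 * Sketch only: map the (possibly truncated) range. The tail
	 * clipped off above shares a page with the next runtime region
	 * and will be mapped by the later efi_create_mapping() call for
	 * that descriptor.
	 */
	create_pgd_mapping(mm, md->phys_addr, md->virt_addr, length,
			   __pgprot(prot_val | PTE_NG));

	return 0;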
> Another thing I failed to mention is that the new Memory Attributes
> table support may map all of the RuntimeServicesCode regions a second
> time, but with a higher granularity, using RO for .text and .rodata
> and NX for .data and .bss (and the PE/COFF header).
Can this not be done in a single pass, rather than mapping the regions
twice? That's what we did for the core arm64 mappings; the EFI run-time
mappings are the only ones left that would need multiple passes.
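For concreteness, here is roughly how I understand the finer-grained
permissions from the attributes table to end up (illustrative only, not
the actual Memory Attributes table code):

	/*
	 * Illustrative only: an attributes-table entry covers a
	 * sub-region of a RuntimeServicesCode region, so the resulting
	 * permissions can be finer-grained than the memory-map
	 * descriptor that was mapped first.
	 */
	pgprot_t prot;

	if ((md->attribute & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == EFI_MEMORY_RO)
		prot = PAGE_KERNEL_ROX;		/* .text */
	else if (md->attribute & EFI_MEMORY_RO)
		prot = PAGE_KERNEL_RO;		/* .rodata */
	else if (md->attribute & EFI_MEMORY_XP)
		prot = PAGE_KERNEL;		/* .data/.bss, PE/COFF header */
	else
		prot = PAGE_KERNEL_EXEC;	/* no attributes: keep RWX */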
> Due to the higher
> granularity, regions that were mapped using the contiguous bit the
> first time around may be split into smaller regions. Your current code
> does not address that case.
If the above doesn't work, the only solution would be to permanently map
these ranges as individual pages, with no block or contiguous mappings.
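Something along these lines, assuming create_pgd_mapping() gains a way to
ask for page-granular mappings (the extra argument is hypothetical, the
current prototype does not take it):

	/*
	 * Hypothetical: force page-granular mappings for the EFI runtime
	 * regions so that a later, finer-grained permission change never
	 * has to split a block or contiguous mapping.
	 */
	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
			   md->num_pages << EFI_PAGE_SHIFT,
			   __pgprot(prot_val | PTE_NG),
			   /* page_mappings_only */ true);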
> I wonder how the PT debug code deals with this, and whether there is
> anything we can reuse to inhibit cont and block mappings
Do you mean DEBUG_PAGEALLOC? When it is enabled, the kernel is mapped with
individual pages only, with no section (block) mappings.
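The check in the core arm64 code boils down to something like this before
taking the section-mapping path (paraphrased, variable names illustrative):

	/*
	 * Paraphrase of the arm64 decision to use a PMD block mapping:
	 * only when the range is section-aligned, the caller allows it
	 * and DEBUG_PAGEALLOC is not active; otherwise fall back to
	 * building the range out of individual PTEs.
	 */
	bool use_block = IS_ALIGNED(addr | next | phys, SECTION_SIZE) &&
			 allow_block_mappings &&
			 !debug_pagealloc_enabled();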
--
Catalin