[PATCH v5sub1 7/8] arm64: move kernel image to base of vmalloc area

Catalin Marinas <catalin.marinas at arm.com>
Mon Feb 1 05:41:02 PST 2016


On Mon, Feb 01, 2016 at 01:27:59PM +0100, Ard Biesheuvel wrote:
> On 1 February 2016 at 13:24, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > On Mon, Feb 01, 2016 at 11:54:52AM +0100, Ard Biesheuvel wrote:
> >> --- a/arch/arm64/mm/mmu.c
> >> +++ b/arch/arm64/mm/mmu.c
> >> @@ -53,6 +53,10 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
> >>  unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
> >>  EXPORT_SYMBOL(empty_zero_page);
> >>
> >> +static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
> >> +static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
> >> +static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
> >
> > I applied a fixup locally to keep the compiler quiet:
> >
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -57,8 +57,8 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_
> >  EXPORT_SYMBOL(empty_zero_page);
> >
> >  static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
> > -static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
> > -static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
> > +static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
> > +static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
> 
> Ah yes, I dropped a memblock_free() against bm_pud in
> early_fixmap_init(), since it occurred before the actual reservation;
> as a result, bm_pud may never be referenced. For bm_pmd, the
> attribute should not be required afaict.
> 
> If you prefer, I can keep the original code here:
> 
> #if CONFIG_PGTABLE_LEVELS > 2
> static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
> #endif
> #if CONFIG_PGTABLE_LEVELS > 3
> static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
> #endif

Looking at the CodingStyle doc, __maybe_unused is preferred over
wrapping the definitions in preprocessor conditionals, so I'll just
keep the fixup.
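
For the archives, here is a minimal stand-alone sketch (not from the
patch; the table names are made up) of what the attribute buys us. A
static definition that a given configuration never references triggers
gcc's -Wunused-variable ("defined but not used"), and __maybe_unused,
which the kernel's compiler headers define as __attribute__((unused)),
suppresses that warning without an #ifdef around the definition:

	/* as defined in the kernel's compiler-gcc.h */
	#define __maybe_unused __attribute__((unused))

	static int used_tbl[4];			/* referenced below: no warning */
	static int dead_tbl[4];			/* warns: defined but not used */
	static int marked_tbl[4] __maybe_unused;	/* attribute silences the warning */

	int read_first(void)
	{
		return used_tbl[0];
	}

Building this with gcc -Wall -c warns only about dead_tbl; marking it
__maybe_unused would silence that too, which is exactly the situation
with bm_pud/bm_pmd when the page table levels are folded.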

-- 
Catalin


