[PATCH] arm64: mm: set ZONE_DMA size based on early IORT scan

Catalin Marinas catalin.marinas at arm.com
Mon Oct 12 11:49:55 EDT 2020


On Mon, Oct 12, 2020 at 04:19:08PM +0200, Ard Biesheuvel wrote:
> On Mon, 12 Oct 2020 at 13:24, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > On Mon, Oct 12, 2020 at 12:43:05PM +0200, Ard Biesheuvel wrote:
> > > On Mon, 12 Oct 2020 at 11:30, Ard Biesheuvel <ardb at kernel.org> wrote:
> > > > On Mon, 12 Oct 2020 at 11:28, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > > > > On Sat, Oct 10, 2020 at 11:31:53AM +0200, Ard Biesheuvel wrote:
> > > > > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > > > > > index f0599ae73b8d..829fa63c3d72 100644
> > > > > > --- a/arch/arm64/mm/init.c
> > > > > > +++ b/arch/arm64/mm/init.c
> > > > > > @@ -191,6 +191,14 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
> > > > > >       unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
> > > > > >
> > > > > >  #ifdef CONFIG_ZONE_DMA
> > > > > > +     if (IS_ENABLED(CONFIG_ACPI)) {
> > > > > > +             extern unsigned int acpi_iort_get_zone_dma_size(void);
> > > > >
> > > > > Nitpick: can we add this prototype to include/linux/acpi_iort.h?
> > > > >
> > > > > > +
> > > > > > +             zone_dma_bits = min(zone_dma_bits,
> > > > > > +                                 acpi_iort_get_zone_dma_size());
> > > > > > +             arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
> > > > > > +     }
> > > > > > +
> > > > > >       max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
> > > > >
> > > > > I think we should initialise zone_dma_bits slightly earlier via
> > > > > arm64_memblock_init(). We'll eventually have reserve_crashkernel()
> > > > > called before this and it will make use of arm64_dma_phys_limit for
> > > > > "low" reservations:
> > > > >
> > > > > https://lore.kernel.org/linux-arm-kernel/20200907134745.25732-7-chenzhou10@huawei.com/
> > > > >
> > > >
> > > > We don't have access to the ACPI tables yet at that point.
> > >
> > > Also, could someone give an executive summary of why it matters where
> > > the crashkernel is loaded? As far as I can tell, reserve_crashkernel()
> > > only allocates memory for the kernel's executable image itself, which
> > > can usually be loaded anywhere in memory. I could see how a
> > > crashkernel might need some DMA'able memory if it needs to use the
> > > hardware, but I don't think that is what is going on here.
> >
> > I thought the crashkernel needs some additional reserved RAM as well to
> > be able to run. It should not touch the original kernel's memory as it
> > usually needs to dump it.
> 
> Looking at the code, it is definitely allocating memory for the kernel
> itself (as it refers to the 2 MB alignment requirement), and given
> that we used to require the kernel to be at the base of the linear
> region to even be able to access all of memory, I suspect that we
> might be able to relax this requirement. Not sure what that means for
> the userland tools, though.

The 2MB requirement is an interpretation of booting.txt, which says that
DRAM must start at this alignment (not sure what we do about it these
days; in lots of configurations we just use 4K pages for the linear
map).

However, the crashkernel=... range is meant to be a sufficiently large
reservation to run the kdump kernel, not just to load its image.
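
Roughly what I had in mind, as an untested sketch (the IORT helper name
follows Ard's patch; early_zone_dma_init() and reserve_crashkernel_low()
are only placeholder names for whatever the kdump series ends up with):

	/* Called from arm64_memblock_init(), before any crashkernel reservation. */
	static void __init early_zone_dma_init(void)
	{
	#ifdef CONFIG_ZONE_DMA
		/* Clamp zone_dma_bits using the helper from Ard's patch. */
		if (IS_ENABLED(CONFIG_ACPI))
			zone_dma_bits = min(zone_dma_bits,
					    acpi_iort_get_zone_dma_size());
		arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
	#endif
	}

	/* Hypothetical "low" reservation, bounded by the limit set above. */
	static void __init reserve_crashkernel_low(phys_addr_t low_size)
	{
		phys_addr_t base;

		base = memblock_phys_alloc_range(low_size, SZ_2M, 0,
						 arm64_dma_phys_limit);
		if (!base)
			pr_warn("crashkernel low reservation failed (size: 0x%llx)\n",
				(unsigned long long)low_size);
	}

The point is simply that the IORT scan has to happen before any memblock
reservation that wants to stay below arm64_dma_phys_limit.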

-- 
Catalin


