896MB address limit

Vivek Goyal vgoyal at redhat.com
Tue Sep 25 13:38:23 EDT 2012


On Mon, Sep 24, 2012 at 08:11:12PM -0700, Eric W. Biederman wrote:
> Cliff Wickman <cpw at sgi.com> writes:
> 
> > Gentlemen,
> >
> > In dumping very large memories we are running up against the 896MB 
> > limit in SLES11SP2 (3.0.38 kernel).
> 
> Odd.  That limit should be the maximum address in memory to load the
> crash kernel.  That limit should have nothing to do with the dump process
> itself.

This limit came from the kernel. IIRC, we had a discussion with hpa and others
and came up with the maximum addresses we could load the kernel at for 32-bit
and 64-bit. I wanted it to be exported through the bzImage header, so that
kexec-tools would not have to hard-code it, but I guess that never happened.

> 
> Are you saying you need more than 512MiB reserved for the crash kernel
> to be able to dump all of the memory in your system?

Yes, it can take more than 512MB for a large memory system (I think even the
512MB case is broken with current upstream). The current dump filtering
utility takes 2 bits of memory per 4K page, which works out to 64MB of memory
per terabyte of RAM. With the current initramfs size that requires us (this
is distro specific) to reserve 192MB for a 1TB system, so beyond 6TB
of RAM we will cross 512MB of reserved memory.
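
As a quick illustration of that arithmetic (a standalone sketch, not
makedumpfile code; the constants are just the numbers from the paragraph
above), 1TB of 4K pages at 2 bits each comes out to 64MB:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t ram_bytes   = 1ULL << 40;     /* 1TB of RAM      */
            uint64_t page_size   = 4096;           /* 4K pages        */
            uint64_t pages       = ram_bytes / page_size;
            uint64_t bitmap_bits = pages * 2;      /* 2 bits per page */

            /* 2^28 pages -> 2^29 bits -> 2^26 bytes = 64MB per TB */
            printf("bitmap: %lluMB per TB of RAM\n",
                   (unsigned long long)(bitmap_bits / 8 / (1024 * 1024)));
            return 0;
    }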

Having said that, the makedumpfile people are working on making it work with
fixed-size buffers. A basic implementation is available in version 1.5.0,
but it has performance issues. One more set of patches needs to go in, and
after that the performance might be acceptable on large machines.

So hopefully a newer version of makedumpfile will do away with the need to
reserve more than 512MB. Memory is traded off for a slightly higher dumping
time. (I prefer that to the memory reservation failing.)
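
To give an idea of what the fixed-buffer approach means (a rough sketch only,
with made-up names like CYCLE_BUF_BYTES and page_is_excluded(), not
makedumpfile's actual implementation): the bitmap covers one window of page
frames at a time, and each window is re-scanned, which is where the extra
dump time comes from while the reservation stays flat:

    #include <stdint.h>
    #include <string.h>

    #define CYCLE_BUF_BYTES (16ULL << 20)  /* fixed 16MB bitmap, value made up */

    static unsigned char bitmap[CYCLE_BUF_BYTES];

    /* placeholder for the real filtering logic (free pages, zero pages, ...) */
    static int page_is_excluded(uint64_t pfn)
    {
            (void)pfn;
            return 0;
    }

    /*
     * Walk page frames in fixed-size cycles instead of keeping one bitmap
     * covering all of RAM, so the reservation does not grow with machine size.
     */
    void dump_in_cycles(uint64_t max_pfn)
    {
            uint64_t pfns_per_cycle = CYCLE_BUF_BYTES * 8;  /* 1 bit per pfn here */
            uint64_t start, end, pfn;

            for (start = 0; start < max_pfn; start += pfns_per_cycle) {
                    end = start + pfns_per_cycle;
                    if (end > max_pfn)
                            end = max_pfn;

                    memset(bitmap, 0, sizeof(bitmap));
                    for (pfn = start; pfn < end; pfn++) {
                            if (!page_is_excluded(pfn))
                                    bitmap[(pfn - start) / 8] |=
                                            1u << ((pfn - start) % 8);
                    }
                    /* ... write out the pages marked in this cycle's bitmap ... */
            }
    }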

Thanks
Vivek
