makedumpfile memory usage grows with system memory size

Ken'ichi Ohmichi oomichi at mxs.nes.nec.co.jp
Thu Mar 29 04:09:18 EDT 2012


Hi Don-san,

On Wed, 28 Mar 2012 17:22:04 -0400
Don Zickus <dzickus at redhat.com> wrote:
> 
> I was talking to Vivek about kdump memory requirements and he mentioned
> that they vary based on how much system memory is used.
> 
> I was interested in knowing why that was and again he mentioned that
> makedumpfile needed lots of memory if it was running on a large machine
> (for example 1TB of system memory).
> 
> Looking through the makedumpfile README and using what Vivek remembered of
> makedumpfile, we gathered that as the number of pages grows, makedumpfile
> has to temporarily store more information in memory.  A possible reason
> was to calculate the size of the file before it is copied to its final
> destination?

On RHEL, makedumpfile uses the 2nd kernel's system memory for a bitmap.
The bitmap records whether each page of the 1st kernel is excluded or not,
so the bitmap size depends on the 1st kernel's system memory.

makedumpfile creates the bitmap as a file, /tmp/kdump_bitmapXXXXXX, and
on RHEL this file lives in the 2nd kernel's memory, because RHEL does not
mount a root filesystem while the 2nd kernel is running.


> I was curious if that was true and if it was, would it be possible to only
> process memory in chunks instead of all at once.
> 
> The idea is that a machine with 4GB of memory should consume the same
> amount of kdump runtime memory as a 1TB memory system.
> 
> Just trying to research ways to keep the memory requirements consistent
> across all memory ranges.

I think the above goal is good, but I don't have any idea for reducing
the bitmap size.  Also, I am no longer involved in makedumpfile development;
Kumagai-san is the makedumpfile maintainer now, and he will be able to
help you.
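
For what it is worth, here is a rough sketch of the chunked idea above.
This is only my illustration, not an existing makedumpfile feature; the
chunk size and the two helper functions are hypothetical placeholders.

#include <stdint.h>
#include <string.h>

#define CHUNK_PAGES  (1UL << 20)           /* hypothetical: 1M pages per cycle */
#define BITMAP_BYTES (CHUNK_PAGES / 8)     /* fixed 128 KiB working buffer */

static unsigned char bitmap[BITMAP_BYTES];

/* Placeholder stubs; the real filtering and copy logic would go here. */
static int page_is_excludable(uint64_t pfn) { return (pfn & 1) == 0; }
static void write_page_to_dump(uint64_t pfn) { (void)pfn; }

/*
 * Scan a fixed-size window of pages at a time, so the bitmap buffer
 * stays the same size no matter how much 1st-kernel memory exists.
 */
static void dump_in_chunks(uint64_t max_pfn)
{
	for (uint64_t start = 0; start < max_pfn; start += CHUNK_PAGES) {
		uint64_t end = start + CHUNK_PAGES;
		if (end > max_pfn)
			end = max_pfn;

		/* Pass 1: mark excludable pages within this window only. */
		memset(bitmap, 0, BITMAP_BYTES);
		for (uint64_t pfn = start; pfn < end; pfn++)
			if (page_is_excludable(pfn))
				bitmap[(pfn - start) / 8] |= 1 << ((pfn - start) % 8);

		/* Pass 2: copy out the pages that were not excluded. */
		for (uint64_t pfn = start; pfn < end; pfn++)
			if (!(bitmap[(pfn - start) / 8] & (1 << ((pfn - start) % 8))))
				write_page_to_dump(pfn);
	}
}

int main(void)
{
	dump_in_chunks(1UL << 22);    /* small demo range of pages */
	return 0;
}

The trade-off is that anything needing global information up front (such
as computing the final dump size before writing) would have to re-scan or
use per-chunk estimates.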


Thanks
Ken'ichi Ohmichi


