makedumpfile memory usage grows with system memory size

Don Zickus dzickus at redhat.com
Thu Mar 29 09:25:33 EDT 2012


Hello Daisuke,

On Thu, Mar 29, 2012 at 09:56:46PM +0900, HATAYAMA Daisuke wrote:
> Hello Don,
> 
> I'm missing your mail somehow, so I'm replying to Oomichi-san's mail...
> 
> From: "Ken'ichi Ohmichi" <oomichi at mxs.nes.nec.co.jp>
> Subject: Re: makedumpfile memory usage grows with system memory size
> Date: Thu, 29 Mar 2012 17:09:18 +0900
> 
> > 
> > On Wed, 28 Mar 2012 17:22:04 -0400
> > Don Zickus <dzickus at redhat.com> wrote:
> 
> >> I was curious whether that was true and, if it was, whether it would be
> >> possible to process memory in chunks instead of all at once.
> >> 
> >> The idea is that a machine with 4 GB of memory should consume the same
> >> amount of kdump runtime memory as a 1 TB memory system.
> >> 
> >> Just trying to research ways to keep the memory requirements consistent
> >> across all system memory sizes.
> 
> I think this is possible in constant memory space by creating the bitmaps
> and writing the pages for a fixed amount of memory at a time. That is, if
> we choose 4GB, we process [0, 4GB), then [4GB, 8GB), then [8GB, 12GB), and
> so on in order. The key is to restrict the target memory range of the
> filtering.

Yes, that is what I was thinking, and I am glad to hear it is possible.
Is there some place in the code where I could help try out that idea?  I
would also be curious about the time impact of processing this way (for
example, would it add a couple of milliseconds of overhead, or seconds?).

Thanks,
Don


