makedumpfile memory usage grows with system memory size

HATAYAMA Daisuke d.hatayama at jp.fujitsu.com
Thu Mar 29 08:56:46 EDT 2012


Hello Don,

I seem to have missed your mail somehow, so I'm replying to Oomichi-san's mail...

From: "Ken'ichi Ohmichi" <oomichi at mxs.nes.nec.co.jp>
Subject: Re: makedumpfile memory usage grows with system memory size
Date: Thu, 29 Mar 2012 17:09:18 +0900

> 
> On Wed, 28 Mar 2012 17:22:04 -0400
> Don Zickus <dzickus at redhat.com> wrote:

>> I was curious if that was true and if it was, would it be possible to only
>> process memory in chunks instead of all at once.
>> 
>> The idea is that a machine with 4Gigs of memory should consume the same
>> amount of kdump runtime memory as a 1TB memory system.
>> 
>> Just trying to research ways to keep the memory requirements consistent
>> across all memory ranges.

I think this is possible in constant memory space by creating the bitmaps
and writing pages one fixed-size window of memory at a time. That is, if
choosing a 4GB window, process the [0, 4GB) range, then [4GB, 8GB), then
[8GB, 12GB), and so on in order. The key is to restrict the target memory
range of the filtering.

Thanks.
HATAYAMA, Daisuke
