makedumpfile memory usage grows with system memory size
Don Zickus
dzickus at redhat.com
Tue May 15 08:35:38 EDT 2012
On Tue, May 15, 2012 at 02:57:05PM +0900, Atsushi Kumagai wrote:
> Hello Don,
>
> On Fri, 11 May 2012 09:26:01 -0400
> Don Zickus <dzickus at redhat.com> wrote:
>
> > > Thank you for your reply, Don, Vivek.
> > >
> > > As Don said, I tried to change the method used to exclude free pages and
> > > planned to resolve the memory consumption issue afterwards, because
> > > parsing the free list repeatedly may cause a performance issue.
> > >
> > > However, I'm now thinking that bounding memory consumption is more
> > > important than resolving a performance issue on large systems.
> > >
> > > So I would like to change the plan as follows:
> > >
> > > 1. Implement "iterating filtering processing" to bound memory
> > > consumption. At this stage, makedumpfile will parse the free list
> > > repeatedly, even though that may cause a performance issue.
> > >
> > > 2. Take care of the performance issue after the 1st step.
> >
> > Hello Atsushi-san,
> >
> > Hmm. The problem with the free list is that the addresses are in random
> > order, hence the need to parse it repeatedly, correct?
>
> Yes.
>
> > I figured, now that you have a solution to parse the addresses in a linear
> > way (the changes you made a couple of weeks ago), you would just continue
> > with that. With that complete, we can look at the performance issues and
> > solve them then.
> >
> > But it is up to you. You are willing to do the work, so I will defer to
> > your judgement on how best to proceed. :-)
>
> What I wanted to tell you is that I want to resolve the memory consumption
> issue as soon as possible. In other words, the particular method used to
> exclude free pages is not so important, so I'll continue with the method
> that is easiest to implement.
Ok. I look forward to your results. Thanks for your effort.
Cheers,
Don