makedumpfile memory usage grows with system memory size

Don Zickus dzickus at redhat.com
Fri May 11 09:26:01 EDT 2012


On Fri, May 11, 2012 at 10:19:52AM +0900, Atsushi Kumagai wrote:
> Hello, 
> 
> On Fri, 27 Apr 2012 08:52:14 -0400
> Don Zickus <dzickus at redhat.com> wrote:
> 
> [..]
> > > I tested the prototype based on _count and the one based on _mapcount.
> > > The former didn't work as expected, while the latter worked fine.
> > > (The former excluded some in-use pages as free pages.)
> > > 
> > > As a next step, I measured performance of the prototype based on _mapcount,
> > > please see below.
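(Aside, for anyone following along: as I understand it, the _mapcount test
boils down to something like the sketch below.  The names and the -128 value
are my own reading of the 2.6.39-era kernel sources, not taken from the
patches; makedumpfile would read the raw _mapcount value out of the dump's
mem_map and compare it.)

    #include <stdint.h>
    #include <stdbool.h>

    /* Raw value the kernel stores in page->_mapcount for buddy (free)
     * pages since 2.6.38; older kernels used the PG_buddy flag instead. */
    #define PAGE_BUDDY_MAPCOUNT_VALUE  (-128)

    /* mapcount_raw is the 32-bit _mapcount field of one struct page,
     * as read from the mem_map array in the dump. */
    static bool page_is_free_buddy(int32_t mapcount_raw)
    {
            return mapcount_raw == PAGE_BUDDY_MAPCOUNT_VALUE;
    }
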
> > 
> > Thanks for this work.  I assume it just switches the free page
> > referencing and does not attempt to cut down on the memory usage
> > (I guess that would be the next step if using mapcount is acceptable)?
> 
> Thank you for your reply, Don, Vivek.
> 
> As Don said, I first tried to change the method of excluding free pages and
> planned to resolve the memory consumption issue after that, because
> parsing the free list repeatedly may cause a performance issue.
> 
> However, I now think that capping memory consumption is more important
> than resolving the performance issue on large systems.
> 
> So I'm afraid I would like to change the plan as follows:
> 
>   1. Implement "iterating filtering processing" to cap memory
>      consumption. At this stage, makedumpfile will parse the free list
>      repeatedly, even though that may cause a performance issue.
> 
>   2. Take care of the performance issue after the 1st step.

Hello Atsushi-san,

Hmm.  The problem with the free list is that the addresses come in random
order, hence the need to parse it repeatedly, correct?
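If so, the step-1 loop I am picturing is roughly the sketch below.  This is
just my own rough outline with made-up helper names, not anything from your
patches: a fixed-size bitmap that covers one chunk of pfns at a time, where
the mem_map walk stays linear but the free-list walk has to start over for
every chunk.

    #define PFNS_PER_CYCLE  (1UL << 20)   /* pages covered by one fixed-size bitmap */

    /* Placeholder stubs standing in for the real filtering/writing code. */
    static void clear_cycle_bitmap(void) {}
    static void exclude_by_mem_map(unsigned long s, unsigned long e) {}
    static void exclude_by_free_list(unsigned long s, unsigned long e) {}
    static void write_pages(unsigned long s, unsigned long e) {}

    static void filter_and_write_all(unsigned long max_pfn)
    {
            unsigned long start, end;

            for (start = 0; start < max_pfn; start += PFNS_PER_CYCLE) {
                    end = start + PFNS_PER_CYCLE;
                    if (end > max_pfn)
                            end = max_pfn;

                    clear_cycle_bitmap();              /* same bitmap reused each cycle   */
                    exclude_by_mem_map(start, end);    /* linear walk, one pass per cycle */
                    exclude_by_free_list(start, end);  /* must re-walk the whole list     */
                    write_pages(start, end);
            }
    }

So each extra cycle re-pays the cost of the free-list walk, which is where I
would expect the performance hit from step 1 to show up.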

I figured that, now that you have a way to parse the addresses linearly
(the changes you made a couple of weeks ago), you would just continue
with that.  Once that is complete, we can look at the performance issues
and solve them then.

But it is up to you.  You are willing to do the work, so I will defer to
your judgement on how best to proceed. :-)

Cheers,
Don

> 
> 
> Thanks
> Atsushi Kumagai
> 
> > 
> > > 
> > > 
> > > Performance Comparison:
> > > 
> > >   Explanation:
> > >     - The new method supports 2.6.39 and later, and it needs vmlinux.
> > > 
> > >     - Currently, the prototype doesn't support PG_buddy because the value of
> > >       PG_buddy differs depending on the kernel configuration and isn't stored
> > >       in VMCOREINFO. However, I'll extend get_length_of_free_pages() for
> > >       PG_buddy once the value of PG_buddy is stored in VMCOREINFO.
> > > 
> > >     - The prototype adds dump_level "32" to select the new method, but I don't
> > >       plan to extend dump_level in the official version.
> > > 
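About the PG_buddy item above: if I read it right, all that is missing is for
the kernel to export the flag's bit number, i.e. something along the lines of
the (untested) one-liner below in crash_save_vmcoreinfo_init().  makedumpfile
could then pick the value up from the resulting NUMBER(PG_buddy)= line in the
vmcoreinfo note instead of hard-coding it.  This only matters for kernels that
still have the PG_buddy page flag, of course.

    /* kernel/kexec.c, crash_save_vmcoreinfo_init(): untested sketch */
    VMCOREINFO_NUMBER(PG_buddy);

    /*
     * The vmcoreinfo note then carries a line such as
     *     NUMBER(PG_buddy)=18
     * (the exact value depends on the kernel configuration, which is
     * precisely why it needs to be exported rather than assumed).
     */
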
> > >   How to measure:
> > >     I measured execution times with a 5GB vmcore in the cases below,
> > >     using the attached patches.
> > > 
> > >       - dump_level 16: exclude only free pages with the current method
> > >       - dump_level 31: exclude all excludable pages with the current method
> > >       - dump_level 32: exclude only free pages with the new method
> > >       - dump_level 47: exclude all excludable pages with the new method
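A side note for anyone reading along: dump_level is a bitmask, so the four
levels above decompose into individual filter bits.  The names below are how
I remember them from makedumpfile.h, plus a made-up one for the experimental
bit, so treat them as illustrative only.

    /* dump_level bits (DL_EXCLUDE_* names from memory; DL_EXCLUDE_FREE_NEW
     * is my own name for the experimental bit added by these patches). */
    enum dump_level_bits {
            DL_EXCLUDE_ZERO      = 1,    /* pages filled with zero       */
            DL_EXCLUDE_CACHE     = 2,    /* cache pages without private  */
            DL_EXCLUDE_CACHE_PRI = 4,    /* cache pages with private     */
            DL_EXCLUDE_USER_DATA = 8,    /* user process data pages      */
            DL_EXCLUDE_FREE      = 16,   /* free pages, current method   */
            DL_EXCLUDE_FREE_NEW  = 32,   /* free pages, new method       */
    };

    /* 16 = free only (current), 31 = 1|2|4|8|16 (all, current),
       32 = free only (new),     47 = 1|2|4|8|32 (all, new)       */
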
> > > 
> > >   Result:
> > >      ------------------------------------------------------------------------
> > >      dump_level    size [Bytes]    total time    d_all_time    d_new_time
> > >      ------------------------------------------------------------------------
> > >          16         431864384         28.6s         4.19s         0s
> > >          31         111808568         14.5s         0.9s          0s
> > >          32         431864384         41.2s        16.8s          0.05s
> > >          47         111808568         31.5s        16.6s          0.05s
> > >      ------------------------------------------------------------------------
> > > 
> > >   Discussion:
> > >     I think the new method can be used instead of the current method in many cases.
> > >     (However, the result for dump_level 31 looks too fast; I'm investigating why
> > >     that case runs so quickly.)
> > > 
> > >     I would like to get your opinion.
> > 
> > I am curious.  Looking through your patches, it seems the increase in
> > d_all_time should come from the new method, because the if-statement is
> > set up to only accept the new method.  Therefore I was expecting that
> > d_new_time for the new method, when added to d_all_time for the current
> > method, would come close to d_all_time for the new method.  IOW I would
> > have expected the extra 10-12 seconds from the new method to show up in
> > d_new_time.
> > 
> > However, I do not see that.  d_new_time hardly increases at all.  So what
> > accounts for the increase in d_all_time with the new method?
> > 
> > Thanks,
> > Don
> > 