kexec+kdump + vmcoreinfo patch

Neil Horman nhorman at redhat.com
Mon Aug 27 13:55:57 EDT 2007


On Mon, Aug 27, 2007 at 10:28:27AM -0700, Randy Dunlap wrote:
> On Mon, 27 Aug 2007 13:02:53 -0400 Don Zickus wrote:
> 
> > On Mon, Aug 27, 2007 at 12:55:38PM -0400, Neil Horman wrote:
> > > > crashkernel=64M@16M
> > > > 
> > > Hmm, well that should be enough.  Looking at your log, it appears as though you
> > > have a 512k chunk allocatable, which seems sufficient to go on a little
> > > longer.  The fact that you're not seems to indicate that you are allocating a
> > > suspiciously large single chunk of RAM.  I'd configure your initramfs to drop to
> > > a shell prompt before it sets up lp0.  Then you can step through the actions of
> > > the init script and monitor the contents of /proc/slabinfo to get an idea of
> > > what's eating up all your lowmem prior to the OOM kill.
> > > 
> > > Neil
> > > 
> > 
> > <snipped from Randy's output>
> > 
> > Out of memory: kill process 680 (boot) score 309 or a child
> > Killed process 701 (S01boot.udev)
> > 
> > Heh, it's udev again.  What a surprise! </sarcasm>
> 
> Right, that's hardly a surprise.  :(
> 
> > Neil, didn't we solve this with an init 1 or something (ignoring the whole
> > busybox thing).
> 
> I was trying to boot to runlevel 3.  I can just boot to runlevel 1
> for a workaround (I think).
> 
Don is correct.  If we don't capture a dump from the initramfs, we mount the root
fs and run init.  We do wind up booting to runlevel 3, but the kdump script
runs early enough that I don't think udevd has a chance to start, and after we
capture a dump we immediately reboot, so I wouldn't expect to see any udevd
problems here.
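To make the slabinfo check concrete, here is one way to rank slab caches by
approximate memory footprint between init steps.  This is a sketch, not part of
the kdump scripts, and it assumes the 2.6-era /proc/slabinfo layout, where
field 1 is the cache name, field 3 the total object count, and field 4 the
object size in bytes:

```shell
# Rank slab caches by approximate memory use (num_objs * objsize).
# Assumed /proc/slabinfo layout (first two lines are headers):
#   name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> ...
awk 'NR > 2 { printf "%10d KB  %s\n", $3 * $4 / 1024, $1 }' /proc/slabinfo \
    | sort -rn | head
```

Run it from the shell prompt after each step of the init script; a cache whose
total keeps ballooning right before the OOM kill is a good suspect.  You can
also confirm the crashkernel reservation actually took effect with
`grep -i "crash kernel" /proc/iomem`.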

Regards
Neil

> Thanks.
> 
> ---
> ~Randy
> *** Remember to use Documentation/SubmitChecklist when testing your code ***

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *Red Hat, Inc.
 *nhorman at redhat.com
 *gpg keyid: 1024D / 0x92A74FA1
 *http://pgp.mit.edu
 ***************************************************/
