Accessing Thread Information in kernel crash dumps with ddd+gdb
pdelaney at bluelane.com
Fri Apr 18 16:07:08 EDT 2008
Vivek Goyal wrote:
>On Thu, Apr 17, 2008 at 05:16:55PM -0700, Piet Delaney wrote:
>>I've been using kgdb for a while with our 2.6.12 and now 2.6.16 kernel
>>as well as kdump/kexec with our 2.6.16 kernel. I'm a bit disappointed
>>with the visibility of local variables on the threads/tasks not currently
>>running on CPUs. Both crash and the gdb macros that you guys wrote
>>show the most important stuff but I'd prefer to be able to see everything
>>with gdb/ddd as I can with kgdb; including all local variables and formal
>>parameters at each stack frame.
>>A long time ago I used gdb on SunOS 4.1.4 and used to simply set $fp
>>and $sp from the saved information in the U-block to view a process.
>>I wish gdb would allow me to run your macros, btt for example, extract
>>the stack pointer from task.thread.esp, assign it temporarily to $sp,
>>then do the backtrace command and see everything. Changing $sp and $fp
>>for a task like I used to do with gdb on SunOS 4.1.4 and then using
>>ddd+gdb to view stack formals and locals would be nice. Just doing a
>>'set write on' doesn't help; gdb wants a process and I can't seem to
>>satisfy it by simply setting registers.
>>I was wondering if any of you guys have been thinking of anything like
>>this and had any hacks or ideas on how to see the locals and formals
>>for all the tasks.
>>One thought I had was a minor hack of the kexec code to do something
>>like your gdb macros: walk through the task list and append ELF notes,
>>as is done for the crashing CPUs, for each task. I have no idea if gdb
>>has a limit on the number of threads that can be provided. I suppose
>>I'd leave it a KEXEC config variable to enable this, as some would
>>argue that it's not as safe as simply saving the regs for the active
>>CPUs. This would leave 'info threads' with gdb similar to 'ps' with
>>crash and closer to the experience with kgdb.
>IIUC, you are suggesting that we create elf notes even for non-active
>tasks in vmcore.
> We should not be doing that.
>- It is not safe to traverse through task list after system has crashed.
I agree it's not 100% safe, but for many developers it's a risk
worth taking. For example, we make a lot of changes in the
TCP/IP stack to implement a proxy for filtering network
traffic. Virtually all of the bugs are ones we make in the
networking code, and having a precise view of each stack would
be helpful. It's extremely unlikely that the task structures
have been whacked in our case, and likely the same for many
other developers.
>- We reserve the memory for elf notes at system boot. At that time we
> have no idea how many tasks the system will have at the time of crash.
How about reserving memory with each task structure for the
ELF notes? On Solaris I reserved a page for each CPU so the
register windows could be saved and mapped in during a stack
overflow. It's not unreasonable
to allocate the space with the task; especially if it's a kernel
config option. If no one used it we could remove it and go for
an alternate approach.
>I think following can be a way forward for your requirement.
>- Either gdb should provide a SunOS kind of facility where one can
> provide a stack pointer and switch the task context. (I don't know
> if there is already a way to do that).
I'll talk on the gdb mailing list about it. Perhaps we could
initially get it in as a developer maintenance function to allow
the registers to be changed for a thread in a core dump.
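Concretely, the kind of session I have in mind looks something like
this (the task address and values are made up, and writing registers
against a core target is exactly the facility gdb lacks today):

```gdb
# Hypothetical: assumes i386 and that gdb allowed register writes on a core.
(gdb) p/x ((struct task_struct *)0xc12f8030)->thread.esp
$1 = 0xc1a51f2c
(gdb) set $sp = 0xc1a51f2c
(gdb) set $pc = schedule          # sleeping tasks are parked in schedule()
(gdb) bt full                     # locals and formals at every frame
```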
>- Or one can write a user space tool, which parses original vmcore,
> walks through task list, prepare elf notes for all the tasks and emit
> a new vmcore which is fed to gdb.
Sounds difficult. I was talking to Dave Anderson about having
crash provide task info to his embedded gdb process but he
wasn't supportive of that approach.
Having a piece of code that has to walk through the task list and
modify the ELF notes seems a lot harder than just having the kernel
do it. More code that has to be maintained.
Isn't there precedent in other kernels, like FreeBSD and Solaris,
of core dumps providing task information? If it was conditional
we could see just how many developers consider the risk of causing
a problem during the crash dump an issue.
Looks to me like the size of the ELF notes is quite small and wouldn't
be a significant burden to add to the task structure.