[2/3] 2.6.22-rc2: known regressions v2

Linus Torvalds torvalds at linux-foundation.org
Fri May 25 13:50:38 EDT 2007

On Fri, 25 May 2007, Andrew Morton wrote:
> > > There is an additional factor - dumps contain data which variously is -
> > > copyright third parties, protected by privacy laws, just personally
> > > private, security sensitive (eg browser history) and so on.
> > 
> > Yes. 
> We're uninterested in pagecache and user memory and they should be omitted
> from the image (making it enormously smaller too).

> The people who would use crash-dumps (big sensitive firms) don't trust us.

And they'd be right not to trust you. You end up having a _lot_ of 
sensitive data even if you avoid user memory and page cache. The network 
buffers, the dentries, and just stale data that hasn't been overwritten.

So if you end up having secure data on that machine, you should *never* 
send a dump to somebody you don't trust. For the financial companies 
(which are practically the only ones that would use dumps) there can even 
be legal reasons why they cannot do that!

> That leaves security keys and perhaps filenames, and these could probably
> be addressed.

It leaves almost every single kernel allocation, and no, it cannot be 
addressed.

How are you going to clear out the network packets that you have in 
memory? They're just kmalloc'ed. 

> > I'm sure we've had one or two crashdumps over the years that have actually 
> > clarified a bug.
> > 
> > But I seriously doubt it is more than a handful. 
> We've had a few more than that, but all the ones I recall actually came
> from the kdump developers who were hitting other bugs and who just happened
> to know how to drive the thing.

Right, I don't dispute that some _developers_ might use dumping. I dispute 
that any user would practically ever use it.

And even for developers, I suspect it's _so_ far down the list of things 
you do, that it's practically zero.

> > But 99% of the time, the problem doesn't happen on a developer machine, 
> > and even if it does, 90% of the time you really just want the traceback 
> > and register info that you get out of an oops.
> Often we don't even get that: "I was in X and it didn't hit the logs".


> You can learn a hell of a lot by really carefully picking through kernel
> memory with gdb.

.. but you can learn equally much with other methods that do *not* involve 
the pain and suffering that is a kernel dump.
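One such method, as a sketch: on a live machine you can point gdb at /proc/kcore and browse kernel memory directly, with no dump at all. This assumes a kernel built with debug info and /proc/kcore support; the vmlinux path varies per build:

```shell
# Inspect the running kernel's memory in place, no crash dump needed.
# vmlinux must be the uncompressed kernel image with debugging symbols.
gdb ./vmlinux /proc/kcore

# Then inside gdb, for example:
#   (gdb) print jiffies_64
#   (gdb) print init_task.comm
```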

Setting up netconsole or the firewire tools is much easier. The firewire 
thing in particular is nice, because it doesn't actually rely on the 
target having to even know about it (other than enabling the "remote DMA 
access" thing once on bootup).

If you've ever picked through a kernel dump after-the-fact, I just bet you 
could have done equally well with firewire, and it would have had _zero_ 
impact on your kernel image. Now, contrast that with kdump, and ask 
yourself: which one do you think is worth concentrating effort on?

 - kdump: lots of code and maintenance effort, doesn't work if the CPU 
   locks up, requires a lot of learning to go through the dump.

 - firewire: zero code, no maintenance effort, works even if the CPU locks 
   up. Still does require the same learning to go through the end result.

Which one wins? I know which one I'll push.

