Reducing the size of the dump file/speeding up collection

qiaonuohan qiaonuohan at cn.fujitsu.com
Wed Sep 16 20:27:39 PDT 2015


On 09/16/2015 04:30 PM, Nikolay Borisov wrote:
> Hello,
>
> I've been using makedumpfile as the crash collector with the -d31
> parameter. The machines this is being run on usually have 128-256GB of
> RAM, and the resulting crash dumps are in the range of 14-20GB, which is
> very big for the type of analysis I usually perform on crashed
> machines. I was wondering whether there is a way to further reduce the
> size and the time it takes to collect the dump (currently around 25
> minutes). I've seen reports of people with TBs of RAM taking that long,
> so a machine with 256GB should be even faster. I've been running this
> configuration on kernels 3.12.28 and 4.1, where mmap on the vmcore
> file is supported.
>
> Please advise

Hi Nikolay,

Yes, this is an issue we are very much concerned about.
For the current situation, try --split; it will save time.
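A rough sketch of a manual invocation in the kdump kernel (the output file
names and the number of pieces are only illustrative):

    # Write the filtered dump as several pieces; makedumpfile produces
    # them in parallel, one writer process per output file.
    makedumpfile --split -d 31 -l /proc/vmcore dumpfile1 dumpfile2 dumpfile3

    # Later, merge the pieces back into a single dump file for analysis
    # (split pieces first, combined output file last; see makedumpfile(8)):
    makedumpfile --reassemble dumpfile1 dumpfile2 dumpfile3 dumpfile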


Also, try lzo or snappy instead of zlib; these two compression formats are
faster, but the resulting dump needs more space. Or, if you still want zlib
(to save space), try multiple threads; the following thread should help you:

https://lists.fedoraproject.org/pipermail/kexec/2015-September/002322.html
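As a rough sketch (dump file names and the thread count are only
illustrative, and --num-threads assumes the multi-thread patches from the
thread above are applied):

    # LZO (-l) and snappy (-p) compress faster than zlib (-c), at the cost
    # of a somewhat larger dump file; makedumpfile must be built with
    # lzo/snappy support for these options to be available.
    makedumpfile -l -d 31 /proc/vmcore dumpfile    # LZO
    makedumpfile -p -d 31 /proc/vmcore dumpfile    # snappy

    # With the multi-thread patches applied, zlib compression can be
    # spread across several CPUs:
    makedumpfile -c -d 31 --num-threads 4 /proc/vmcore dumpfile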


-- 
Regards
Qiao Nuohan

>
> Regards,
> Nikolay
