Reducing the size of the dump file/speeding up collection

qiaonuohan qiaonuohan at cn.fujitsu.com
Thu Sep 17 19:38:47 PDT 2015


On 09/17/2015 02:32 PM, Nikolay Borisov wrote:
> Hi Qiao,
>
> Thanks for the reply. So far I haven't been using the compression
> feature of makedumpfile. But I want to ask: wouldn't compression, if
> anything, make the dump process slower, since in addition to having to
> write the dump to disk it also has to compress it, which puts more
> strain on the CPU? Also, which part of the dump process is the bottleneck:
>
> - Reading from /proc/vmcore - that has mmap support, so it should be
> fairly fast?
> - Discarding unnecessary pages as memory is being scanned?
> - Writing/compressing content to disk?

I cannot recall the exact percentage for each part, but writing/compressing
takes most of the time.

1. mmap is used to speed up reading.
2. --split is used to split the dump task into several processes, so
    compressing and writing are sped up.
3. Multi-threading is another option for speeding up compression. It is a
    recently committed patch, so you cannot find it in the master branch;
    check out the devel branch or find it here:

http://sourceforge.net/p/makedumpfile/code/commit_browser

It makes makedumpfile able to read and compress pages in parallel.
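
As a rough sketch (assuming -d 31 and /proc/vmcore as in your setup, and
lzo compression selected with -l), splitting the work across three
processes looks like:

    makedumpfile -l --split -d 31 /proc/vmcore dumpfile1 dumpfile2 dumpfile3

With the multi-thread patch applied, zlib compression (-c) can be
parallelized via the --num-threads option the patch adds (8 threads here
is an arbitrary choice; tune it to your CPU count):

    makedumpfile -c -d 31 --num-threads 8 /proc/vmcore dumpfile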

>
> Regards,
> Nikolay
>
> On 09/17/2015 06:27 AM, qiaonuohan wrote:
>> On 09/16/2015 04:30 PM, Nikolay Borisov wrote:
>>> Hello,
>>>
>>> I've been using makedumpfile as the crash collector with the -d31
>>> parameter. The machines this is being run on usually have 128-256GB of
>>> RAM, and the resulting crash dumps are in the range of 14-20GB, which is
>>> very big for the type of analysis I'm usually performing on crashed
>>> machines. I was wondering whether there is a way to further reduce the
>>> size and the time it takes to collect the dump (currently around 25
>>> minutes). I've seen reports of people with TBs of RAM taking that long,
>>> meaning a machine with 256GB should be even faster. I've been running
>>> this configuration on kernels 3.12.28 and 4.1, where mmap for the
>>> vmcore file is supported.
>>>
>>> Please advise
>>
>> Hi Nikolay,
>>
>> Yes, this issue is something we are quite concerned about.
>> For now, try --split; it will save time.
>>
>>
>> Also, use lzo/snappy instead of zlib; these two compression formats are
>> faster but need more space. Or, if you still want zlib (to save space),
>> try multiple threads; check the following site, it will help you:
>>
>> https://lists.fedoraproject.org/pipermail/kexec/2015-September/002322.html
>>
>>


-- 
Regards
Qiao Nuohan


