[PATCH RFC 00/11] makedumpfile: parallel processing

qiaonuohan qiaonuohan at cn.fujitsu.com
Sun Jun 14 18:59:40 PDT 2015


On 06/11/2015 11:47 AM, "Zhou, Wenjian/周文剑" wrote:
> Hello,
>
> Though --split can process in parallel, it cannot produce just one core file.
> More processes bring better performance, but they also mean more split core
> files. People may want to produce just one core file, yet still prefer
> parallel processing for its better performance.
>
> So parallel processing by multiple threads is needed.
> In the future, multiple threads could also be used inside each split process
> to accelerate it further.
>
>
> On 06/10/2015 02:06 PM, Atsushi Kumagai wrote:
>> Hello Zhou,
>>

Hello Atsushi,

>>> This patch set implements parallel processing by means of multiple threads.
>>> With it, multiple threads can be used to read and compress pages, which
>>> saves time. This feature only supports creating a dumpfile in
>>> kdump-compressed format from a vmcore in kdump-compressed or ELF format.
>>> Currently, sadump and Xen kdump are not supported.
>>
>> makedumpfile already has a parallel processing feature (--split); it
>> parallelizes not only page compression but also disk I/O, so I think
>> --split covers what you want to do with this patch.
>>
>> In what cases do you think this patch will be effective, and what is
>> its advantage?

Since commit 428a5e99eea929639ab9c761f33743f78a961b1a ("kdumpctl: Pass
disable_cpu_apicid to kexec of capture kernel") has been merged, it is
possible for us to use multiple CPUs in the 2nd kernel.
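
As an illustration (the exact file and variable names vary by distribution,
so please take this as an assumption about a typical setup rather than a
recipe), the capture kernel can be given more than one CPU via the kdump
configuration, e.g. in /etc/sysconfig/kdump:

	KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=4 reset_devices"

With the common default of nr_cpus=1, only one CPU comes up in the 2nd
kernel, and neither --split nor threads can help.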

Using multiple threads is an attempt to take advantage of those CPUs in the
2nd kernel. Since memory keeps growing larger, dumping takes more and more
time, so why not use multiple CPUs?
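
To illustrate the idea, here is a minimal sketch of per-page worker threads,
assuming zlib compression and a simple shared page counter. This is NOT the
patch code; all names (worker, next_pfn, and so on) are made up for
illustration, and a real implementation also needs an ordered writer so that
a single kdump-compressed file comes out.

/* Minimal sketch: NR_THREADS threads claim pages from a shared counter
 * and compress them independently with zlib.
 * Build with: gcc -pthread sketch.c -lz */
#include <pthread.h>
#include <stdio.h>
#include <zlib.h>

#define PAGE_SIZE  4096
#define NR_THREADS 4
#define NR_PAGES   1024

static unsigned char dump[NR_PAGES][PAGE_SIZE]; /* stands in for vmcore */
static long next_pfn;                           /* next page to claim   */
static pthread_mutex_t pfn_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
	unsigned char out[compressBound(PAGE_SIZE)];

	(void)arg;
	for (;;) {
		long pfn;

		/* claim the next unprocessed page under a lock */
		pthread_mutex_lock(&pfn_lock);
		pfn = next_pfn++;
		pthread_mutex_unlock(&pfn_lock);
		if (pfn >= NR_PAGES)
			return NULL;

		uLongf out_len = sizeof(out);
		/* compress this thread's page independently */
		if (compress2(out, &out_len, dump[pfn], PAGE_SIZE,
			      Z_BEST_SPEED) != Z_OK)
			fprintf(stderr, "pfn %ld: compress failed\n", pfn);
		/* a real implementation would queue (pfn, out, out_len)
		 * to a single writer that keeps the output in order,
		 * so only one dumpfile is produced */
	}
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	int i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

The point is that compression, which dominates the CPU time of dumping,
scales with the number of worker threads while the output remains a single
dumpfile.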

On the other hand, --split does help a lot to improve performance. But more
processes mean more files, and saving and managing multiple files is not that
convenient.
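
For example, with the documented --split syntax, three-way parallelism
already means three output files to store, transfer, and feed to the
analysis tool together:

	# makedumpfile -d 31 --split dumpfile1 dumpfile2 dumpfile3 /proc/vmcore

A multi-threaded makedumpfile can use the same number of CPUs while still
writing a single dumpfile.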

Multiple threads do have merit in improving performance. And later, as Zhou
said, we can also try to combine --split with multiple threads to save even
more time.
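
Assuming the option introduced by this series is spelled --num-threads (see
patch 11; please treat the exact spelling here as my assumption), the
combination might eventually look like:

	# makedumpfile -d 31 --num-threads 4 --split dumpfile1 dumpfile2 /proc/vmcore

i.e. each split process would additionally run several read/compress threads.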

-- 
Regards
Qiao Nuohan

>>
>>
>> Thanks
>> Atsushi Kumagai
>>
>>>
>>> Qiao Nuohan (11):
>>>   Add readpage_kdump_compressed_parallel
>>>   Add mappage_elf_parallel
>>>   Add readpage_elf_parallel
>>>   Add read_pfn_parallel
>>>   Add function to initial bitmap for parallel use
>>>   Add filter_data_buffer_parallel
>>>   Add write_kdump_pages_parallel to allow parallel process
>>>   Add write_kdump_pages_parallel_cyclic to allow parallel process in
>>>     cyclic_mode
>>>   Initial and free data used for parallel process
>>>   Make makedumpfile available to read and compress pages parallelly
>>>   Add usage and manual about multiple threads process
>>>
>>> Makefile       |    2 +
>>> erase_info.c   |   29 +-
>>> erase_info.h   |    2 +
>>> makedumpfile.8 |   24 +
>>> makedumpfile.c | 1505 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>> makedumpfile.h |   79 +++
>>> print_info.c   |   16 +
>>> 7 files changed, 1652 insertions(+), 5 deletions(-)
>>>
>>>
>
>



