[PATCH v2 00/10] makedumpfile: parallel processing

"Zhou, Wenjian/周文剑" zhouwj-fnst at cn.fujitsu.com
Mon Jul 6 06:19:13 PDT 2015


Hello Atsushi Kumagai,

I have tried many things, and I now think the big performance degradation
only occurs on certain CPUs.
I can think of two possible causes, and I need your help to confirm which
one is the real cause.

The following tests can also be run against a saved dumpfile instead of /proc/vmcore.

Test 1: determine whether the degradation is caused by multi-threading.
apply patch: test1
command1: ./makedumpfile -c /proc/vmcore vmcore --num-threads 1
command2: ./makedumpfile -c /proc/vmcore vmcore --num-threads 8

It would be better to run some tests with -l as well.
command1: ./makedumpfile -l /proc/vmcore vmcore
command2: ./makedumpfile -l /proc/vmcore vmcore --num-threads 1
command3: ./makedumpfile -l /proc/vmcore vmcore --num-threads 8

Test 2: determine whether the degradation is caused by doing the compression in a thread (see the sketch after this list).
2.1:
	apply patch: test2.1
	command: ./makedumpfile -c /proc/vmcore vmcore --num-threads 1
2.2:
	apply patch: test2.2
	command: ./makedumpfile -c /proc/vmcore vmcore --num-threads 1
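
Roughly speaking, the per-thread compress_time[] numbers in your log below are
collected with gettimeofday() around the compress2() call, along the lines of
the simplified sketch here (this is only an illustration, not the exact code
of the test patches; the function name, buffer handling and compression level
are assumptions):

#include <sys/time.h>
#include <zlib.h>

#define NR_THREADS 8

/* one counter per worker thread, like compress_time[0..3] in the log */
static unsigned long long compress_time_us[NR_THREADS];

/* returns 1 on success, 0 on zlib error */
int compress_page(int tid, const unsigned char *page, uLong page_size,
		  unsigned char *out, uLongf *out_len)
{
	struct timeval t1, t2;
	int ret;

	gettimeofday(&t1, NULL);
	/* the compression level here is only illustrative */
	ret = compress2(out, out_len, page, page_size, Z_BEST_SPEED);
	gettimeofday(&t2, NULL);

	compress_time_us[tid] += (t2.tv_sec - t1.tv_sec) * 1000000ULL
			       + (t2.tv_usec - t1.tv_usec);

	return ret == Z_OK;
}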

Thanks a lot.

BTW, could you tell me the CPU model name, zlib version, and glibc version?

-- 
Thanks
Zhou Wenjian

On 06/30/2015 05:06 PM, Atsushi Kumagai wrote:
>> On 06/26/2015 03:49 PM, Atsushi Kumagai wrote:
>>> I attached 5 processors to the VM and I confirmed that all threads
>>> consumed full cpu time by top(1) on the host:
>>>
>>>       PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>>     17614 qemu      20   0 5792m 4.9g 5652 R 435.1 72.8  29:02.17 qemu-kvm
>>>
>>> So I think the performance should be improved...
>>
>> Since I can't get that result on all of the machines here, could you test it
>> with the attached patch "time" and show me the output?
>> Using "./makedumpfile -c --num-threads 4 /proc/vmcore dumpfile" is OK.
>>
>> The patch "time" is attached.
>
> Here is the result:
>
> / # makedumpfile -c --num-threads 4 /proc/vmcore /mnt/dumpfile
> Copying data                       : [100.0 %] |
> Copying data                       : [100.0 %] \
>
> The dumpfile is saved to /mnt/dumpfile.
>
> makedumpfile Completed.
> lock time: 310s935500us
> write time: 3s970037us
> hit time: 6s316043us
> find time: 317s926654us
> loop_time: 37s321800us
> thread consume_time: 0s0us
> thread timea: 0s0us
> thread timeb: 0s0us
> read_time[0]: 8s637011us
> lock_current_time[0]: 0s284428us
> found_time[0]: 60s366795us
> lock_consumed_time[0]: 2s782596us
> compress_time[0]: 301s427073us
> read_time[1]: 8s435914us
> lock_current_time[1]: 0s271680us
> found_time[1]: 60s329026us
> lock_consumed_time[1]: 2s849061us
> compress_time[1]: 302s98620us
> read_time[2]: 8s380550us
> lock_current_time[2]: 0s270388us
> found_time[2]: 60s209376us
> lock_consumed_time[2]: 3s297574us
> compress_time[2]: 301s486768us
> read_time[3]: 8s550662us
> lock_current_time[3]: 0s278997us
> found_time[3]: 60s476702us
> lock_consumed_time[3]: 3s49184us
> compress_time[3]: 301s718390us
> count1: 172
> count2: 70921401
> count3: 0
> count4: 0
> count5: 0
> count6: 0
> count7: 0
> exec time: 380s125494us
>
>
> BTW, I fixed a small mistake before testing:
>
> - write_time = (write2.tv_sec - write1.tv_sec) * 1000000 + (write2.tv_usec - write1.tv_usec);
> + write_time += (write2.tv_sec - write1.tv_sec) * 1000000 + (write2.tv_usec - write1.tv_usec);
>
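
For clarity, the one-line fix above turns the write-time measurement from an
overwrite into an accumulation, i.e. the usual gettimeofday() elapsed-time
pattern, roughly like this minimal sketch (the variable names match the quoted
diff, but the surrounding function is assumed for illustration):

#include <sys/time.h>
#include <unistd.h>

static struct timeval write1, write2;
static unsigned long long write_time;	/* total microseconds spent writing */

void timed_write(int fd, const void *buf, size_t len)
{
	gettimeofday(&write1, NULL);
	write(fd, buf, len);		/* error handling omitted */
	gettimeofday(&write2, NULL);

	/* accumulate over all writes instead of keeping only the last one */
	write_time += (write2.tv_sec - write1.tv_sec) * 1000000ULL
		    + (write2.tv_usec - write1.tv_usec);
}
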
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: test1
URL: <http://lists.infradead.org/pipermail/kexec/attachments/20150706/5100b882/attachment-0001.ksh>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test2.1
Type: application/x-troff-man
Size: 8159 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/kexec/attachments/20150706/5100b882/attachment-0001.1>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test2.2
Type: application/x-troff-man
Size: 8501 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/kexec/attachments/20150706/5100b882/attachment-0001.2>

