[PATCH v2 00/10] makedumpfile: parallel processing
Atsushi Kumagai
ats-kumagai at wm.jp.nec.com
Tue Jun 30 02:06:38 PDT 2015
>On 06/26/2015 03:49 PM, Atsushi Kumagai wrote:
>> I attached 5 processors to the VM and I confirmed that all threads
>> consumed full cpu time by top(1) on the host:
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 17614 qemu 20 0 5792m 4.9g 5652 R 435.1 72.8 29:02.17 qemu-kvm
>>
>> So I think the performance must be improved...
>
>Since I can't reproduce that result on the machines here, could you test it with the timing patch
>and show me the output?
>Using "./makedumpfile -c --num-threads 4 /proc/vmcore dumpfile" is OK.
>
>The attached patch is the timing patch.
Here is the result:
/ # makedumpfile -c --num-threads 4 /proc/vmcore /mnt/dumpfile
Copying data : [100.0 %] |
Copying data : [100.0 %] \
The dumpfile is saved to /mnt/dumpfile.
makedumpfile Completed.
lock time: 310s935500us
write time: 3s970037us
hit time: 6s316043us
find time: 317s926654us
loop_time: 37s321800us
thread consume_time: 0s0us
thread timea: 0s0us
thread timeb: 0s0us
read_time[0]: 8s637011us
lock_current_time[0]: 0s284428us
found_time[0]: 60s366795us
lock_consumed_time[0]: 2s782596us
compress_time[0]: 301s427073us
read_time[1]: 8s435914us
lock_current_time[1]: 0s271680us
found_time[1]: 60s329026us
lock_consumed_time[1]: 2s849061us
compress_time[1]: 302s98620us
read_time[2]: 8s380550us
lock_current_time[2]: 0s270388us
found_time[2]: 60s209376us
lock_consumed_time[2]: 3s297574us
compress_time[2]: 301s486768us
read_time[3]: 8s550662us
lock_current_time[3]: 0s278997us
found_time[3]: 60s476702us
lock_consumed_time[3]: 3s49184us
compress_time[3]: 301s718390us
count1: 172
count2: 70921401
count3: 0
count4: 0
count5: 0
count6: 0
count7: 0
exec time: 380s125494us
BTW, I fixed a small mistake in the timing patch before testing:
- write_time = (write2.tv_sec - write1.tv_sec) * 1000000 + (write2.tv_usec - write1.tv_usec);
+ write_time += (write2.tv_sec - write1.tv_sec) * 1000000 + (write2.tv_usec - write1.tv_usec);
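For reference, here is a minimal, self-contained sketch of the accumulation pattern the fix implies, assuming the timing patch uses gettimeofday()-based measurement in microseconds. The names write1, write2 and write_time follow the snippet above; write_buffer() and the loop are hypothetical stand-ins for the timed region, not code from the actual patch.

    #include <stdio.h>
    #include <sys/time.h>

    static long write_time;              /* accumulated microseconds */

    static void write_buffer(void)
    {
            /* placeholder for the real work being timed */
    }

    static void timed_write(void)
    {
            struct timeval write1, write2;

            gettimeofday(&write1, NULL);
            write_buffer();
            gettimeofday(&write2, NULL);

            /* Accumulate (+=) rather than overwrite (=), so the total
             * covers every call instead of only the last one. */
            write_time += (write2.tv_sec - write1.tv_sec) * 1000000
                        + (write2.tv_usec - write1.tv_usec);
    }

    int main(void)
    {
            int i;

            for (i = 0; i < 3; i++)
                    timed_write();

            printf("write time: %lds%ldus\n",
                   write_time / 1000000, write_time % 1000000);
            return 0;
    }

With the original "=", only the final write would be reported, which is why the accumulated total above is the meaningful number.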
Thanks
Atsushi Kumagai