[PATCH v4] Improve the performance of --num-threads -d 31

"Zhou, Wenjian/周文剑" zhouwj-fnst at cn.fujitsu.com
Fri Apr 1 04:21:45 PDT 2016


On 04/01/2016 02:27 PM, Minfei Huang wrote:
> On 03/31/16 at 05:09pm, "Zhou, Wenjian/周文剑" wrote:
>> Hello Minfei,
>>
>> Thanks for your results.
>> And I have some questions.
>>
>> On 03/31/2016 04:38 PM, Minfei Huang wrote:
>>> Hi, Zhou.
>>>
>>> I have tested the increasing patch on 4T memory machine.
>>>
>>> makedumpfile fails to dump the vmcore if only about 384M of memory is
>>> reserved for the 2nd kernel by crashkernel=auto. But once the reserved
>>> memory is enlarged to 10G, makedumpfile can dump the vmcore successfully.
>>>
>>
>> Will it fail with patch v3, or just with v4?
>
> Both v3 and v4 can work well, once the reserved memory is enlarged manually.
>
>> I don't think it is a problem.
>> If 128 CPUs are enabled in the second kernel, there won't be much memory left when the total memory is only 384M.
>
> Enable 128 CPUs with 1GB reserved memory.
> kdump:/# /sysroot/bin/free -m
>                total        used        free      shared  buff/cache   available
> Mem:            976          97         732           6         146         774
>
> Enable 1 CPU with 1GB reserved memory.
> kdump:/# /sysroot/bin/free -m
>                total        used        free      shared  buff/cache   available
> Mem:            991          32         873           6          85         909
>
> The extra 127 enabled CPUs consume about 65MB, so I think that is acceptable
> in the kdump kernel.
>
> From the test result, most of the memory is consumed by makedumpfile itself.
> crashkernel=auto no longer works once the option --num-threads is set.
> What's more, there is no warning telling the user to enlarge the reserved
> memory.
>

Yes, we should warn users when they request too many threads.
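
For example, the kdump script (or makedumpfile itself) could do a rough
pre-flight check before starting the dump. This is only a sketch to illustrate
the idea, not existing code; the per-thread cost and variable names are assumed
placeholders, and it relies on /proc/meminfo exposing MemAvailable:

    # rough sketch only; PER_THREAD_KB is an assumed placeholder, not a measured cost
    NUM_THREADS=128
    PER_THREAD_KB=4096
    AVAIL_KB=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    NEED_KB=$((NUM_THREADS * PER_THREAD_KB))
    if [ "$AVAIL_KB" -lt "$NEED_KB" ]; then
        echo "warning: --num-threads $NUM_THREADS may need ~$((NEED_KB / 1024))MB," \
             "but only $((AVAIL_KB / 1024))MB is available;" \
             "consider enlarging the crashkernel= reservation" >&2
    fi

Something along these lines would at least tell the user why the dump is
likely to fail, instead of failing with no hint.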

>>
>> And I think it will also work if the reserved memory is set to 1G.
>
> Yes, makedumpfile can work well under 1GB reserved memory.
>
>>
>>> The cache should be dropped before testing; otherwise makedumpfile will
>>> fail to dump the vmcore:
>>> echo 3 > /proc/sys/vm/drop_caches
>>> Maybe there is some cleanup we can do to avoid this.
>>>
>>> Following is the result with different parameter for option
>>> --num-threads.
>>>
>>> makedumpfile -l --num-threads 128 --message-level 1 -d 31 /proc/vmcore a.128
>>> real    5m34.116s
>>> user    103m42.531s
>>> sys 86m12.586s
> [ snip ]
>>> makedumpfile -l --num-threads 0 --message-level 1 -d 31 /proc/vmcore a.0
>>> real    3m46.531s
>>> user    3m29.371s
>>> sys 0m16.909s
>>>
>>> makedumpfile.back -l --message-level 1 -d 31 /proc/vmcore a
>>> real    3m55.712s
>>> user    3m39.254s
>>> sys 0m16.287s
>>>
>>> Once the reserved memory is enlarged, makedumpfile works well with or
>>> without this increasing patch.
>>>
>>> But there is another issue I found during testing: makedumpfile may
>>> hang at about 24%. This issue also occurs with option
>>> --num-threads 64.
>>>
>>
>> Will it occur with patch v3?
>> If it does not occur, does that mean neither of the previous two increasing patches works?
>>
>> And did you test it with or without the increasing patch?
>
> Without this increasing patch, v4 works well.
>

Do you mean makedumpfile won't hang without the increasing patch?

-- 
Thanks
Zhou
>>
>>> makedumpfile -l --num-threads 128 --message-level 1 -d 31 /proc/vmcore a.128
>>> Excluding unnecessary pages        : [100.0 %] |
>>> Excluding unnecessary pages        : [100.0 %] /
>>> Excluding unnecessary pages        : [100.0 %] -
>>> Copying data                       : [ 11.2 %] |
>>> Copying data                       : [ 12.4 %] -
>>> Excluding unnecessary pages        : [100.0 %] \
>>> Excluding unnecessary pages        : [100.0 %] |
>>> Copying data                       : [ 23.6 %] -
>>> Copying data                       : [ 24.4 %] /
>>>
>>
>> Could you help me find which line of the code is running when it hangs?
>> makedumpfile may be stuck in a loop that it cannot exit because of some bug.
>
> This issue happens only occasionally. I will update you once I hit it again.
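
When you do hit it again, one quick way to see where each thread is stuck
(assuming gdb and pidof are available in the kdump environment) is to attach
gdb in batch mode and dump all thread backtraces:

    gdb -p "$(pidof makedumpfile)" -batch -ex 'thread apply all bt'

If gdb cannot be put into the kdump initramfs, the kernel-side stack of each
thread can still be read directly, which at least shows whether the threads
are blocked in the kernel or spinning in user space:

    cat /proc/"$(pidof makedumpfile)"/task/*/stack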
>
> Thanks
> Minfei
>
>







