[PATCH v4] Improve the performance of --num-threads -d 31

"Zhou, Wenjian/周文剑" zhouwj-fnst at cn.fujitsu.com
Sun Mar 27 18:23:23 PDT 2016


On 03/25/2016 10:57 AM, Atsushi Kumagai wrote:
> Hello,
>
> This is just a quick note to inform you.
> I measured the memory consumption with -d31 via VmHWM in
> /proc/PID/status and compared it between v3 and v4, since
> Minfei said the problem only occurs in v4.
>
>              |          VmHWM[kB]
> num-thread  |      v3            v4
> ------------+--------------------------
>       1      |    20,516        20,516
>       2      |    20,624        20,628
>       4      |    20,832        20,832
>       8      |    21,292        21,288
>      16      |    22,240        22,236
>      32      |    24,096        24,100
>      64      |    27,900        27,888
>
> According to this result, the problem we face doesn't seem to be
> simply a lack-of-memory issue.
>

Yes, I had realized that, since there isn't much difference between v3 and v4.
And it is hard to do any further investigation until we get Minfei's result.

BTW, can you reproduce the bug?

> BTW, since the memory consumption increases with num-threads,
> I think it should be taken into account in calculate_cyclic_buffer_size().
>

I will think about it.
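
For example, one rough idea (just a sketch -- the helper name, the per-thread
constant, and the 40% factor below are placeholders, not the real
calculate_cyclic_buffer_size() logic) is to reserve an estimated per-thread
overhead before sizing the cyclic buffer:

#include <stdio.h>

/* Assumed per-thread overhead, guessed from the VmHWM table above
 * (roughly 100-120 kB per additional thread). */
#define PER_THREAD_OVERHEAD_KB 120

/* Illustrative only: subtract the per-thread overhead first, then take
 * a fraction of what remains for the cyclic buffer. */
static unsigned long
cyclic_buffer_budget_kb(unsigned long free_kb, int num_threads)
{
	unsigned long overhead_kb = (unsigned long)num_threads * PER_THREAD_OVERHEAD_KB;

	if (overhead_kb >= free_kb)
		return 0;	/* nothing left for the cyclic buffer */

	return (free_kb - overhead_kb) * 4 / 10;	/* placeholder 40% factor */
}

int
main(void)
{
	printf("%lu kB\n", cyclic_buffer_budget_kb(512UL * 1024, 64));
	return 0;
}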

-- 
Thanks
Zhou

>
> Thanks,
> Atsushi Kumagai
>
> diff --git a/makedumpfile.c b/makedumpfile.c
> index 4075f3e..d5626f9 100644
> --- a/makedumpfile.c
> +++ b/makedumpfile.c
> @@ -44,6 +44,14 @@ extern int find_vmemmap();
>
>   char filename_stdout[] = FILENAME_STDOUT;
>
> +void
> +print_VmHWM(void)
> +{
> +       char command[64];
> +       sprintf(command, "grep VmHWM /proc/%d/status", getpid());
> +       system(command);
> +}
> +
>   /* Cache statistics */
>   static unsigned long long      cache_hit;
>   static unsigned long long      cache_miss;
> @@ -11185,5 +11193,7 @@ out:
>          }
>          free_elf_info();
>
> +       print_VmHWM();
> +
>          return retcd;
>   }
>
>
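
As an aside, not related to the bug itself: the VmHWM value could also be
read directly from /proc/self/status instead of spawning a shell. A minimal
sketch of such an alternative debug helper (not taken from makedumpfile):

#include <stdio.h>
#include <string.h>

/* Print the VmHWM line of the current process without system()/grep. */
static void
print_VmHWM(void)
{
	char line[128];
	FILE *fp = fopen("/proc/self/status", "r");

	if (!fp)
		return;

	while (fgets(line, sizeof(line), fp)) {
		if (strncmp(line, "VmHWM:", 6) == 0) {
			fputs(line, stdout);
			break;
		}
	}
	fclose(fp);
}

int
main(void)
{
	print_VmHWM();
	return 0;
}
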
>> Hi, Zhou.
>>
>> I'm on holiday now, you can ask other people to help test, if necessary.
>>
>> Thanks
>> Minfei
>>
>>> On Mar 24, 2016, at 12:29, Zhou, Wenjian/周文剑 <zhouwj-fnst at cn.fujitsu.com> wrote:
>>>
>>> Hello Minfei,
>>>
>>> How do these two patches work?
>>>
>>> --
>>> Thanks
>>> Zhou
>>>
>>>> On 03/18/2016 01:48 PM, "Zhou, Wenjian/周文剑" wrote:
>>>>> On 03/18/2016 12:16 PM, Minfei Huang wrote:
>>>>>> On 03/18/16 at 10:46am, "Zhou, Wenjian/周文剑" wrote:
>>>>>> Hello Minfei,
>>>>>>
>>>>>> Since I can't reproduce the bug, I reviewed the patch and wrote an incremental patch.
>>>>>> Though there are still some bugs in the incremental patch,
>>>>>> I wonder whether the previous bug still exists with this patch applied.
>>>>>> Could you help me confirm it?
>>>>>
>>>>> OK. I will help verify this incremental patch.
>>>>
>>>> Thank you very much.
>>>>
>>>>>>
>>>>>> And I have another question.
>>>>>> Did it only occur in patch v4?
>>>>>
>>>>> This issue doesn't exist in v3. I have pasted the test result with
>>>>> --num-threads 32 in that thread.
>>>>>
>>>>> applied makedumpfile with option -d 31 --num-threads 32
>>>>> real    3m3.533s
>>>>
>>>> Oh, then the patch in the previous mail may not work.
>>>>
>>>> I would appreciate it if you could also test the patch in this mail.
>>>>
>>>> I introduced a semaphore to fix the bug in v3,
>>>> so I want to know whether that is what affects the result.
>>>> The attached patch is based on v4 and removes the semaphore.
>>>>
>>>>
>>>>
>>>
>>>
>
>




