[PATCH] Makedumpfile: vmcore size estimate

Vivek Goyal vgoyal at redhat.com
Mon Jun 23 05:57:23 PDT 2014


On Fri, Jun 20, 2014 at 01:07:52AM +0000, Atsushi Kumagai wrote:
> Hello Baoquan,
> 
> >Forgot to mention that only x86-64 is handled in this patch.
> >
> >On 06/11/14 at 08:39pm, Baoquan He wrote:
> >> Users want to get a rough estimate of the vmcore size so they can decide
> >> how much storage space to reserve for vmcore dumping. This can help them
> >> deploy their machines better, possibly hundreds of machines.
> 
> You suggested this feature before, but I still don't agree with it.
> 
> No one can guarantee that the vmcore size will be below the estimated
> size every time. However, if makedumpfile provides "--vmcore-estimate",
> some users may trust it completely and disk overflow might happen.
> Ideally, users should prepare a disk which can store the maximum
> possible size of the vmcore. Of course they can reduce the disk size
> on their own responsibility, but makedumpfile can't support that as an
> official feature.

Hi Atsushi,

Recently quite a few people have asked us for this feature. They manage
lots of systems and have a local disk or partition attached for saving
dumps. Now say a system has a few terabytes of memory; dedicating a
partition of a few terabytes per machine just for saving dumps might
not be very practical.

I was given the example that AIX supports this kind of estimate too,
and in fact it looks like it logs a message if it finds that the
current dump partition will not be large enough to save the dump.

I think it is a good idea to try to solve this problem. We might not be
accurate, but it will be better than the user guessing by how much to
reduce the partition size.

I am wondering what the technical concerns are. IIUC, the biggest
problem is that the number of pages dumped will vary as the system
continues to run. So immediately after boot the number of pages to be
dumped might be small, but as more applications are launched it will
most likely increase.
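Just to make the idea concrete, here is a rough user-space sketch of
one way such an estimate could look, counting via /proc/kpageflags the
pages that a dump level of 31 would exclude (free, cache, and user
pages). This is only an assumption about what an estimator might do,
not makedumpfile's actual method; it also assumes a 4 KB page size and
needs root:

#include <stdio.h>
#include <stdint.h>

/* flag bits from include/uapi/linux/kernel-page-flags.h */
#define KPF_LRU     5   /* page cache */
#define KPF_BUDDY  10   /* free page in the buddy allocator */
#define KPF_ANON   12   /* anonymous user memory */
#define KPF_NOPAGE 20   /* address hole, no memmap entry */

int main(void)
{
	uint64_t flags, total = 0, dumped = 0;
	FILE *f = fopen("/proc/kpageflags", "rb");

	if (!f) {
		perror("/proc/kpageflags");
		return 1;
	}
	/* one 64-bit flags word per page frame */
	while (fread(&flags, sizeof(flags), 1, f) == 1) {
		if (flags & (1ULL << KPF_NOPAGE))
			continue;		/* not backed by RAM */
		total++;
		if (!(flags & ((1ULL << KPF_BUDDY) |
			       (1ULL << KPF_LRU)  |
			       (1ULL << KPF_ANON))))
			dumped++;		/* would be written out */
	}
	fclose(f);

	printf("would dump %llu of %llu pages, ~%llu MB\n",
	       (unsigned long long)dumped,
	       (unsigned long long)total,
	       (unsigned long long)(dumped * 4096 >> 20));
	return 0;
}

It is crude, but it shows exactly why the estimate is a moving target:
the BUDDY/LRU/ANON mix changes constantly as the workload changes.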

We can try to mitigate the above problem by creating a new service
which runs at a configured interval and compares the size of memory
required for the dump against the size of the configured dump
partition. The user can either disable this service or configure it to
run every hour, every day, every week, or at whatever interval they
like.
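Something along these lines, say (a minimal sketch; the hourly
interval, the /var/crash path, and using MemTotal as a stand-in
worst-case estimator are all assumptions, not a proposed interface):

#include <stdio.h>
#include <unistd.h>
#include <syslog.h>
#include <sys/statvfs.h>

/* crude stand-in estimator: worst case is all of RAM (MemTotal) */
static unsigned long long estimate_dump_bytes(void)
{
	unsigned long long kb = 0;
	FILE *f = fopen("/proc/meminfo", "r");

	if (f) {
		fscanf(f, "MemTotal: %llu kB", &kb);
		fclose(f);
	}
	return kb * 1024;
}

int main(void)
{
	const char *dump_path = "/var/crash";	/* assumed dump target */
	unsigned int interval = 3600;		/* assumed: check hourly */
	struct statvfs sv;

	openlog("kdump-estimate", LOG_PID, LOG_DAEMON);
	for (;;) {
		if (statvfs(dump_path, &sv) == 0) {
			unsigned long long avail =
				(unsigned long long)sv.f_bavail * sv.f_frsize;
			unsigned long long need = estimate_dump_bytes();

			if (need > avail)
				syslog(LOG_WARNING,
				       "dump target %s may be too small: "
				       "need ~%llu MB, %llu MB free",
				       dump_path, need >> 20, avail >> 20);
		}
		sleep(interval);
	}
}

The same check could just as well be driven by a cron job or a systemd
timer instead of a daemon loop; the point is only that the comparison
gets redone periodically as memory usage drifts.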

So as long as we can come up with a tool which can guess the number of
pages to be dumped fairly accurately, we should have a reasonably good
system. It will at least be much better than the user guessing the size
of the dump partition.

Thanks
Vivek


