makedumpfile tool to estimate vmcore file size
Baoquan
bhe at redhat.com
Wed Jul 17 03:58:30 EDT 2013
Hi Atsushi,
Our customer wants us to provide a tool to estimate the required dump
file size based on the current system memory footprint. The following is
the detailed requirement as I understand it; what's your opinion?
At the customer's site there are thousands of machines, and they don't
want to budget for significant increases in storage unless it is really
necessary. This becomes particularly expensive with large-memory (1 TB+)
systems booting off SAN disk.
The customer would like to achieve this with something like the example below:
##########################################################
#makedumpfile -d31 -c/-l/-p
TYPE                    PAGES    INCLUDED
Zero Page               x        no
Cache Without Private   x        no
Cache With Private      x        no
User Data               x        no
Free Page               x        no
Kernel Code             x        yes
Kernel Data             x        yes
Total Pages on system: 311000 (Just for example)
Total Pages included in kdump: 160000 (Just for example)
Estimated vmcore file size: 48000 (30% compression ratio)
##########################################################
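
To make the idea concrete, here is a minimal sketch (not actual
makedumpfile code) of how the table rows above map onto the existing
dump-level bits. The bit values follow the makedumpfile(8) man page;
the macro and function names are only illustrative:

/* Sketch: print which page types a given dump level would include,
 * mirroring the INCLUDED column of the table above. Bit values follow
 * the makedumpfile(8) man page; everything else is illustrative. */
#include <stdio.h>

#define DL_EXCLUDE_ZERO      0x001  /* Zero Page */
#define DL_EXCLUDE_CACHE     0x002  /* Cache Without Private */
#define DL_EXCLUDE_CACHE_PRI 0x004  /* Cache With Private */
#define DL_EXCLUDE_USER_DATA 0x008  /* User Data */
#define DL_EXCLUDE_FREE      0x010  /* Free Page */

static void print_page_type(const char *type, int excluded)
{
	printf("%-24s %s\n", type, excluded ? "no" : "yes");
}

int main(void)
{
	int dump_level = 31;	/* e.g. from -d 31 */

	printf("%-24s %s\n", "TYPE", "INCLUDED");
	print_page_type("Zero Page",             dump_level & DL_EXCLUDE_ZERO);
	print_page_type("Cache Without Private", dump_level & DL_EXCLUDE_CACHE);
	print_page_type("Cache With Private",    dump_level & DL_EXCLUDE_CACHE_PRI);
	print_page_type("User Data",             dump_level & DL_EXCLUDE_USER_DATA);
	print_page_type("Free Page",             dump_level & DL_EXCLUDE_FREE);
	print_page_type("Kernel Code",           0);  /* always included */
	print_page_type("Kernel Data",           0);  /* always included */
	return 0;
}
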
From the configured dump level, the total number of pages included in
kdump can be computed. Then, with the option that specifies a
compression algorithm, an estimated vmcore file size can be given.
Though the estimate changes dynamically over time, it still gives the
user a valuable reference.
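
The estimate itself is simple arithmetic; a minimal sketch is below.
The page counts and the 30% ratio are the example figures from above,
not measured values; a real implementation would pick a ratio per
compression option (-c/-l/-p) or sample actual pages:

/* Sketch of the size estimate: included pages times page size,
 * scaled by an assumed compression ratio. */
#include <stdio.h>

int main(void)
{
	unsigned long long total_pages    = 311000ULL;  /* example value */
	unsigned long long included_pages = 160000ULL;  /* after applying -d 31 */
	unsigned long long page_size      = 4096ULL;    /* assume 4 KiB pages */
	double compression_ratio          = 0.30;       /* assumed for -c/-l/-p */

	unsigned long long estimate =
		(unsigned long long)(included_pages * page_size * compression_ratio);

	printf("Total pages on system:         %llu\n", total_pages);
	printf("Total pages included in kdump: %llu\n", included_pages);
	printf("Estimated vmcore size:         %llu bytes\n", estimate);
	return 0;
}
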
Thanks a lot
Baoquan