dynamic oldmem in kdump kernel
Américo Wang
xiyou.wangcong at gmail.com
Thu Apr 7 06:23:07 EDT 2011
On Thu, Apr 7, 2011 at 5:56 PM, Olaf Hering <olaf at aepfle.de> wrote:
>
> I recently implemented kdump for pv-on-hvm Xen guests.
>
> One issue remains:
> The xen_balloon driver in the guest frees guest pages and gives them
> back to the hypervisor. These pages are marked as mmio in the
> hypervisor. During a read of such a page via the /proc/vmcore interface,
> the hypervisor calls out to the qemu-dm process. qemu-dm tries to map
> the page; this attempt fails because the page is not backed by RAM, and
> 0xff is returned. All this generates high load in dom0 because the reads
> come in as 8-byte requests.
>
> There seems to be no way to make the crash kernel aware of the state of
> individual pages in the crashed kernel; it is not aware of memory
> ballooning. And doing that from within the "kernel to crash" seems
> error-prone. Since the fragmentation will increase over time, it would
> be best if the crash kernel itself queried the state of oldmem pages.
>
> If copy_oldmem_page() called a hook provided by the Xen pv-on-hvm
> drivers to query whether the pfn to read from is really backed by RAM,
> the load issue could be avoided. Unfortunately, Xen itself also needs a
> new interface to query the state of individual HVM guest pfns for the
> purpose mentioned above.
This makes sense to me; we might need a Xen-specific copy_oldmem_page()
hook and a native hook.
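
Something along these lines, perhaps (a rough sketch only; all the names
and the placement below are made up for illustration, nothing like this
exists in the tree today):

    /*
     * Generic oldmem code (e.g. near copy_oldmem_page()): let a driver
     * register a callback that says whether a pfn of the crashed kernel
     * is still backed by RAM.  Names are illustrative.
     */
    static int (*oldmem_pfn_is_ram)(unsigned long pfn);

    int register_oldmem_pfn_is_ram(int (*fn)(unsigned long pfn))
    {
            if (oldmem_pfn_is_ram)
                    return -EBUSY;
            oldmem_pfn_is_ram = fn;
            return 0;
    }
    EXPORT_SYMBOL_GPL(register_oldmem_pfn_is_ram);

    static int pfn_is_ram(unsigned long pfn)
    {
            /* No hook registered: assume plain RAM, current behaviour. */
            return oldmem_pfn_is_ram ? oldmem_pfn_is_ram(pfn) : 1;
    }

    /* In copy_oldmem_page(), before mapping and copying the old page: */
            if (!pfn_is_ram(pfn)) {
                    /*
                     * Ballooned-out (mmio) page: don't make qemu-dm try
                     * to map it, just hand back zeroes.
                     */
                    if (userbuf) {
                            if (clear_user(buf, csize))
                                    return -EFAULT;
                    } else
                            memset(buf, 0, csize);
                    return csize;
            }

The Xen pv-on-hvm driver would then register its own callback at init
time, which in turn uses the new Xen interface you mention to ask the
hypervisor whether the given hvm pfn is still populated.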
>
> Another issue, slightly related, is memory hotplug.
> How is this currently handled for kdump? Is there code which
> automatically reconfigures the kdump kernel with the new memory ranges?
>
No, the crashkernel memory is reserved during boot, and it is static after
that (except that you can shrink it via /sys). Kdump isn't aware of
memory hotplug.
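
(For reference, if I remember correctly the /sys knob is
/sys/kernel/kexec_crash_size: writing a smaller size shrinks the
reservation, and e.g. "echo 0 > /sys/kernel/kexec_crash_size" releases it
entirely. There is no interface that grows it or adapts it to hotplugged
memory.)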
Thanks.