[RFC][nvdimm][crash] pmem memmap dump support
lizhijian at fujitsu.com
lizhijian at fujitsu.com
Fri Mar 17 00:30:25 PDT 2023
On 17/03/2023 14:12, Dan Williams wrote:
> lizhijian at fujitsu.com wrote:
> [..]
>> Case D: unsupported && need your input
>> To support this situation, the
>> makedumpfile needs to know the location of metadata for each pmem
>> namespace and the address and size of metadata in the pmem [start,
>> end)
>
> My first reaction is that you should copy what the ndctl utility does
> when it needs to manipulate or interrogate the metadata space.
>
> For example, see namespace_rw_infoblock():
>
> https://github.com/pmem/ndctl/blob/main/ndctl/namespace.c#L2022
>
> That facility uses the force_raw attribute
> ("/sys/bus/nd/devices/namespaceX.Y/force_raw") to arrange for the
> namespace to initialize without considering any pre-existing metadata
> *and* without overwriting it. In that mode makedumpfile can walk the
> namespaces and retrieve the metadata written by the previous kernel.
The dumping application (makedumpfile or cp) will/should read /proc/vmcore to construct the dump file,
so makedumpfile needs to know the *address* and *size/end* of the metadata in the first kernel's address space.
I don't know much about namespace_rw_infoblock() yet, so it is also an option if we can get such information from it.
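That said, if I understand the force_raw approach correctly, a dump tool could do something roughly like the
sketch below: set force_raw via sysfs and then read the info block from the raw device. The namespace/device
names are only examples, the 4K info-block offset and the NVDIMM_PFN_INFO/NVDIMM_DAX_INFO signatures are my
assumptions based on the kernel's struct nd_pfn_sb, and the namespace disable/re-enable needed for force_raw
to take effect is omitted:

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Assumption: the pfn/dax info block sits 4K into the namespace (struct nd_pfn_sb). */
#define INFOBLOCK_OFFSET 4096

/* Ask the nvdimm core to attach the namespace in raw mode on the next probe. */
static int set_force_raw(const char *ns)        /* e.g. "namespace0.0" */
{
        char path[128];
        int fd, rc;

        snprintf(path, sizeof(path), "/sys/bus/nd/devices/%s/force_raw", ns);
        fd = open(path, O_WRONLY);
        if (fd < 0)
                return -1;
        rc = (write(fd, "1", 1) == 1) ? 0 : -1;
        close(fd);
        return rc;
}

/* Read the info block left by the previous kernel from the raw pmem device. */
static int dump_infoblock(const char *rawdev)   /* e.g. "/dev/pmem0" */
{
        uint8_t sb[4096];
        int fd = open(rawdev, O_RDONLY);

        if (fd < 0)
                return -1;
        if (pread(fd, sb, sizeof(sb), INFOBLOCK_OFFSET) != (ssize_t)sizeof(sb)) {
                close(fd);
                return -1;
        }
        close(fd);

        if (!memcmp(sb, "NVDIMM_PFN_INFO", 15) || !memcmp(sb, "NVDIMM_DAX_INFO", 15))
                printf("%s: info block signature: %.15s\n", rawdev, (const char *)sb);
        else
                printf("%s: no pfn/dax info block found\n", rawdev);
        return 0;
}

int main(void)
{
        if (set_force_raw("namespace0.0"))
                return 1;
        /* ... disable + re-enable the namespace here so raw mode takes effect ... */
        return dump_infoblock("/dev/pmem0") ? 1 : 0;
}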
My current WIP proposal is to export a list linking all pmem namespaces to vmcore; with this, the kdump kernel doesn't need to
rely on the pmem driver.
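To make that a bit more concrete, the exported list could look roughly like the sketch below (for example as an
ELF note attached to /proc/vmcore). All names and fields here are hypothetical WIP illustrations, not an agreed
ABI; the point is only that makedumpfile gets the physical address and size of the metadata for every pmem
namespace without needing the pmem driver in the kdump kernel.

#include <linux/types.h>

/* One entry per pmem namespace, filled in by the first (panicked) kernel. */
struct pmem_meta_record {
        __u64 ns_start;         /* physical start of the namespace */
        __u64 ns_size;          /* size of the namespace */
        __u64 meta_start;       /* physical address of the memmap/metadata */
        __u64 meta_size;        /* size of the metadata region */
};

/* The list makedumpfile would parse out of the vmcore note. */
struct pmem_meta_note {
        __u32 count;                        /* number of records that follow */
        struct pmem_meta_record records[];
};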
Thanks
Zhijian
>
> The module to block, in order to allow makedumpfile to access the namespace
> in raw mode, is the nd_pmem module, or, if it is builtin, the
> nd_pmem_driver_init() initcall.
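For concreteness, I guess that could be arranged by leaving nd_pmem out of (or blacklisting it in) the kdump
initramfs for the modular case, or with the initcall_blacklist=nd_pmem_driver_init kernel parameter for the
builtin case; just noting how I read the suggestion.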