[PATCH v5 3/5] vmcore: Introduce remap_oldmem_pfn_range()
HATAYAMA Daisuke
d.hatayama at jp.fujitsu.com
Wed Jun 12 21:32:48 EDT 2013
(2013/06/12 18:13), Michael Holzheu wrote:
> On Tue, 11 Jun 2013 21:42:15 +0900
> HATAYAMA Daisuke <d.hatayama at gmail.com> wrote:
>
>> 2013/6/11 Michael Holzheu <holzheu at linux.vnet.ibm.com>:
>>> On Mon, 10 Jun 2013 22:40:24 +0900
>>> HATAYAMA Daisuke <d.hatayama at gmail.com> wrote:
>>>
>>>> 2013/6/8 Michael Holzheu <holzheu at linux.vnet.ibm.com>:
<cut>
> static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> {
> 	struct address_space *mapping = vma->vm_private_data;
> 	pgoff_t index = vmf->pgoff;
> 	struct page *page;
> 	loff_t src;
> 	char *buf;
> 
> 	page = find_or_create_page(mapping, index, GFP_KERNEL);
> 	if (!page)
> 		return VM_FAULT_OOM;
> 	if (!PageUptodate(page)) {
> 		src = index << PAGE_CACHE_SHIFT;
> 		buf = __va(page_to_pfn(page) << PAGE_SHIFT);
> 		if (__read_vmcore(buf, PAGE_SIZE, &src, 0) < 0) {
> 			unlock_page(page);
> 			return VM_FAULT_SIGBUS;
> 		}
> 		SetPageUptodate(page);
> 	}
> 	unlock_page(page);
> 	vmf->page = page;
> 	return 0;
> }
>
> Perhaps one open issue remains:
>
> Can we remove the page from the page cache if __read_vmcore() fails?
>
Yes, use page_cache_release() after unlocking the page, like:

		if (__read_vmcore(buf, PAGE_SIZE, &src, 0) < 0) {
			unlock_page(page);
+			page_cache_release(page);
			return VM_FAULT_SIGBUS;
		}
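
To make the resulting error path explicit, here is a minimal sketch of the
whole !PageUptodate branch with that release folded in (assuming the rest of
the handler stays as quoted above):

	if (!PageUptodate(page)) {
		src = index << PAGE_CACHE_SHIFT;
		buf = __va(page_to_pfn(page) << PAGE_SHIFT);
		if (__read_vmcore(buf, PAGE_SIZE, &src, 0) < 0) {
			/*
			 * Drop the page lock and the reference taken by
			 * find_or_create_page() so that a failed read does
			 * not leave a stale, not-uptodate page in the
			 * page cache.
			 */
			unlock_page(page);
			page_cache_release(page);
			return VM_FAULT_SIGBUS;
		}
		SetPageUptodate(page);
	}
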
BTW, you now keep file->f_mapping in vma->vm_private_data, but the vma already has the file object in its vma->vm_file member, so you can get the mapping via vma->vm_file->f_mapping without needing vma->vm_private_data.
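
A rough sketch of what I mean (only the first line of the handler changes;
everything else stays as above):

	static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
	{
		/* vma->vm_file is set up by the mmap path, so the mapping can
		 * be taken from the file object instead of vm_private_data. */
		struct address_space *mapping = vma->vm_file->f_mapping;
		pgoff_t index = vmf->pgoff;
		...

The assignment of file->f_mapping to vma->vm_private_data in mmap_vmcore() can
then be dropped.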
--
Thanks.
HATAYAMA, Daisuke