[PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()

Tong Tiangen tongtiangen at huawei.com
Mon Aug 19 20:02:05 PDT 2024



On 2024/8/19 19:56, Jonathan Cameron wrote:
> On Tue, 28 May 2024 16:59:13 +0800
> Tong Tiangen <tongtiangen at huawei.com> wrote:
> 
>> The kernel already supports a number of scenarios that can tolerate
>> memory errors while copying a page [1~5], all of which are implemented
>> via copy_mc_[user]_highpage(). arm64 should support this mechanism as
>> well.
>>
>> Due to MTE, arm64 needs its own architecture implementation of
>> copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
>> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE are added to control this.
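For reference, the arm64 side then carries declarations along these
lines (a sketch only -- the exact copy_mc_page() prototype is my
assumption, the rest matches the hunks below):

/* arch/arm64/include/asm/page.h (sketch, not the literal hunk) */
#ifdef CONFIG_ARCH_HAS_COPY_MC
int copy_mc_page(void *to, void *from);	/* prototype assumed */

int copy_mc_highpage(struct page *to, struct page *from);
#define __HAVE_ARCH_COPY_MC_HIGHPAGE

int copy_mc_user_highpage(struct page *to, struct page *from,
			unsigned long vaddr, struct vm_area_struct *vma);
#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
#endif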
>>
>> Add a new helper, copy_mc_page(), which provides a page copy
>> implementation that is safe against hardware memory errors. Its code
>> logic is the same as copy_page(); the main difference is that each ldp
>> instruction in copy_mc_page() carries the fixup type
>> EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE. The shared body is therefore
>> extracted into copy_page_template.S.
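To expand on the template split for the thread, the two entry points
end up roughly like the sketch below. The CPY_MC() wrapper name is
illustrative rather than necessarily the exact macro in the patch;
KERNEL_ME_SAFE() is the fixup helper this series already uses (see the
mte.S hunk below):

/* copy_page.S (sketch): plain loads, fixup label ignored */
#define CPY_MC(l, x...)	x
#include "copy_page_template.S"
	ret

/* copy_mc_page.S (sketch): each ldp gets an
 * EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE extable entry, so a hardware
 * memory error consumed by the load branches to the -EFAULT exit
 * instead of bringing the kernel down. */
#define CPY_MC(l, x...)	KERNEL_ME_SAFE(l, x)
#include "copy_page_template.S"
	mov	x0, #0
	ret
9998:	mov	x0, #-EFAULT
	ret

/* inside copy_page_template.S the loads are then written as e.g. */
	CPY_MC(9998f, ldp	x2, x3, [x1])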
>>
>> [1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
>> [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
>> [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
>> [4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
>> [5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
>>
>> Signed-off-by: Tong Tiangen <tongtiangen at huawei.com>
> Trivial stuff inline.
> 
> Jonathan

I'm sorry, I may not have understood what you meant. Where would be a
better place to put this inline? :)

Thanks,
Tong.

> 
> 
>> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
>> index 5018ac03b6bf..50ef24318281 100644
>> --- a/arch/arm64/lib/mte.S
>> +++ b/arch/arm64/lib/mte.S
>> @@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
>>   	ret
>>   SYM_FUNC_END(mte_copy_page_tags)
>>   
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +/*
>> + * Copy the tags from the source page to the destination one wiht machine check safe
> Spell check.
> with
> Also, maybe reword given machine check doesn't make sense on arm64.

OK, I'll fix the spelling and reword this in terms of a hardware memory
error in the next version.

> 
> 
>> + *   x0 - address of the destination page
>> + *   x1 - address of the source page
>> + * Returns:
>> + *   x0 - Return 0 if copy success, or
>> + *        -EFAULT if anything goes wrong while copying.
>> + */
>> +SYM_FUNC_START(mte_copy_mc_page_tags)
>> +	mov	x2, x0
>> +	mov	x3, x1
>> +	multitag_transfer_size x5, x6
>> +1:
>> +KERNEL_ME_SAFE(2f, ldgm	x4, [x3])
>> +	stgm	x4, [x2]
>> +	add	x2, x2, x5
>> +	add	x3, x3, x5
>> +	tst	x2, #(PAGE_SIZE - 1)
>> +	b.ne	1b
>> +
>> +	mov x0, #0
>> +	ret
>> +
>> +2:	mov x0, #-EFAULT
>> +	ret
>> +SYM_FUNC_END(mte_copy_mc_page_tags)
>> +#endif
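For anyone reading along: on the C side the tag copy pairs with the
data copy roughly like this (a sketch of the shape, not the literal
copypage.c hunk; the exact MTE guards follow what copy_highpage()
already does):

int copy_mc_highpage(struct page *to, struct page *from)
{
	void *kto = page_address(to);
	void *kfrom = page_address(from);
	int ret;

	/* data: ldp fixed up via EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE */
	ret = copy_mc_page(kto, kfrom);
	if (ret)
		return -EFAULT;

	if (system_supports_mte() && page_mte_tagged(from)) {
		/* tags: ldgm fixed up the same way, see above */
		ret = mte_copy_mc_page_tags(kto, kfrom);
		if (ret)
			return -EFAULT;
		set_page_mte_tagged(to);
	}

	return 0;
}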
>> +
>>   /*
>>    * Read tags from a user buffer (one tag per byte) and set the corresponding
>>    * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
>> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
>> index a7bb20055ce0..ff0d9ceea2a4 100644
>> --- a/arch/arm64/mm/copypage.c
>> +++ b/arch/arm64/mm/copypage.c
>> @@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
> 
>> +
>> +int copy_mc_user_highpage(struct page *to, struct page *from,
>> +			unsigned long vaddr, struct vm_area_struct *vma)
>> +{
>> +	int ret;
>> +
>> +	ret = copy_mc_highpage(to, from);
>> +	if (!ret)
>> +		flush_dcache_page(to);
> Personally I'd always keep the error out of line as it tends to be
> more readable when reviewing a lot of code.
> 	if (ret)
> 		return ret;
> 
> 	flush_dcache_page(to);
> 
> 	return 0;

This is more reasonable, and returning the error as soon as it occurs
is indeed more readable. Will change it.
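So the function will become something like:

int copy_mc_user_highpage(struct page *to, struct page *from,
			unsigned long vaddr, struct vm_area_struct *vma)
{
	int ret;

	ret = copy_mc_highpage(to, from);
	if (ret)
		return ret;

	flush_dcache_page(to);

	return 0;
}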

Thanks,
Tong.

>> +
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
>> +#endif
> 


