[PATCH V2] arm64/mm: Intercept pfn changes in set_pte_at()

Muchun Song muchun.song at linux.dev
Thu Feb 2 01:51:39 PST 2023



> On Feb 1, 2023, at 20:20, Catalin Marinas <catalin.marinas at arm.com> wrote:
> 
> On Tue, Jan 31, 2023 at 03:49:51PM +0000, Will Deacon wrote:
>> On Fri, Jan 27, 2023 at 12:43:17PM +0000, Robin Murphy wrote:
>>> On 2023-01-26 13:33, Will Deacon wrote:
>>>> On Tue, Jan 24, 2023 at 11:11:49AM +0530, Anshuman Khandual wrote:
>>>>> On 1/9/23 10:58, Anshuman Khandual wrote:
>>>>>> Changing the pfn of a mapped user page table entry without first going through
>>>>>> the break-before-make (BBM) procedure is unsafe. This updates set_pte_at() to
>>>>>> intercept such changes via an updated pgattr_change_is_safe(). This new check
>>>>>> happens in __check_racy_pte_update(), which has now been renamed to
>>>>>> __check_safe_pte_update().
>>>>>> 
>>>>>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>>>>>> Cc: Will Deacon <will at kernel.org>
>>>>>> Cc: Mark Rutland <mark.rutland at arm.com>
>>>>>> Cc: Andrew Morton <akpm at linux-foundation.org>
>>>>>> Cc: linux-arm-kernel at lists.infradead.org
>>>>>> Cc: linux-kernel at vger.kernel.org
>>>>>> Signed-off-by: Anshuman Khandual <anshuman.khandual at arm.com>
>>>>>> ---
>>>>>> This applies on v6.2-rc3. This patch had some test time on an internal CI
>>>>>> system without any issues being reported.
>>>>> 
>>>>> Gentle ping, any updates on this patch? Still any concerns?
>>>> 
>>>> I don't think we really got to the bottom of Mark's concerns with
>>>> unreachable ptes on the stack, did we? I also have vague recollections
>>>> of somebody (Robin?) running into issues with the vmap code not honouring
>>>> BBM.
>>> 
>>> Doesn't ring a bell, so either it wasn't me, or it was many years ago and
>>> about 5 levels deep into trying to fix something else :/
>> 
>> Bah, sorry! Catalin reckons it may have been him talking about the vmemmap.
> 
> Indeed. The discussion with Anshuman started from this thread:
> 
> https://lore.kernel.org/all/20221025014215.3466904-1-mawupeng1@huawei.com/
> 
> We already trip over the existing checks even without Anshuman's patch,
> though only by chance. We are not setting the software PTE_DIRTY on the
> new pte (we don't bother with this bit for kernel mappings).
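
For reference, if I understand the report correctly, the existing check that
fires is the dirty-state warning in __check_racy_pte_update(). From memory it
is roughly the fragment below: a writable old vmemmap pte replaced by a new
pte that lacks the software PTE_DIRTY bit looks like a racy dirty-state
clear, even though the real issue is the pfn change.

	/* simplified excerpt, quoted from memory */
	VM_WARN_ONCE(pte_write(old_pte) && !pte_dirty(pte),
		     "%s: racy dirty state clearing: 0x%016llx -> 0x%016llx",
		     __func__, pte_val(old_pte), pte_val(pte));
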
> 
> Given that the vmemmap ptes are still live when such a change happens and
> no one has come up with a solution to the break-before-make problem, I propose
> we revert the arm64 part of commit 47010c040dec ("mm: hugetlb_vmemmap:
> cleanup CONFIG_HUGETLB_PAGE_FREE_VMEMMAP*"). We just need this hunk:
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 27b2592698b0..5263454a5794 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -100,7 +100,6 @@ config ARM64
> 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> 	select ARCH_WANT_FRAME_POINTERS
> 	select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
> -	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP

Maybe that is a little overkill. HVO can significantly reduce the vmemmap
overhead on ARM64 servers for some workloads (like qemu and DPDK), so I don't
think disabling it is a good approach. HVO does break BBM, but the warning
does not indicate a real problem here, since the tail vmemmap pages are
supposed to be read-only. So I suggest skipping the warning when set_pte_at()
is called on a vmemmap address (see the rough sketch below). What do you think?
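
A completely untested sketch, just to show the idea. It assumes the renamed
helper from Anshuman's patch is passed the addr from set_pte_at(), and that
the vmemmap range can be tested against the arm64 VMEMMAP_START/VMEMMAP_END
layout macros; addr_is_vmemmap() is just a name made up for illustration:

static inline bool addr_is_vmemmap(unsigned long addr)
{
	return addr >= VMEMMAP_START && addr < VMEMMAP_END;
}

static inline void __check_safe_pte_update(struct mm_struct *mm,
					   unsigned long addr,
					   pte_t *ptep, pte_t pte)
{
	pte_t old_pte;

	if (!IS_ENABLED(CONFIG_DEBUG_VM))
		return;

	/*
	 * HVO remaps live (read-only) tail vmemmap pages to the head
	 * page without BBM, so a pfn change is expected here.
	 */
	if (addr_is_vmemmap(addr))
		return;

	old_pte = READ_ONCE(*ptep);
	if (!pte_valid(old_pte) || !pte_valid(pte))
		return;

	VM_WARN_ONCE(!pgattr_change_is_safe(pte_val(old_pte), pte_val(pte)),
		     "%s: unsafe pte update: 0x%016llx -> 0x%016llx",
		     __func__, pte_val(old_pte), pte_val(pte));
}

set_pte_at() would then simply pass its addr argument through.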

Thanks,
Muchun

> 	select ARCH_WANT_LD_ORPHAN_WARN
> 	select ARCH_WANTS_NO_INSTR
> 	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
> 
> -- 
> Catalin