[PATCH v12 3/8] arm64: mte: Sync tags for pages where PTE is untagged

Marc Zyngier maz at kernel.org
Mon May 17 09:14:46 PDT 2021


On Mon, 17 May 2021 13:32:34 +0100,
Steven Price <steven.price at arm.com> wrote:
> 
> A KVM guest could store tags in a page even if the VMM hasn't mapped
> the page with PROT_MTE. So when restoring pages from swap we will
> need to check to see if there are any saved tags even if !pte_tagged().
> 
> However, don't check pages for which pte_access_permitted() returns
> false, as these will not have been swapped out.
> 
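[For readability, here is the condition the first hunk below ends up
with, written as a standalone predicate. This is a paraphrase, not code
from the patch: the helpers are the real ones the diff uses, but the
wrapper function itself is made up.]

	static bool should_sync_mte_tags(pte_t pte)
	{
		if (!system_supports_mte())
			return false;
		if (!pte_present(pte) || pte_special(pte))
			return false;
		/*
		 * False for exec-only mappings: user space can neither
		 * read nor write such a page, so it never sees its tags.
		 */
		return pte_access_permitted(pte, false);
	}
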
> Signed-off-by: Steven Price <steven.price at arm.com>
> ---
>  arch/arm64/include/asm/pgtable.h |  9 +++++++--
>  arch/arm64/kernel/mte.c          | 16 ++++++++++++++--
>  2 files changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 0b10204e72fc..275178a810c1 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -314,8 +314,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>  	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
>  		__sync_icache_dcache(pte);
>  
> -	if (system_supports_mte() &&
> -	    pte_present(pte) && pte_tagged(pte) && !pte_special(pte))
> +	/*
> +	 * If the PTE would provide user space access to the tags associated
> +	 * with it then ensure that the MTE tags are synchronised.  Exec-only
> +	 * mappings don't expose tags (instruction fetches don't check tags).

I'm not sure I understand this comment. Of course, instruction fetches
don't check tags. But the memory could still have tags associated with
it. Does this mean such a page would lose its tags if it is swapped
out?
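
Concretely, the sequence I have in mind (illustrative and untested;
page_size stands for the system page size, and the task is assumed to
have enabled MTE via prctl() beforehand):

	#include <sys/mman.h>
	#include <asm/mman.h>	/* PROT_MTE (arm64) */

	char *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* ...store tags into the page with STG while it is writable... */
	mprotect(p, page_size, PROT_EXEC);	/* mapping is now exec-only */

If that page is swapped out and later faulted back in,
pte_access_permitted() is false for the exec-only PTE, so set_pte_at()
skips mte_sync_tags() entirely.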

Thanks,

	M.

> +	 */
> +	if (system_supports_mte() && pte_present(pte) &&
> +	    pte_access_permitted(pte, false) && !pte_special(pte))
>  		mte_sync_tags(ptep, pte);
>  
>  	__check_racy_pte_update(mm, ptep, pte);
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index c88e778c2fa9..a604818c52c1 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -33,11 +33,15 @@ DEFINE_STATIC_KEY_FALSE(mte_async_mode);
>  EXPORT_SYMBOL_GPL(mte_async_mode);
>  #endif
>  
> -static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
> +static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap,
> +			       bool pte_is_tagged)
>  {
>  	unsigned long flags;
>  	pte_t old_pte = READ_ONCE(*ptep);
>  
> +	if (!is_swap_pte(old_pte) && !pte_is_tagged)
> +		return;
> +
>  	spin_lock_irqsave(&tag_sync_lock, flags);
>  
>  	/* Recheck with the lock held */
> @@ -53,6 +57,9 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
>  		}
>  	}
>  
> +	if (!pte_is_tagged)
> +		goto out;
> +
>  	page_kasan_tag_reset(page);
>  	/*
>  	 * We need smp_wmb() in between setting the flags and clearing the
> @@ -76,10 +83,15 @@ void mte_sync_tags(pte_t *ptep, pte_t pte)
>  	bool check_swap = nr_pages == 1;
>  	bool pte_is_tagged = pte_tagged(pte);
>  
> +	/* Early out if there's nothing to do */
> +	if (!check_swap && !pte_is_tagged)
> +		return;
> +
>  	/* if PG_mte_tagged is set, tags have already been initialised */
>  	for (i = 0; i < nr_pages; i++, page++) {
>  		if (!test_bit(PG_mte_tagged, &page->flags))
> -			mte_sync_page_tags(page, ptep, check_swap);
> +			mte_sync_page_tags(page, ptep, check_swap,
> +					   pte_is_tagged);
>  	}
>  }
>  
> -- 
> 2.20.1
> 
> 
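For anyone reading without the tree handy: the swap-restore path that
check_swap guards sits in the part of mte_sync_page_tags() elided from
the hunk above. Roughly (from memory, so treat this as a sketch rather
than the exact code):

	if (check_swap && is_swap_pte(old_pte)) {
		swp_entry_t entry = pte_to_swp_entry(old_pte);

		/* tags saved at swap-out time are copied back here */
		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
			goto out;
	}

The saved tags only come back when this path runs, which is why the
gating condition in set_pte_at() matters for the exec-only question
above.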

-- 
Without deviation from the norm, progress is not possible.
