[PATCH v12 1/8] arm64: mte: Handle race when synchronising tags

Marc Zyngier maz at kernel.org
Mon May 17 07:03:59 PDT 2021


Hi Steven,

On Mon, 17 May 2021 13:32:32 +0100,
Steven Price <steven.price at arm.com> wrote:
> 
> mte_sync_tags() used test_and_set_bit() to set the PG_mte_tagged flag
> before restoring/zeroing the MTE tags. However, if another thread raced
> and attempted to sync the tags on the same page before the first thread
> had completed the restore/zero, it would see that the flag was already
> set and continue without waiting. This could expose the previous
> contents of the tags to user space, and any updates that user space
> made before the restore/zero completed could be lost.
> 
> Since this code can run in atomic context, we cannot simply lock the
> page during the process. Instead, implement a new (global) spinlock to
> protect the mte_sync_page_tags() function.
> 
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Signed-off-by: Steven Price <steven.price at arm.com>
> ---
>  arch/arm64/kernel/mte.c | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 125a10e413e9..c88e778c2fa9 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -25,6 +25,7 @@
>  u64 gcr_kernel_excl __ro_after_init;
>  
>  static bool report_fault_once = true;
> +static spinlock_t tag_sync_lock;

What initialises this spinlock? Have you tried this with lockdep? I'd
expect it to be defined with DEFINE_SPINLOCK(), which always does the
right thing.
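
Something along these lines (untested, but it is the usual idiom) is
what I'd expect:

	/* Statically initialised, so lockdep sees a valid lock from boot */
	static DEFINE_SPINLOCK(tag_sync_lock);

That also removes any window where the lock could be taken before it
has been initialised.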

>  
>  #ifdef CONFIG_KASAN_HW_TAGS
>  /* Whether the MTE asynchronous mode is enabled. */
> @@ -34,13 +35,22 @@ EXPORT_SYMBOL_GPL(mte_async_mode);
>  
>  static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
>  {
> +	unsigned long flags;
>  	pte_t old_pte = READ_ONCE(*ptep);
>  
> +	spin_lock_irqsave(&tag_sync_lock, flags);
> +
> +	/* Recheck with the lock held */
> +	if (test_bit(PG_mte_tagged, &page->flags))
> +		goto out;
> +
>  	if (check_swap && is_swap_pte(old_pte)) {
>  		swp_entry_t entry = pte_to_swp_entry(old_pte);
>  
> -		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
> -			return;
> +		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
> +			set_bit(PG_mte_tagged, &page->flags);
> +			goto out;
> +		}
>  	}
>  
>  	page_kasan_tag_reset(page);
> @@ -53,6 +63,10 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
>  	 */
>  	smp_wmb();
>  	mte_clear_page_tags(page_address(page));
> +	set_bit(PG_mte_tagged, &page->flags);
> +
> +out:
> +	spin_unlock_irqrestore(&tag_sync_lock, flags);
>  }
>  
>  void mte_sync_tags(pte_t *ptep, pte_t pte)
> @@ -60,10 +74,11 @@ void mte_sync_tags(pte_t *ptep, pte_t pte)
>  	struct page *page = pte_page(pte);
>  	long i, nr_pages = compound_nr(page);
>  	bool check_swap = nr_pages == 1;
> +	bool pte_is_tagged = pte_tagged(pte);
>  
>  	/* if PG_mte_tagged is set, tags have already been initialised */
>  	for (i = 0; i < nr_pages; i++, page++) {
> -		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
> +		if (!test_bit(PG_mte_tagged, &page->flags))
>  			mte_sync_page_tags(page, ptep, check_swap);
>  	}
>  }
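
If I read the commit message right, the window with the old flow was
roughly:

	CPU0					CPU1
	test_and_set_bit() -> 0,
	  starts restore/zero
						test_and_set_bit() -> 1,
						  proceeds, sees stale tags
	  restore/zero completes

so setting PG_mte_tagged only once the tags are actually initialised,
and rechecking it under the lock, does close that window.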

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


