[PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c

Jonathan Cameron jonathan.cameron at huawei.com
Tue Mar 17 10:50:47 PDT 2026


On Mon, 16 Mar 2026 17:35:27 +0000
Fuad Tabba <tabba at google.com> wrote:

> Migrate manual mutex_lock() and mutex_unlock() calls managing
> kvm_hyp_pgd_mutex and hyp_shared_pfns_lock to use the
> guard(mutex) macro.
> 
> This eliminates manual unlock calls on return paths and simplifies
> error handling by replacing unlock goto labels with direct returns.
> Centralized cleanup goto paths are preserved with manual unlocks
> removed.
> 
> Change-Id: Ib0f33a474eb84f19da4de0858c77751bbe55dfbb
> Signed-off-by: Fuad Tabba <tabba at google.com>

> @@ -652,22 +632,20 @@ int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
>  	unsigned long base;
>  	int ret = 0;
>  
> -	mutex_lock(&kvm_hyp_pgd_mutex);
> -
> -	/*
> -	 * This assumes that we have enough space below the idmap
> -	 * page to allocate our VAs. If not, the check in
> -	 * __hyp_alloc_private_va_range() will kick. A potential
> -	 * alternative would be to detect that overflow and switch
> -	 * to an allocation above the idmap.
> -	 *
> -	 * The allocated size is always a multiple of PAGE_SIZE.
> -	 */
> -	size = PAGE_ALIGN(size);
> -	base = io_map_base - size;
> -	ret = __hyp_alloc_private_va_range(base);
> -
> -	mutex_unlock(&kvm_hyp_pgd_mutex);
> +	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
> +		/*
> +		 * This assumes that we have enough space below the idmap
> +		 * page to allocate our VAs. If not, the check in
> +		 * __hyp_alloc_private_va_range() will kick. A potential
> +		 * alternative would be to detect that overflow and switch
> +		 * to an allocation above the idmap.
> +		 *
> +		 * The allocated size is always a multiple of PAGE_SIZE.
> +		 */
> +		size = PAGE_ALIGN(size);
> +		base = io_map_base - size;
> +		ret = __hyp_alloc_private_va_range(base);
Minor one and a matter of taste, but I'd do

		if (ret)
			return ret;
	}

	*haddr = base;

	return 0;

> +	}
>  
>  	if (!ret)
>  		*haddr = base;
> @@ -711,17 +689,16 @@ int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr)
>  	size_t size;
>  	int ret;
>  
> -	mutex_lock(&kvm_hyp_pgd_mutex);
> -	/*
> -	 * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
> -	 * an alignment of our allocation on the order of the size.
> -	 */
> -	size = NVHE_STACK_SIZE * 2;
> -	base = ALIGN_DOWN(io_map_base - size, size);
> +	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
> +		/*
> +		 * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
> +		 * an alignment of our allocation on the order of the size.
> +		 */
> +		size = NVHE_STACK_SIZE * 2;
> +		base = ALIGN_DOWN(io_map_base - size, size);
>  
> -	ret = __hyp_alloc_private_va_range(base);
> -
> -	mutex_unlock(&kvm_hyp_pgd_mutex);
> +		ret = __hyp_alloc_private_va_range(base);
> +	}
>  
>  	if (ret) {
>  		kvm_err("Cannot allocate hyp stack guard page\n");
Maybe move this error check inside the guard scope, just to keep it
nearer the code in question.

Thanks,

Jonathan
