[PATCH v3 09/14] arm64: Enable memory encrypt for Realms

Steven Price steven.price at arm.com
Thu Jun 27 07:34:15 PDT 2024


On 10/06/2024 18:27, Catalin Marinas wrote:
> On Wed, Jun 05, 2024 at 10:30:01AM +0100, Steven Price wrote:
>> +static int __set_memory_encrypted(unsigned long addr,
>> +				  int numpages,
>> +				  bool encrypt)
>> +{
>> +	unsigned long set_prot = 0, clear_prot = 0;
>> +	phys_addr_t start, end;
>> +	int ret;
>> +
>> +	if (!is_realm_world())
>> +		return 0;
>> +
>> +	if (!__is_lm_address(addr))
>> +		return -EINVAL;
>> +
>> +	start = __virt_to_phys(addr);
>> +	end = start + numpages * PAGE_SIZE;
>> +
>> +	/*
>> +	 * Break the mapping before we make any changes to avoid stale TLB
>> +	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
>> +	 */
>> +	ret = __change_memory_common(addr, PAGE_SIZE * numpages,
>> +				     __pgprot(0),
>> +				     __pgprot(PTE_VALID));
>> +
>> +	if (encrypt) {
>> +		clear_prot = PROT_NS_SHARED;
>> +		ret = rsi_set_memory_range_protected(start, end);
>> +	} else {
>> +		set_prot = PROT_NS_SHARED;
>> +		ret = rsi_set_memory_range_shared(start, end);
>> +	}
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	set_prot |= PTE_VALID;
>> +
>> +	return __change_memory_common(addr, PAGE_SIZE * numpages,
>> +				      __pgprot(set_prot),
>> +				      __pgprot(clear_prot));
>> +}
> 
> This works, does break-before-make and also rejects vmalloc() ranges
> (for the time being).
> 
> One particular aspect I don't like is doing the TLBI twice. It's
> sufficient to do it when you first make the pte invalid. We could guess
> this in __change_memory_common() if set_mask has PTE_VALID. The call
> sites are restricted to this file, just add a comment. An alternative
> would be to add a bool flush argument to this function.
> 

I'm always a bit scared of changing this sort of thing, but AFAICT the 
below should be safe:

-       flush_tlb_kernel_range(start, start + size);
+       /*
+        * If the memory is being made valid without changing any other bits
+        * then a TLBI isn't required as a non-valid entry cannot be cached in
+        * the TLB.
+        */
+       if (set_mask != PTE_VALID || clear_mask)
+               flush_tlb_kernel_range(start, start + size);

It will affect users of set_memory_valid() by removing the TLBI when 
marking memory as valid.

I'll add this change as a separate patch so it can be reverted easily
if there's something I've overlooked.

Thanks,
Steve



