[PATCH v3 09/14] arm64: Enable memory encrypt for Realms
Catalin Marinas
catalin.marinas at arm.com
Mon Jun 10 10:27:22 PDT 2024
On Wed, Jun 05, 2024 at 10:30:01AM +0100, Steven Price wrote:
> +static int __set_memory_encrypted(unsigned long addr,
> +				  int numpages,
> +				  bool encrypt)
> +{
> +	unsigned long set_prot = 0, clear_prot = 0;
> +	phys_addr_t start, end;
> +	int ret;
> +
> +	if (!is_realm_world())
> +		return 0;
> +
> +	if (!__is_lm_address(addr))
> +		return -EINVAL;
> +
> +	start = __virt_to_phys(addr);
> +	end = start + numpages * PAGE_SIZE;
> +
> +	/*
> +	 * Break the mapping before we make any changes to avoid stale TLB
> +	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
> +	 */
> +	ret = __change_memory_common(addr, PAGE_SIZE * numpages,
> +				     __pgprot(0),
> +				     __pgprot(PTE_VALID));
> +	if (ret)
> +		return ret;
> +
> +	if (encrypt) {
> +		clear_prot = PROT_NS_SHARED;
> +		ret = rsi_set_memory_range_protected(start, end);
> +	} else {
> +		set_prot = PROT_NS_SHARED;
> +		ret = rsi_set_memory_range_shared(start, end);
> +	}
> +
> +	if (ret)
> +		return ret;
> +
> +	set_prot |= PTE_VALID;
> +
> +	return __change_memory_common(addr, PAGE_SIZE * numpages,
> +				      __pgprot(set_prot),
> +				      __pgprot(clear_prot));
> +}
This works: it does break-before-make and also rejects vmalloc() ranges
(for the time being).

One particular aspect I don't like is doing the TLBI twice; it's
sufficient to do it when you first make the pte invalid. We could infer
this in __change_memory_common() when set_mask contains PTE_VALID, since
the call sites are restricted to this file - just add a comment. An
alternative would be to add a bool flush argument to this function.
--
Catalin