[PATCH v2 17/43] arm64: RME: Allow VMM to set RIPAS

Suzuki K Poulose suzuki.poulose at arm.com
Wed May 1 07:56:33 PDT 2024


On 01/05/2024 15:27, Jean-Philippe Brucker wrote:
> On Fri, Apr 12, 2024 at 09:42:43AM +0100, Steven Price wrote:
>> +static inline bool realm_is_addr_protected(struct realm *realm,
>> +					   unsigned long addr)
>> +{
>> +	unsigned int ia_bits = realm->ia_bits;
>> +
>> +	return !(addr & ~(BIT(ia_bits - 1) - 1));
> 
> Is it enough to return !(addr & BIT(realm->ia_bits - 1))?

I thought about that too. But if we are dealing with an IPA
>= BIT(realm->ia_bits), we don't want to treat it as a protected
address either. That could only happen if the Realm is buggy (or
the VMM has tricked it), so the existing, stricter check looks
safer.

> 
>> +static void realm_unmap_range_shared(struct kvm *kvm,
>> +				     int level,
>> +				     unsigned long start,
>> +				     unsigned long end)
>> +{
>> +	struct realm *realm = &kvm->arch.realm;
>> +	unsigned long rd = virt_to_phys(realm->rd);
>> +	ssize_t map_size = rme_rtt_level_mapsize(level);
>> +	unsigned long next_addr, addr;
>> +	unsigned long shared_bit = BIT(realm->ia_bits - 1);
>> +
>> +	if (WARN_ON(level > RME_RTT_MAX_LEVEL))
>> +		return;
>> +
>> +	start |= shared_bit;
>> +	end |= shared_bit;
>> +
>> +	for (addr = start; addr < end; addr = next_addr) {
>> +		unsigned long align_addr = ALIGN(addr, map_size);
>> +		int ret;
>> +
>> +		next_addr = ALIGN(addr + 1, map_size);
>> +
>> +		if (align_addr != addr || next_addr > end) {
>> +			/* Need to recurse deeper */
>> +			if (addr < align_addr)
>> +				next_addr = align_addr;
>> +			realm_unmap_range_shared(kvm, level + 1, addr,
>> +						 min(next_addr, end));
>> +			continue;
>> +		}
>> +
>> +		ret = rmi_rtt_unmap_unprotected(rd, addr, level, &next_addr);
>> +		switch (RMI_RETURN_STATUS(ret)) {
>> +		case RMI_SUCCESS:
>> +			break;
>> +		case RMI_ERROR_RTT:
>> +			if (next_addr == addr) {
>> +				next_addr = ALIGN(addr + 1, map_size);
>> +				realm_unmap_range_shared(kvm, level + 1, addr,
>> +							 next_addr);
>> +			}
>> +			break;
>> +		default:
>> +			WARN_ON(1);
> 
> In this case we also need to return, because RMM returns with next_addr ==
> 0, causing an infinite loop. At the moment a VMM can trigger this easily
> by creating guest memfd before creating a RD, see below

That's a good point. I agree.

> 
>> +		}
>> +	}
>> +}
>> +
>> +static void realm_unmap_range_private(struct kvm *kvm,
>> +				      unsigned long start,
>> +				      unsigned long end)
>> +{
>> +	struct realm *realm = &kvm->arch.realm;
>> +	ssize_t map_size = RME_PAGE_SIZE;
>> +	unsigned long next_addr, addr;
>> +
>> +	for (addr = start; addr < end; addr = next_addr) {
>> +		int ret;
>> +
>> +		next_addr = ALIGN(addr + 1, map_size);
>> +
>> +		ret = realm_destroy_protected(realm, addr, &next_addr);
>> +
>> +		if (WARN_ON(ret))
>> +			break;
>> +	}
>> +}
>> +
>> +static void realm_unmap_range(struct kvm *kvm,
>> +			      unsigned long start,
>> +			      unsigned long end,
>> +			      bool unmap_private)
>> +{
> 
> Should this check for a valid kvm->arch.realm.rd, or a valid realm state?
> I'm not sure what the best place is but none of the RMM calls will succeed
> if the RD is NULL, causing some WARNs.
> 
> I can trigger this with set_memory_attributes() ioctls before creating a
> RD for example.
> 

True, this could be triggered by a buggy VMM in other ways too, and
we could easily gate it on the Realm state being >= NEW.

Suzuki
