[PATCH 2/2] kvm/arm64: Detach ESR operator from vCPU struct

Gavin Shan gshan at redhat.com
Mon Jun 29 20:28:50 EDT 2020


Hi Andrew,

On 6/29/20 7:59 PM, Andrew Scull wrote:
> On Mon, Jun 29, 2020 at 07:18:41PM +1000, Gavin Shan wrote:
>> There is a set of inline functions defined in kvm_emulate.h. Those
>> functions read the ESR from the vCPU fault information struct and then
>> operate on it, so they are tied to the vCPU fault information and the
>> vCPU struct, which limits their usage scope.
>>
>> This detaches these functions from the vCPU struct by introducing
>> another set of inline functions in esr.h to manipulate the specified
>> ESR value. With that, the inline functions defined in kvm_emulate.h
>> can call these inline functions (in esr.h) instead. This shouldn't
>> cause any functional changes.
>>
>> Signed-off-by: Gavin Shan <gshan at redhat.com>
>> ---
>>   arch/arm64/include/asm/esr.h         | 32 +++++++++++++++++++++
>>   arch/arm64/include/asm/kvm_emulate.h | 43 ++++++++++++----------------
>>   2 files changed, 51 insertions(+), 24 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
>> index 035003acfa87..950204c5fbe1 100644
>> --- a/arch/arm64/include/asm/esr.h
>> +++ b/arch/arm64/include/asm/esr.h
>> @@ -326,6 +326,38 @@ static inline bool esr_is_data_abort(u32 esr)
>>   	return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
>>   }
>>   
>> +#define ESR_DECLARE_CHECK_FUNC(name, field)	\
>> +static inline bool esr_is_##name(u32 esr)	\
>> +{						\
>> +	return !!(esr & (field));		\
>> +}
>> +#define ESR_DECLARE_GET_FUNC(name, mask, shift)	\
>> +static inline u32 esr_get_##name(u32 esr)	\
>> +{						\
>> +	return ((esr & (mask)) >> (shift));	\
>> +}
> 
> Should these be named DEFINE rather than DECLARE, given they also
> include the function definitions?
> 

Thanks for your comments. Indeed, I think DEFINE is better than
DECLARE, since the macros define the functions rather than merely
declare them. That said, these newly introduced helpers are unlikely
to be needed, based on the comments (and follow-up) from Mark Rutland.
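
If the helpers do survive the next revision, the rename is mechanical;
something like this (just a sketch, the bodies unchanged):

#define ESR_DEFINE_CHECK_FUNC(name, field)	\
static inline bool esr_is_##name(u32 esr)	\
{						\
	return !!(esr & (field));		\
}
#define ESR_DEFINE_GET_FUNC(name, mask, shift)	\
static inline u32 esr_get_##name(u32 esr)	\
{						\
	return ((esr & (mask)) >> (shift));	\
}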

>> +
>> +ESR_DECLARE_CHECK_FUNC(il_32bit,   ESR_ELx_IL);
>> +ESR_DECLARE_CHECK_FUNC(condition,  ESR_ELx_CV);
>> +ESR_DECLARE_CHECK_FUNC(dabt_valid, ESR_ELx_ISV);
>> +ESR_DECLARE_CHECK_FUNC(dabt_sse,   ESR_ELx_SSE);
>> +ESR_DECLARE_CHECK_FUNC(dabt_sf,    ESR_ELx_SF);
>> +ESR_DECLARE_CHECK_FUNC(dabt_s1ptw, ESR_ELx_S1PTW);
>> +ESR_DECLARE_CHECK_FUNC(dabt_write, ESR_ELx_WNR);
>> +ESR_DECLARE_CHECK_FUNC(dabt_cm,    ESR_ELx_CM);
>> +
>> +ESR_DECLARE_GET_FUNC(class,        ESR_ELx_EC_MASK,      ESR_ELx_EC_SHIFT);
>> +ESR_DECLARE_GET_FUNC(fault,        ESR_ELx_FSC,          0);
>> +ESR_DECLARE_GET_FUNC(fault_type,   ESR_ELx_FSC_TYPE,     0);
>> +ESR_DECLARE_GET_FUNC(condition,    ESR_ELx_COND_MASK,    ESR_ELx_COND_SHIFT);
>> +ESR_DECLARE_GET_FUNC(hvc_imm,      ESR_ELx_xVC_IMM_MASK, 0);
>> +ESR_DECLARE_GET_FUNC(dabt_iss_nisv_sanitized,
>> +		     (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC), 0);
>> +ESR_DECLARE_GET_FUNC(dabt_rd,      ESR_ELx_SRT_MASK,     ESR_ELx_SRT_SHIFT);
>> +ESR_DECLARE_GET_FUNC(dabt_as,      ESR_ELx_SAS,          ESR_ELx_SAS_SHIFT);
>> +ESR_DECLARE_GET_FUNC(sys_rt,       ESR_ELx_SYS64_ISS_RT_MASK,
>> +				   ESR_ELx_SYS64_ISS_RT_SHIFT);
>> +
>>   const char *esr_get_class_string(u32 esr);
>>   #endif /* __ASSEMBLY */
>>   
>> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
>> index c9ba0df47f7d..9337d90c517f 100644
>> --- a/arch/arm64/include/asm/kvm_emulate.h
>> +++ b/arch/arm64/include/asm/kvm_emulate.h
>> @@ -266,12 +266,8 @@ static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
>>   
>>   static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
>>   {
>> -	u32 esr = kvm_vcpu_get_esr(vcpu);
>> -
>> -	if (esr & ESR_ELx_CV)
>> -		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
>> -
>> -	return -1;
>> +	return esr_is_condition(kvm_vcpu_get_esr(vcpu)) ?
>> +	       esr_get_condition(kvm_vcpu_get_esr(vcpu)) : -1;
>>   }
>>   
>>   static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu)
>> @@ -291,79 +287,79 @@ static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu)
>>   
>>   static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu)
>>   {
>> -	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK;
>> +	return esr_get_hvc_imm(kvm_vcpu_get_esr(vcpu));
>>   }
> 
> It feels a little strange that in the raw esr case it uses macro magic
> but in the vcpu cases here it writes everything out in full. Was there a
> reason that I'm missing or is there a chance to apply a consistent
> approach?
> 

The request was raised when the RFCv2 async page fault patchset was
posted. When an async page fault is handled, the ESR is cached in
advance rather than fetched from the vCPU struct, so we want to detach
the helpers defined in kvm_emulate.h from the vCPU struct. Hopefully
the discussion in the following link helps to clarify:

https://lore.kernel.org/kvmarm/20200508032919.52147-5-gshan@redhat.com/
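
To illustrate the point (a hypothetical caller, not part of this
patch, assuming the esr.h helpers it introduces): the async page
fault code can apply the same checks to a cached ESR value without
having a vCPU in hand:

static bool cached_fault_is_write(u32 cached_esr)
{
	/* Operates on a cached ESR; no vCPU struct is required. */
	return esr_is_dabt_valid(cached_esr) &&
	       esr_is_dabt_write(cached_esr);
}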

> I'm not sure of the style preferences, but if it goes the macro path,
> the esr field definitions could be reused with something x-macro-like
> to get the kvm_emulate.h and esr.h functions generated from a single
> list of the esr fields.
> 

Yeah, it's the same thing Mark Rutland suggested. As I replied to his
comments, it can be postponed until the next revision of the async
page fault patchset is posted.
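
For reference, a minimal sketch of the x-macro idea (hypothetical
names, not what this patch does today): keep one list of ESR fields
and expand it twice, once for the raw esr.h helpers and once for the
vCPU wrappers in kvm_emulate.h.

#define ESR_CHECK_FIELDS(M)			\
	M(dabt_valid, ESR_ELx_ISV)		\
	M(dabt_write, ESR_ELx_WNR)		\
	M(dabt_cm,    ESR_ELx_CM)

/* esr.h: raw helpers operating on a plain ESR value */
#define ESR_DEFINE_CHECK_FUNC(name, field)	\
static inline bool esr_is_##name(u32 esr)	\
{						\
	return !!(esr & (field));		\
}
ESR_CHECK_FIELDS(ESR_DEFINE_CHECK_FUNC)

/* kvm_emulate.h: vCPU wrappers generated from the same list */
#define KVM_DEFINE_CHECK_FUNC(name, field)			\
static inline bool kvm_vcpu_is_##name(const struct kvm_vcpu *vcpu) \
{								\
	return esr_is_##name(kvm_vcpu_get_esr(vcpu));		\
}
ESR_CHECK_FIELDS(KVM_DEFINE_CHECK_FUNC)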

[...]

Thanks,
Gavin
