[PATCH 2/2] KVM: arm64: Map hyp text as RO and dump instr on panic

Ben Horgan ben.horgan at arm.com
Fri Jul 18 03:35:59 PDT 2025


Hi Mostafa,

On 18/07/2025 11:22, Mostafa Saleh wrote:
> Hi Ben,
> 
> On Fri, Jul 18, 2025 at 11:16:18AM +0100, Ben Horgan wrote:
>> Hi Mostafa,
>>
>> On 18/07/2025 00:47, Mostafa Saleh wrote:
>>> Map the hyp text section as RO; there are no secrets there,
>>> and this allows the kernel to extract info for debugging.
>>>
>>> In case of a panic, we can now dump the faulting instructions,
>>> similar to what the kernel does.
>>>
>>> Signed-off-by: Mostafa Saleh <smostafa at google.com>
>>> ---
>>>    arch/arm64/kvm/handle_exit.c    |  4 +---
>>>    arch/arm64/kvm/hyp/nvhe/setup.c | 12 ++++++++++--
>>>    2 files changed, 11 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
>>> index de12b4d4bccd..d59f33c40767 100644
>>> --- a/arch/arm64/kvm/handle_exit.c
>>> +++ b/arch/arm64/kvm/handle_exit.c
>>> @@ -566,9 +566,7 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
>>>    	kvm_nvhe_dump_backtrace(hyp_offset);
>>>    	/* Dump the faulting instruction */
>>> -	if (!is_protected_kvm_enabled() ||
>>> -	    IS_ENABLED(CONFIG_NVHE_EL2_DEBUG))
>>> -		dump_instr(panic_addr + kaslr_offset());
>>> +	dump_instr(panic_addr + kaslr_offset());
>> This makes the dumping in nvhe no longer conditional on
>> CONFIG_NVHE_EL2_DEBUG, a change from what you introduced in the previous
>> patch. Perhaps it makes sense to reorder the patches; do the preparatory
>> work for instruction dumping before the enabling.
> 
> Yes, I thought about squashing both patches, but I was worried this patch
> might be more controversial, so I split the code into 2 patches, where the
> first one can be merged separately if needed. But no strong opinion.

My concern was that you were changing the non-pkvm case too in this patch,
but I see now that you weren't: without pkvm, !is_protected_kvm_enabled()
already made the dump_instr() call unconditional. Sorry, my mistake. I'm ok
with this patch split.
> Thanks,
> Mostafa
> 
>>>    	/*
>>>    	 * Hyp has panicked and we're going to handle that by panicking the
>>> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
>>> index a48d3f5a5afb..90bd014e952f 100644
>>> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
>>> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
>>> @@ -192,6 +192,7 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
>>>    	enum pkvm_page_state state;
>>>    	struct hyp_page *page;
>>>    	phys_addr_t phys;
>>> +	enum kvm_pgtable_prot prot;
>>>    	if (!kvm_pte_valid(ctx->old))
>>>    		return 0;
>>> @@ -210,11 +211,18 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
>>>    	 * configured in the hypervisor stage-1, and make sure to propagate them
>>>    	 * to the hyp_vmemmap state.
>>>    	 */
>>> -	state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(ctx->old));
>>> +	prot = kvm_pgtable_hyp_pte_prot(ctx->old);
>>> +	state = pkvm_getstate(prot);
>>>    	switch (state) {
>>>    	case PKVM_PAGE_OWNED:
>>>    		set_hyp_state(page, PKVM_PAGE_OWNED);
>>> -		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
>>> +		/* hyp text is RO in the host stage-2 to be inspected on panic. */
>>> +		if (prot == PAGE_HYP_EXEC) {
>>> +			set_host_state(page, PKVM_NOPAGE);
>>> +			return host_stage2_idmap_locked(phys, PAGE_SIZE, KVM_PGTABLE_PROT_R);
>>> +		} else {
>>> +			return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
>>> +		}
>>>    	case PKVM_PAGE_SHARED_OWNED:
>>>    		set_hyp_state(page, PKVM_PAGE_SHARED_OWNED);
>>>    		set_host_state(page, PKVM_PAGE_SHARED_BORROWED);
>> -- 
>> Thanks,
>>
>> Ben
>>
Thanks,

Ben
