[PATCH v4 35/49] KVM: arm64: GICv3: nv: Plug L1 LR sync into deactivation primitive
Vishnu Pajjuri
vishnu at os.amperecomputing.com
Mon Mar 30 23:31:54 PDT 2026
Hi Marc,
Many thanks for your reply.
On 30-03-2026 17:47, Marc Zyngier wrote:
> On Mon, 30 Mar 2026 12:51:51 +0100,
> Vishnu Pajjuri <vishnu at os.amperecomputing.com> wrote:
>>
>> Hi Fuad Tabba,
>
> To be brutally honest, I doubt Fuad really cares about NV,
I saw a Tested-by: Fuad Tabba tag on this patch, so I tried to reach out to him.
>
>> I'm trying to run nested VMs on Ampere platforms after this patch
>> series(v6.19+) but nested VMs are not booting and triggering soft
>> lockups on L0 and L0 hang. But just before this patch I could able to
>> successfully boot the Nested VMs.
>
> So the host dies? There isn't much here that interacts with the host
> at all. Worse case, the L1 dies by not making progress.
Initially L1 becomes unresponsive, then L0 becomes unresponsive with soft
lockups, and eventually L0 hangs.
>
>>
>> I bisected the failure to a single commit which is this patch which is
>> causing the issue.
>>
>> I would like to understand from you that did you observed anything
>> like that?
>
> No. If I had, I wouldn't have merged the series.
>
>>
>> Were you able to boot Nested VMs successfully after v6.19+?
>
> I boot L3s every day.
Do you mean L2s, or L3s on top of L2s?
I run L1 and L2 using the latest QEMU; do you use QEMU or kvmtool to run L1
and L2 in your regression tests?
>
>> LOG:
>> [ 164.647367] Call trace:
>> [ 164.647368] smp_call_function_many_cond+0x334/0x7a0 (P)
>> [ 164.647372] smp_call_function_many+0x20/0x40
>> [ 164.647374] kvm_make_all_cpus_request+0xec/0x1b8
>> [ 164.647377] vgic_queue_irq_unlock+0x1c8/0x2c8
>> [ 164.647380] kvm_vgic_inject_irq+0x194/0x1e0
>> [ 164.647381] kvm_vm_ioctl_irq_line+0x170/0x400
>> [ 164.647386] kvm_vm_ioctl+0x7b8/0xc88
>> [ 164.647389] __arm64_sys_ioctl+0xb4/0x118
>> [ 164.647393] invoke_syscall+0x6c/0x100
>> [ 164.647397] el0_svc_common.constprop.0+0x48/0xf0
>> [ 164.647398] do_el0_svc+0x24/0x38
>> [ 164.647400] el0_svc+0x3c/0x170
>> [ 164.647403] el0t_64_sync_handler+0xa0/0xe8
>> [ 164.647405] el0t_64_sync+0x1b0/0x1b8
>
> This trace is about interrupt injection from userspace, not
> deactivation of a HW interrupt.
> None of that makes much sense.
Although this behavior is puzzling, it matches the trace I typically
observe on L0. After reverting the patch, I was able to boot L2 guests
successfully.
Regards,
-Vishnu
>
> M.
>