[PATCH v12 66/84] KVM: LoongArch: Mark "struct page" pfn accessed before dropping mmu_lock
maobibo
maobibo at loongson.cn
Thu Aug 8 04:47:48 PDT 2024
On 2024/7/27 7:52 AM, Sean Christopherson wrote:
> Mark pages accessed before dropping mmu_lock when faulting in guest memory
> so that LoongArch can convert to kvm_release_faultin_page() without
> tripping its lockdep assertion on mmu_lock being held.
>
> Signed-off-by: Sean Christopherson <seanjc at google.com>
> ---
> arch/loongarch/kvm/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> index 52b5c16cf250..230cafa178d7 100644
> --- a/arch/loongarch/kvm/mmu.c
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -902,13 +902,13 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>
> if (writeable)
> kvm_set_pfn_dirty(pfn);
> + kvm_release_pfn_clean(pfn);
>
> spin_unlock(&kvm->mmu_lock);
>
> if (prot_bits & _PAGE_DIRTY)
> mark_page_dirty_in_slot(kvm, memslot, gfn);
>
> - kvm_release_pfn_clean(pfn);
> out:
> srcu_read_unlock(&kvm->srcu, srcu_idx);
> return err;
>
Reviewed-by: Bibo Mao <maobibo at loongson.cn>