[PATCH v13 74/85] KVM: PPC: Use kvm_vcpu_map() to map guest memory to patch dcbz instructions
Sean Christopherson
seanjc at google.com
Thu Oct 10 11:24:16 PDT 2024
Use kvm_vcpu_map() when patching dcbz in guest memory, as a regular GUP
isn't technically sufficient when writing to data in the target pages.
As per Documentation/core-api/pin_user_pages.rst:

  Correct (uses FOLL_PIN calls):
      pin_user_pages()
      write to the data within the pages
      unpin_user_pages()

  INCORRECT (uses FOLL_GET calls):
      get_user_pages()
      write to the data within the pages
      put_page()

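For reference, the same rule as a minimal, self-contained sketch; this is not
part of the patch. The function name, the single-page pin, and the error
handling are made up for illustration, and it assumes the post-6.5
pin_user_pages() signature (no vmas argument) plus unpin_user_pages_dirty_lock()
for the write-then-unpin case:

  #include <linux/mm.h>
  #include <linux/sched.h>

  /* Hypothetical helper: pin one user page, write to it, unpin it. */
  static int write_one_user_page(unsigned long uaddr)
  {
          struct page *page;
          long npinned;

          /* pin_user_pages() must be called with mmap_lock held. */
          mmap_read_lock(current->mm);
          npinned = pin_user_pages(uaddr, 1, FOLL_WRITE, &page);
          mmap_read_unlock(current->mm);
          if (npinned != 1)
                  return npinned < 0 ? npinned : -EFAULT;

          /* ... write to the data within the page, e.g. via kmap_local_page() ... */

          /* Drop the pin and mark the page dirty, since it was written. */
          unpin_user_pages_dirty_lock(&page, 1, true);
          return 0;
  }
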
As a happy bonus, using kvm_vcpu_{,un}map() takes care of creating a
mapping and marking the page dirty.
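For context, the shape of the replacement API as it is used in the hunk below;
the gfn and the write are placeholders, and the two-argument kvm_vcpu_unmap()
is the form this series converges on, where the unmap path itself marks the
page dirty:

  struct kvm_host_map map;
  u32 *hva;

  if (kvm_vcpu_map(vcpu, gfn, &map))    /* returns non-zero on failure */
          return;

  hva = map.hva;                        /* host mapping of the guest page */
  /* ... read or write guest memory through hva ... */

  kvm_vcpu_unmap(vcpu, &map);           /* releases the mapping, marks the page dirty */
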
Signed-off-by: Sean Christopherson <seanjc at google.com>
---
arch/powerpc/kvm/book3s_pr.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index cd7ab6d85090..83bcdc80ce51 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -639,28 +639,27 @@ static void kvmppc_set_pvr_pr(struct kvm_vcpu *vcpu, u32 pvr)
  */
 static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 {
-	struct page *hpage;
+	struct kvm_host_map map;
 	u64 hpage_offset;
 	u32 *page;
-	int i;
+	int i, r;
 
-	hpage = gfn_to_page(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
-	if (!hpage)
+	r = kvm_vcpu_map(vcpu, pte->raddr >> PAGE_SHIFT, &map);
+	if (r)
 		return;
 
 	hpage_offset = pte->raddr & ~PAGE_MASK;
 	hpage_offset &= ~0xFFFULL;
 	hpage_offset /= 4;
 
-	page = kmap_atomic(hpage);
+	page = map.hva;
 
 	/* patch dcbz into reserved instruction, so we trap */
 	for (i=hpage_offset; i < hpage_offset + (HW_PAGE_SIZE / 4); i++)
 		if ((be32_to_cpu(page[i]) & 0xff0007ff) == INS_DCBZ)
 			page[i] &= cpu_to_be32(0xfffffff7);
 
-	kunmap_atomic(page);
-	put_page(hpage);
+	kvm_vcpu_unmap(vcpu, &map);
 }
 
 static bool kvmppc_visible_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)

--
2.47.0.rc1.288.g06298d1525-goog