KVM: x86: Fix device passthrough when SME is active
Linux-MTD Mailing List
linux-mtd at lists.infradead.org
Mon Mar 19 02:59:04 PDT 2018
Gitweb: http://git.infradead.org/?p=mtd-2.6.git;a=commit;h=daaf216c06fba4ee4dc3f62715667da929d68774
Commit: daaf216c06fba4ee4dc3f62715667da929d68774
Parent: 52be7a467e4b45b0d8d3b700729fc65a9b8ebc94
Author: Tom Lendacky <thomas.lendacky at amd.com>
AuthorDate: Thu Mar 8 17:17:31 2018 -0600
Committer: Paolo Bonzini <pbonzini at redhat.com>
CommitDate: Fri Mar 16 14:32:23 2018 +0100
KVM: x86: Fix device passthrough when SME is active
When using device passthrough with SME active, the MMIO range that is
mapped for the device should not be mapped encrypted. Add a check in
set_spte() to ensure that a page is not mapped encrypted if that page
is a device MMIO page as indicated by kvm_is_mmio_pfn().
Cc: <stable at vger.kernel.org> # 4.14.x-
Signed-off-by: Tom Lendacky <thomas.lendacky at amd.com>
Signed-off-by: Paolo Bonzini <pbonzini at redhat.com>
---
arch/x86/kvm/mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f551962ac294..763bb3bade63 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2770,8 +2770,10 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	else
 		pte_access &= ~ACC_WRITE_MASK;
 
+	if (!kvm_is_mmio_pfn(pfn))
+		spte |= shadow_me_mask;
+
 	spte |= (u64)pfn << PAGE_SHIFT;
-	spte |= shadow_me_mask;
 
 	if (pte_access & ACC_WRITE_MASK) {
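
For readers unfamiliar with the SPTE construction, here is a minimal standalone sketch (ordinary userspace C, not kernel code) of the logic this patch changes: the memory-encryption mask is ORed into the shadow PTE only when the target pfn is not device MMIO. The mask value and the kvm_is_mmio_pfn() stand-in below are illustrative assumptions, not the kernel's actual definitions (in the kernel, shadow_me_mask comes from sme_me_mask).

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* Illustrative placeholder value only: the real shadow_me_mask encodes the
 * SME encryption-bit position discovered at boot. */
static const uint64_t shadow_me_mask = 1ULL << 47;

/* Stand-in for the kernel's kvm_is_mmio_pfn(); here the caller simply
 * tells us whether the pfn refers to device MMIO. */
static bool kvm_is_mmio_pfn(bool pfn_is_mmio)
{
	return pfn_is_mmio;
}

static uint64_t build_spte(uint64_t pfn, bool pfn_is_mmio)
{
	uint64_t spte = 0;

	/* The fix: only set the encryption bit for normal RAM pages.
	 * A device MMIO page mapped encrypted would have its accesses
	 * passed through SME encryption and be corrupted. */
	if (!kvm_is_mmio_pfn(pfn_is_mmio))
		spte |= shadow_me_mask;

	spte |= pfn << PAGE_SHIFT;
	return spte;
}

int main(void)
{
	printf("RAM  pfn 0x1234 -> spte 0x%016" PRIx64 "\n",
	       build_spte(0x1234, false));
	printf("MMIO pfn 0x1234 -> spte 0x%016" PRIx64 "\n",
	       build_spte(0x1234, true));
	return 0;
}

Compiled as plain C, the two printf lines show the same pfn producing an SPTE with and without the encryption bit, mirroring the RAM vs. MMIO distinction the patch introduces.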