[PATCH 3/4] arm64: mte: update code comments
Aneesh Kumar K.V (Arm)
aneesh.kumar at kernel.org
Mon Oct 28 02:40:13 PDT 2024
commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag
initialisation") updated the locking such that the kernel now allows
VM_SHARED mappings with MTE. Update the code comments to reflect this.
Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar at kernel.org>
---
arch/arm64/kvm/mmu.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a509b63bd4dd..b5824e93cee0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1390,11 +1390,8 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
* able to see the page's tags and therefore they must be initialised first. If
* PG_mte_tagged is set, tags have already been initialised.
*
- * The race in the test/set of the PG_mte_tagged flag is handled by:
- * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
- * racing to santise the same page
- * - mmap_lock protects between a VM faulting a page in and the VMM performing
- * an mprotect() to add VM_MTE
+ * The race in the test/set of the PG_mte_tagged flag is handled by
+ * the PG_mte_lock flag (see try_page_mte_tagging()).
*/
static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
unsigned long size)
@@ -1646,7 +1643,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
- /* Check the VMM hasn't introduced a new disallowed VMA */
+ /*
+ * Not a permission fault implies a translation fault, which
+ * means the page is being mapped for the first time.
+ */
if (mte_allowed) {
sanitise_mte_tags(kvm, pfn, vma_pagesize);
} else {
--
2.43.0