[PATCH] CHROMIUM: iommu: rockchip: Make sure that page table state is coherent

Tomasz Figa tfiga at chromium.org
Mon Feb 9 03:19:21 PST 2015


Even though the code uses the dt_lock spin lock to serialize mapping
operations from different threads, it does not protect against IOMMU
accesses that might already be taking place and thus altering the
state of the IOTLB. This means that the current mapping code, which
first zaps the page table and only then updates it with the new
mapping, is prone to the mentioned race.
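
To make the ordering concrete, here is a minimal sketch of the two
orderings (not the driver code itself; it reuses the driver's
rk_mk_pte(), rk_table_flush() and rk_iommu_zap_iova() helpers for a
single PTE, with paddr/prot standing in for the caller's arguments):

    /* Racy ordering (before this patch): the IOMMU can walk the
     * still-unmodified page table between steps 1 and 2 and cache
     * the stale PTE in its IOTLB. */
    rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE); /* 1. invalidate */
    *pte_addr = rk_mk_pte(paddr, prot);             /* 2. write PTE */
    rk_table_flush(pte_addr, 1);

    /* Safe ordering (after this patch): any IOTLB refill that races
     * with the PTE write is flushed out by the final zap. */
    *pte_addr = rk_mk_pte(paddr, prot);             /* 1. write PTE */
    rk_table_flush(pte_addr, 1);
    rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE); /* 2. invalidate */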

In addition, the current code assumes that mappings are always > 4 MiB
(which translates to 1024 PTEs), and so they would always occupy
entire page tables. This is not true for mappings created by the V4L2
videobuf2 DMA contig allocator.
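
For reference, the size math behind the 4 MiB figure, using the
driver's constants (values paraphrased from rockchip-iommu.c):

    #define SPAGE_SIZE      (1 << 12)   /* 4 KiB small page */
    #define NUM_PT_ENTRIES  1024        /* PTEs per page table */

    /* A fully populated page table maps
     *   NUM_PT_ENTRIES * SPAGE_SIZE = 1024 * 4 KiB = 4 MiB
     * of iova space. A 1 MiB videobuf2-dma-contig buffer, for
     * example, needs only 256 PTEs and therefore shares its page
     * table (and dte) with neighbouring mappings. */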

This patch changes the mapping code to always zap the IOTLB after the
page table is updated, which avoids the aforementioned race, and also
to zap the last iova of the mapping to make sure that stale data is
not cached from an already existing mapping.

Signed-off-by: Tomasz Figa <tfiga at chromium.org>
Reviewed-by: Daniel Kurtz <djkurtz at chromium.org>
---
 drivers/iommu/rockchip-iommu.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 6a8b1ec..b06fe76 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -544,6 +544,15 @@ static void rk_iommu_zap_iova(struct rk_iommu_domain *rk_domain,
 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
 }
 
+static void rk_iommu_zap_iova_first_last(struct rk_iommu_domain *rk_domain,
+					 dma_addr_t iova, size_t size)
+{
+	rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
+	if (size > SPAGE_SIZE)
+		rk_iommu_zap_iova(rk_domain, iova + size - SPAGE_SIZE,
+					SPAGE_SIZE);
+}
+
 static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 				  dma_addr_t iova)
 {
@@ -568,12 +577,6 @@ static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 	rk_table_flush(page_table, NUM_PT_ENTRIES);
 	rk_table_flush(dte_addr, 1);
 
-	/*
-	 * Zap the first iova of newly allocated page table so iommu evicts
-	 * old cached value of new dte from the iotlb.
-	 */
-	rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
-
 done:
 	pt_phys = rk_dte_pt_address(dte);
 	return (u32 *)phys_to_virt(pt_phys);
@@ -623,6 +626,14 @@ static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
 
 	rk_table_flush(pte_addr, pte_count);
 
+	/*
+	 * Zap the first and last iova to evict from iotlb any previously
+	 * mapped cachelines holding stale values for its dte and pte.
+	 * We only zap the first and last iova, since only they could have
+	 * dte or pte shared with an existing mapping.
+	 */
+	rk_iommu_zap_iova_first_last(rk_domain, iova, size);
+
 	return 0;
 unwind:
 	/* Unmap the range of iovas that we just mapped */
-- 
2.2.0.rc0.207.ga3a616c
