[PATCH v2 0/2] PCI: xgene: Restore working PCIe functionality

Robin Murphy robin.murphy at arm.com
Tue Mar 22 06:16:35 PDT 2022


On 2022-03-21 20:06, Robin Murphy wrote:
> On 2022-03-21 19:21, Marc Zyngier wrote:
>> On Mon, 21 Mar 2022 18:03:27 +0000,
>> Rob Herring <robh at kernel.org> wrote:
>>>
>>> On Mon, Mar 21, 2022 at 11:36 AM Marc Zyngier <maz at kernel.org> wrote:
>>>>
>>>> On Mon, 21 Mar 2022 15:17:34 +0000,
>>>> Rob Herring <robh at kernel.org> wrote:
>>>>>
>>>>> On Mon, Mar 21, 2022 at 5:49 AM Marc Zyngier <maz at kernel.org> wrote:
>>>>>>
>>>>> For XGene-1, I'd still like to understand what the issue is. Reverting
>>>>> the first fix and fixing 'dma-ranges' should have fixed it. I need a
>>>>> dump of how the IB registers are initialized in both cases. I'm not
>>>>> saying changing 'dma-ranges' in the firmware is going to be required
>>>>> here. There's a couple of other ways we could fix that without a
>>>>> firmware change, but first I need to understand why it broke.
>>>>
>>>> Reverting 6dce5aa59e0b was enough for me, without changing anything
>>>> else.
>>>
>>> Meaning c7a75d07827a didn't matter for you. I'm not sure that it would.
>>>
>>> Can you tell me what 'dma-ranges' contains on your system?
>>
>> Each pcie node (all 5 of them) has:
>>
>> dma-ranges = <0x42000000 0x80 0x00 0x80 0x00 0x00 0x80000000
>>                0x42000000 0x00 0x00 0x00 0x00 0x80 0x00>;
> 
> Hmm, is there anyone other than iommu-dma who actually depends on the 
> resource list being sorted in ascending order of bus address? I recall 
> at the time I pushed for creating the list in sorted order as it was the 
> simplest and most efficient option, but there's no technical reason we 
> couldn't create it in as-found order and defer the sorting until 
> iova_reserve_pci_windows() (at worst that could even operate on a 
> temporary copy if need be). It's just more code, which didn't need to 
> exist without a good reason, but if this is one then exist it certainly 
> may.
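
For reference, decoding those dma-ranges (3 PCI address cells, 2 parent
address cells, 2 size cells - assuming I'm parsing them right):

  <0x42000000 0x80 0x00  0x80 0x00  0x00 0x80000000>
	-> bus 0x80_00000000 -> CPU 0x80_00000000, size 2GB
  <0x42000000 0x00 0x00  0x00 0x00  0x80 0x00>
	-> bus 0x0 -> CPU 0x0, size 512GB

so the DT order is descending by bus address, and the sorted insertion
in devm_of_pci_get_host_bridge_resources() inverts the order in which
the host driver then walks the windows.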

Taking a closer look, the Cadence driver is already re-sorting the list
for its own setup, so iommu-dma can't assume the initial sort is
preserved and needs to do its own anyway. Does the (untested) diff below
end up helping X-Gene also?
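
For reference, the Cadence host driver re-sorts bridge->dma_ranges with
list_sort() so that (IIRC) its inbound BARs get assigned largest-window-first.
Roughly the shape below - paraphrased rather than the exact upstream code,
and the function name here is made up:

static int host_dma_ranges_cmp(void *priv, const struct list_head *a,
			       const struct list_head *b)
{
	struct resource_entry *e1 = container_of(a, struct resource_entry, node);
	struct resource_entry *e2 = container_of(b, struct resource_entry, node);

	/* larger windows first */
	return resource_size(e2->res) > resource_size(e1->res) ? 1 : -1;
}

	...
	list_sort(NULL, &bridge->dma_ranges, host_dma_ranges_cmp);

Point being, once any one consumer reorders the list, everyone else who
cares about ordering has to impose their own, which is what the diff
below does for iommu-dma.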

Robin.

----->8-----
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b22034975301..8ef603c9ca3e 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,6 +20,7 @@
  #include <linux/iommu.h>
  #include <linux/iova.h>
  #include <linux/irq.h>
+#include <linux/list_sort.h>
  #include <linux/mm.h>
  #include <linux/mutex.h>
  #include <linux/pci.h>
@@ -414,6 +415,14 @@ static int cookie_init_hw_msi_region(struct iommu_dma_cookie *cookie,
  	return 0;
  }
  
+static int iommu_dma_ranges_sort(void *priv, const struct list_head *a, const struct list_head *b)
+{
+	struct resource_entry *res_a = list_entry(a, typeof(*res_a), node);
+	struct resource_entry *res_b = list_entry(b, typeof(*res_b), node);
+
+	return res_a->res->start > res_b->res->start;
+}
+
  static int iova_reserve_pci_windows(struct pci_dev *dev,
  		struct iova_domain *iovad)
  {
@@ -432,6 +441,7 @@ static int iova_reserve_pci_windows(struct pci_dev *dev,
  	}
  
  	/* Get reserved DMA windows from host bridge */
+	list_sort(NULL, &bridge->dma_ranges, iommu_dma_ranges_sort);
  	resource_list_for_each_entry(window, &bridge->dma_ranges) {
  		end = window->res->start - window->offset;
  resv_iova:
diff --git a/drivers/pci/of.c b/drivers/pci/of.c
index cb2e8351c2cc..d176b4bc6193 100644
--- a/drivers/pci/of.c
+++ b/drivers/pci/of.c
@@ -393,12 +393,7 @@ static int devm_of_pci_get_host_bridge_resources(struct device *dev,
  			goto failed;
  		}
  
-		/* Keep the resource list sorted */
-		resource_list_for_each_entry(entry, ib_resources)
-			if (entry->res->start > res->start)
-				break;
-
-		pci_add_resource_offset(&entry->node, res,
+		pci_add_resource_offset(ib_resources, res,
  					res->start - range.pci_addr);
  	}
  

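One note on the semantics: list_sort() only cares about the sign of the
comparator, so returning res_a->res->start > res_b->res->start gives the
same ascending bus-address order the of.c insertion used to provide, just
confined to iommu-dma where the gap calculation in
iova_reserve_pci_windows() actually relies on it. The host drivers,
X-Gene included, would then see the windows in plain DT order again,
which may well be all that's needed to put the IB window setup back the
way it was - but that's the bit that needs testing on real hardware.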

