[PATCH 1/2] iommu/debug: Add IOMMU page table dump debug facility

Qinxin Xia xiaqinxin at huawei.com
Tue Sep 9 19:58:09 PDT 2025



On 2025/9/9 21:06:33, Will Deacon <will at kernel.org> wrote:
> On Thu, Aug 14, 2025 at 05:30:04PM +0800, Qinxin Xia wrote:
>> +/**
>> + * iommu_iova_info_dump - dump the allocated IOVA ranges of a domain
>> + * @s: seq_file used to generate serialized output
>> + * @domain: IOMMU domain in question
>> + */
>> +static int iommu_iova_info_dump(struct seq_file *s, struct iommu_domain *domain)
>> +{
>> +	struct iova_domain *iovad;
>> +	unsigned long long pfn;
>> +	unsigned long i_shift;
>> +	struct rb_node *node;
>> +	unsigned long flags;
>> +	size_t prot_size;
>> +
>> +	iovad = iommu_domain_to_iovad(domain);
>> +	if (!iovad)
>> +		return -ENOMEM;
>> +
>> +	i_shift = iova_shift(iovad);
>> +
>> +	/* Take the lock so that no other thread is manipulating the rbtree */
>> +	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
>> +	assert_spin_locked(&iovad->iova_rbtree_lock);
>> +
>> +	for (node = rb_first(&iovad->rbroot); node; node = rb_next(node)) {
>> +		struct iova *iova = rb_entry(node, struct iova, node);
>> +
>> +		if (iova->pfn_hi < iova->pfn_lo)
>> +			continue;
>> +
>> +		for (pfn = iova->pfn_lo; pfn <= iova->pfn_hi; ) {
>> +			prot_size = domain->ops->dump_iova_prot(s, domain, pfn << i_shift);
>> +			pfn = ((pfn << i_shift) + prot_size) >> i_shift;
>> +		}
>> +	}
>> +
>> +	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
> 
> Why is the IOVA rbtree lock sufficient for serialising the page-table
> accesses made by ->dump_iova_prot()? I don't see anything here that
> prevents the walker walking into page-table pages that are e.g. being
> freed or manipulated concurrently.
> 
> Will
> 
  Thank you for catching this critical race condition. I will fix this in
  the next version. Also, Jason suggested putting io_ptdump on top of
  iommu pt. What do you think?
