WARNING: drivers/iommu/io-pgtable-arm.c:639

Keith Busch kbusch at kernel.org
Tue Dec 9 20:05:41 PST 2025


On Wed, Dec 10, 2025 at 02:30:50AM +0000, Chaitanya Kulkarni wrote:
> @@ -126,17 +126,26 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
>   		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
>   				vec->len, dir, attrs);
>   		if (error)
> -			break;
> +			goto out_unlink;
>   		mapped += vec->len;
>   	} while (blk_map_iter_next(req, &iter->iter, vec));
>   
>   	error = dma_iova_sync(dma_dev, state, 0, mapped);
> -	if (error) {
> -		iter->status = errno_to_blk_status(error);
> -		return false;
> -	}
> +	if (error)
> +		goto out_unlink;
>   
>   	return true;
> +
> +out_unlink:
> +	/*
> +	 * Unlink any partial mapping to avoid unmap mismatch later.
> +	 * If we mapped some bytes but not all, we must clean up now
> +	 * to prevent attempting to unmap more than was actually mapped.
> +	 */
> +	if (mapped)
> +		dma_iova_unlink(dma_dev, state, 0, mapped, dir, attrs);
> +	iter->status = errno_to_blk_status(error);
> +	return false;
>   }

It does look like a bug to continue on when dma_iova_link() fails, since
the caller then believes the entire mapping succeeded, but I think you
also need to call dma_iova_free() to undo the earlier
dma_iova_try_alloc(); otherwise IOVA space is leaked.
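Something along these lines for the error path, roughly (untested, and
assuming the same locals as in the patch above):

	out_unlink:
		/* Tear down any partially linked range first. */
		if (mapped)
			dma_iova_unlink(dma_dev, state, 0, mapped, dir, attrs);
		/* Then release the IOVA space from dma_iova_try_alloc(). */
		dma_iova_free(dma_dev, state);
		iter->status = errno_to_blk_status(error);
		return false;

The ordering matters: dma_iova_unlink() has to run before
dma_iova_free(), since unlinking operates on the still-allocated IOVA
range.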

I'm a bit doubtful this error condition was actually hit, though: this
sequence is largely the same as it was in v6.18, before the regression.
The only difference since then should be the handling of P2P DMA across
a host bridge, which I don't think applies to the reported bug since
that's a pretty unusual thing to do.
