[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
Robin Murphy
robin.murphy at arm.com
Mon Feb 2 07:26:25 PST 2026
On 2026-02-02 3:22 pm, Leon Romanovsky wrote:
> On Mon, Feb 02, 2026 at 03:35:48PM +0100, Christoph Hellwig wrote:
>> On Mon, Feb 02, 2026 at 06:27:38PM +0530, Pradeep P V K wrote:
>>> Fix a NULL pointer dereference that occurs in nvme_pci_prp_iter_next()
>>> when SWIOTLB bounce buffering becomes active at runtime.
>>>
>>> The issue occurs when SWIOTLB activation changes the device's DMA
>>> mapping requirements at runtime, creating a mismatch between
>>> iod->dma_vecs allocation and access logic.
>>>
>>> The problem manifests when:
>>> 1. Device initially operates with dma_skip_sync=true
>>> (coherent DMA assumed)
>>> 2. First SWIOTLB mapping occurs due to DMA address limitations,
>>> memory encryption, or IOMMU bounce buffering requirements
>>> 3. SWIOTLB calls dma_reset_need_sync(), permanently setting
>>> dma_skip_sync=false
>>> 4. Subsequent I/Os now have dma_need_unmap()=true, requiring
>>> iod->dma_vecs
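
Condensing the sequence above into a sketch (simplified pseudo-driver
code; the helpers and the iod layout below are invented stand-ins, not
the actual nvme-pci functions): the hazard is that dma_need_unmap() is
evaluated twice, once to decide whether to allocate iod->dma_vecs and
again to decide whether to write to it, and dma_reset_need_sync() can
flip the answer in between:

	/* Sketch only: invented helpers, not the real driver code. */
	static blk_status_t map_data_sketch(struct device *dev,
					    struct nvme_iod *iod)
	{
		/* Allocation check: dev->dma_skip_sync is still true,
		 * so dma_need_unmap() returns false and iod->dma_vecs
		 * stays NULL. */
		if (dma_need_unmap(dev))
			iod->dma_vecs = alloc_dma_vecs(iod);

		while (map_next_segment(dev, iod)) {
			/*
			 * The first segment that bounces through
			 * SWIOTLB triggers dma_reset_need_sync(),
			 * which clears dev->dma_skip_sync for good.
			 *
			 * Access check: dma_need_unmap() now returns
			 * true, and the NULL iod->dma_vecs below is
			 * dereferenced.
			 */
			if (dma_need_unmap(dev))
				iod->dma_vecs[iod->nr_dma_vecs++] =
					current_dma_vec(iod);
		}
		return BLK_STS_OK;
	}
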
>>
>> I think this patch just papers over the bug.
>
> Agree
>
>> If dma_need_unmap() can't be trusted before the dma_map_* call, we
>> won't have saved the unmap information and the unmap won't work properly.
>>
>> So we'll need to extend the core code to tell if a mapping
>> will set dma_skip_sync=false before doing the mapping.
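
As a rough illustration of that direction (entirely hypothetical: the
helper name and signature below are invented here, this is not an
existing kernel API), such a per-mapping predicate would have to
evaluate the same conditions dma_direct_map_phys() does, which means it
needs the address and size up front:

	/* Hypothetical sketch only; no such helper exists today. */
	static inline bool dma_mapping_may_bounce(struct device *dev,
			phys_addr_t phys, size_t size,
			enum dma_data_direction dir)
	{
		dma_addr_t dma_addr = phys_to_dma(dev, phys);

		return is_swiotlb_force_bounce(dev) ||
		       !dma_capable(dev, dma_addr, size, true) ||
		       dma_kmalloc_needs_bounce(dev, size, dir);
	}
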
>
> There are two paths that lead to SWIOTLB in dma_direct_map_phys().
> The first is is_swiotlb_force_bounce(dev), which dma_need_unmap() can
> easily evaluate. The second is more problematic, as it depends on
> dma_addr and size, neither of which is available in dma_need_unmap():
>
> 	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
> 	    dma_kmalloc_needs_bounce(dev, size, dir)) {
> 		if (is_swiotlb_active(dev))
>
> What about the following change?
>
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 37163eb49f9f..1510b93a8791 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -461,6 +461,8 @@ bool dma_need_unmap(struct device *dev)
>  {
>  	if (!dma_map_direct(dev, get_dma_ops(dev)))
>  		return true;
> +	if (is_swiotlb_force_bounce(dev) || is_swiotlb_active(dev))
> +		return true;

The is_swiotlb_active() check will always pass if a default SWIOTLB
buffer exists at all, and thus pretty much defeats the point of
whatever optimisation the caller is trying to make.
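
A narrower variant in the same spirit (a sketch only, untested): keep
the per-device is_swiotlb_force_bounce() test, which as Leon notes
dma_need_unmap() can easily evaluate and which doesn't depend on a
global pool merely existing, and leave the address/size-dependent
bounce cases to whatever per-mapping interface comes out of Christoph's
suggestion:

	/* Sketch, untested: only the per-device force-bounce case is
	 * knowable here; the dma_capable()/dma_kmalloc_needs_bounce()
	 * cases still need per-mapping information. */
	if (!dma_map_direct(dev, get_dma_ops(dev)))
		return true;
	if (is_swiotlb_force_bounce(dev))
		return true;
	if (!dev->dma_skip_sync)
		return true;
	return IS_ENABLED(CONFIG_DMA_API_DEBUG);
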
Thanks,
Robin.
>  	if (!dev->dma_skip_sync)
>  		return true;
>  	return IS_ENABLED(CONFIG_DMA_API_DEBUG);
>