[PATCH V2 2/4] dmaengine: xilinx_dma: Fix irq handler and start transfer path for AXI DMA
Pandey, Radhey Shyam
radhey.shyam.pandey at amd.com
Tue Jul 15 04:05:12 PDT 2025
> -----Original Message-----
> From: Suraj Gupta <suraj.gupta2 at amd.com>
> Sent: Thursday, July 10, 2025 3:42 PM
> To: andrew+netdev at lunn.ch; davem at davemloft.net; kuba at kernel.org;
> pabeni at redhat.com; Simek, Michal <michal.simek at amd.com>; vkoul at kernel.org;
> Pandey, Radhey Shyam <radhey.shyam.pandey at amd.com>
> Cc: netdev at vger.kernel.org; linux-arm-kernel at lists.infradead.org; linux-
> kernel at vger.kernel.org; dmaengine at vger.kernel.org; Katakam, Harini
> <harini.katakam at amd.com>
> Subject: [PATCH V2 2/4] dmaengine: xilinx_dma: Fix irq handler and start transfer
> path for AXI DMA
Mention a short summary of what you are fixing in the commit message, e.g.
"allow queuing multiple DMA transactions while the channel is running" or
something along those lines.
>
> AXI DMA driver incorrectly assumes complete transfer completion upon IRQ
> reception, particularly problematic when IRQ coalescing is active.
> Updating the tail pointer dynamically fixes it.
> Remove existing idle state validation in the beginning of
> xilinx_dma_start_transfer() as it blocks valid transfer initiation on busy channels with
> queued descriptors.
> Additionally, refactor xilinx_dma_start_transfer() to consolidate coalesce and delay
> configurations while conditionally starting channels only when idle.
These two refactors should go in a separate patch, since they optimize the
existing flow rather than fix the bug.
>
> Signed-off-by: Suraj Gupta <suraj.gupta2 at amd.com>
> Fixes: c0bba3a99f07 ("dmaengine: vdma: Add Support for Xilinx AXI Direct
> Memory Access Engine")
This is not a real bug but the implementation behavior from the start:
queuing further transactions on a running DMA channel was simply not
allowed. The same is the case for the other DMA variants.
> ---
> drivers/dma/xilinx/xilinx_dma.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c index
> a34d8f0ceed8..187749b7b8a6 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1548,9 +1548,6 @@ static void xilinx_dma_start_transfer(struct
> xilinx_dma_chan *chan)
> if (list_empty(&chan->pending_list))
> return;
>
> - if (!chan->idle)
> - return;
> -
> head_desc = list_first_entry(&chan->pending_list,
> struct xilinx_dma_tx_descriptor, node);
> tail_desc = list_last_entry(&chan->pending_list,
> @@ -1558,23 +1555,24 @@ static void xilinx_dma_start_transfer(struct
> xilinx_dma_chan *chan)
> tail_segment = list_last_entry(&tail_desc->segments,
> struct xilinx_axidma_tx_segment, node);
>
> + if (chan->has_sg && list_empty(&chan->active_list))
Could you also use chan->idle here as the equivalent of an empty active
list? But it is fine as is, too.
> + xilinx_write(chan, XILINX_DMA_REG_CURDESC,
> + head_desc->async_tx.phys);
> +
> reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
>
> if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
> reg &= ~XILINX_DMA_CR_COALESCE_MAX;
> reg |= chan->desc_pendingcount <<
> XILINX_DMA_CR_COALESCE_SHIFT;
> - dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, reg);
This seems to be an unrelated change. Please consider moving the
optimization to a separate patch.
> }
>
> - if (chan->has_sg)
> - xilinx_write(chan, XILINX_DMA_REG_CURDESC,
> - head_desc->async_tx.phys);
> reg &= ~XILINX_DMA_CR_DELAY_MAX;
> reg |= chan->irq_delay << XILINX_DMA_CR_DELAY_SHIFT;
> dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, reg);
>
> - xilinx_dma_start(chan);
> + if (chan->idle)
> + xilinx_dma_start(chan);
Same as above.
>
> if (chan->err)
> return;
> @@ -1914,8 +1912,10 @@ static irqreturn_t xilinx_dma_irq_handler(int irq, void
> *data)
> XILINX_DMA_DMASR_DLY_CNT_IRQ)) {
> spin_lock(&chan->lock);
> xilinx_dma_complete_descriptor(chan);
> - chan->idle = true;
> - chan->start_transfer(chan);
> + if (list_empty(&chan->active_list)) {
> + chan->idle = true;
> + chan->start_transfer(chan);
> + }
> spin_unlock(&chan->lock);
> }
>
> --
> 2.25.1