[PATCH net] net: ti: icssg-prueth: Fix buffer allocation for ICSSG

Simon Horman horms at kernel.org
Tue Jul 8 11:36:20 PDT 2025


On Tue, Jul 08, 2025 at 04:05:16PM +0530, Himanshu Mittal wrote:
> Fixes overlapping buffer allocation for the ICSSG peripheral
> used for storing packets to be received/transmitted.
> There are 3 buffers:
> 1. Buffer for Locally Injected Packets
> 2. Buffer for Forwarding Packets
> 3. Buffer for Host Egress Packets
> 
> In the existing allocation, the buffers for 2. and 3. overlap,
> causing packet corruption.

Hi Himanshu,

I think it would be useful to describe the old layout, or otherwise
how the overlap occurs, and contrast that with the new layout.

There is a minimal ASCII diagram in prueth_emac_buffer_setup().
Perhaps expanding on that would be useful.
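
For example, something along these lines (illustrative only; the
region names and ordering here are inferred from the constants used
in this patch, not from the firmware documentation):

	/* Per-slice buffer layout (not to scale):
	 *
	 * +---------------------------+
	 * | Forwarding buffer pools   | PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE
	 * +---------------------------+
	 * | Local Injection pools     | PRUETH_NUM_LI_BUF_POOLS_PER_SLICE,
	 * |                           | only the first few per port are
	 * |                           | sized non-zero
	 * +---------------------------+
	 * | Host Egress buffers       |
	 * +---------------------------+
	 */

That would let a reader see at a glance which region follows which,
and where the old layout allowed two regions to overlap.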

> 
> Packet corruption observations:
> During TCP iperf testing, due to the overlapping buffers the received ACK
> packet overwrites the packet to be transmitted. So, we see packets on the
> wire with the ACK packet's content inside the content of the next TCP
> packet from the sender device.
> 
> Fixes: abd5576b9c57 ("net: ti: icssg-prueth: Add support for ICSSG switch firmware")
> Signed-off-by: Himanshu Mittal <h-mittal1 at ti.com>

...

> diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.c b/drivers/net/ethernet/ti/icssg/icssg_config.c

...

> @@ -297,43 +301,60 @@ static int prueth_fw_offload_buffer_setup(struct prueth_emac *emac)

...

> +	/* Configure buffer pools for Local Injection buffers
> +	 *  - used by firmware to store packets received from host core
> +	 *  - 16 total pools per slice
> +	 */
> +	for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) {
> +		/* The driver only uses first 4 queues per PRU so only initialize buffer for them */

> +		if ((i % PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE)
> +			 < PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) {
> +			writel(addr, &bpool_cfg[i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE].addr);
> +			writel(PRUETH_SW_LI_BUF_POOL_SIZE,
> +			       &bpool_cfg[i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE].len);
> +			addr += PRUETH_SW_LI_BUF_POOL_SIZE;
>  		} else {
> -			writel(0, &bpool_cfg[i].addr);
> -			writel(0, &bpool_cfg[i].len);
> +			writel(0, &bpool_cfg[i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE].addr);
> +			writel(0, &bpool_cfg[i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE].len);
>  		}
>  	}

It is still preferred for Networking code to be 80 columns wide or less
unless it affects readability. checkpatch.pl has an option to tell you
about this, and I think it would be good to apply that guideline
throughout this patch.
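
Concretely, from the top of the kernel tree (substitute your own
patch file name):

	./scripts/checkpatch.pl --strict --max-line-length=80 your.patch

Note that checkpatch defaults to 100 columns these days, so the
explicit --max-line-length=80 is needed to get warnings at the
Networking limit.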

E.g. for the for loop above, something like this (completely untested):

	for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) {
		int cfg_idx = i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE;

		/* The driver only uses the first 4 queues per PRU, so
		 * only initialize buffers for them.
		 */
		if ((i % PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE)
			 < PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) {
			writel(addr, &bpool_cfg[cfg_idx].addr);
			writel(PRUETH_SW_LI_BUF_POOL_SIZE,
			       &bpool_cfg[cfg_idx].len);
			addr += PRUETH_SW_LI_BUF_POOL_SIZE;
		} else {
			writel(0, &bpool_cfg[cfg_idx].addr);
			writel(0, &bpool_cfg[cfg_idx].len);
		}
	}
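
Hoisting the pool index into cfg_idx keeps the lines under 80 columns
and avoids repeating the offset arithmetic on each access.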

...

> @@ -347,13 +368,13 @@ static int prueth_emac_buffer_setup(struct prueth_emac *emac)

It's probably out of scope for this patch, being a bug fix.
But it seems to me that there is significant commonality
between prueth_fw_offload_buffer_setup() and prueth_emac_buffer_setup().
It would be nice to consolidate the implementation somehow,
say in a follow-up for net-next.
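
Completely untested, but as a sketch of the kind of helper I have in
mind (the name and parameters below are made up, and the type is
taken from the existing bpool_cfg declaration, if I'm reading it
right; called once per contiguous run of pools, the modulo test above
collapses to a plain comparison):

	/* Configure 'num' consecutive pools starting at 'first', sizing
	 * the first 'used' of them and zeroing the rest. Returns the
	 * address just past the last sized pool.
	 */
	static u32 prueth_setup_bpools(struct icssg_buffer_pool_cfg __iomem *bpool_cfg,
				       int first, int num, int used,
				       u32 addr, u32 size)
	{
		int i;

		for (i = 0; i < num; i++) {
			if (i < used) {
				writel(addr, &bpool_cfg[first + i].addr);
				writel(size, &bpool_cfg[first + i].len);
				addr += size;
			} else {
				writel(0, &bpool_cfg[first + i].addr);
				writel(0, &bpool_cfg[first + i].len);
			}
		}

		return addr;
	}

Both setup functions could then be expressed as a handful of calls to
such a helper.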

...

-- 
pw-bot: changes-requested


