Allocating more RX descriptors than can fit in their related rings

Remi Pommarel repk at triplefau.lt
Wed Sep 4 11:01:50 PDT 2024


Hello,

As far as I understand, a bunch (ATH12K_RX_DESC_COUNT) of RX descriptors
get allocated, then CMEM is configured for those descriptors' cookie
conversion, and the descriptors are kept available in the
dp->rx_desc_free_list pool.
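
To make sure we are talking about the same mechanism, here is a rough
sketch of that setup as I picture it (the structure and function names
below are made up for illustration, not the actual ath12k code):

 /* Illustrative only: descriptors are allocated up front, given a
  * cookie (which CMEM would map back to the descriptor), and parked
  * on a free list until a ring claims them. */
 #include <stdlib.h>

 struct rx_desc {
         struct rx_desc *next;   /* free-list linkage */
         unsigned int cookie;    /* used for cookie conversion */
 };

 struct rx_desc_pool {
         struct rx_desc *descs;      /* backing array */
         struct rx_desc *free_list;  /* descriptors not on any ring */
 };

 static int rx_desc_pool_init(struct rx_desc_pool *pool, unsigned int count)
 {
         unsigned int i;

         pool->descs = calloc(count, sizeof(*pool->descs));
         if (!pool->descs)
                 return -1;

         pool->free_list = NULL;
         for (i = 0; i < count; i++) {
                 pool->descs[i].cookie = i;
                 pool->descs[i].next = pool->free_list;
                 pool->free_list = &pool->descs[i];
         }

         return 0;
 }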

Those descriptors seem to be used to feed two different rings: the
rx_refill_buf_ring ring via ath12k_dp_rx_bufs_replenish() and the
reo_reinject_ring one via ath12k_dp_rx_h_defrag_reo_reinject(). While
the former is kept fully used if possible, the latter is only used on
demand (i.e. for reinjection of a defragmented MPDU).
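
Reusing the toy structures from the sketch above (again, made-up names
rather than the real helpers), the difference between the two consumers
as I see it is that the refill ring grabs as many free descriptors as it
has room for, while the reinject path would claim a single one only when
a defragmented MPDU is pushed back:

 /* Illustrative only: top up the refill ring from the free list. */
 static unsigned int rx_bufs_replenish(struct rx_desc_pool *pool,
                                       unsigned int ring_size,
                                       unsigned int in_use)
 {
         unsigned int pushed = 0;

         while (in_use + pushed < ring_size && pool->free_list) {
                 struct rx_desc *desc = pool->free_list;

                 pool->free_list = desc->next;   /* pop from free list */
                 pushed++;                       /* desc->cookie goes to HW */
         }

         return pushed;
 }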

It seems that the number of RX descriptors, ATH12K_RX_DESC_COUNT (12288),
is higher than what those two rings can fit (DP_REO_REINJECT_RING_SIZE +
DP_RXDMA_BUF_RING_SIZE = 32 + 4096 = 4128).

My question is: why are we allocating that many (12288) buffers if only
a small part (4128) can be used in the worst case?
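
Spelling the numbers out (values as I read them in dp.h, please correct
me if my tree differs):

 #include <stdio.h>

 int main(void)
 {
         unsigned int rx_desc_count = 12288;   /* ATH12K_RX_DESC_COUNT */
         unsigned int reinject_sz = 32;        /* DP_REO_REINJECT_RING_SIZE */
         unsigned int rxdma_buf_sz = 4096;     /* DP_RXDMA_BUF_RING_SIZE */

         printf("rings can hold at most: %u\n", reinject_sz + rxdma_buf_sz);
         printf("left unused at best:    %u\n",
                rx_desc_count - (reinject_sz + rxdma_buf_sz));  /* 8160 */
         return 0;
 }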

Wouldn't it be OK to only allocate just enough RX descriptors to fill
both rings (with proper 512 alignment to ease CMEM configuration), as
below?

 #define ATH12K_RX_DESC_COUNT   ALIGN(DP_REO_REINJECT_RING_SIZE + \
                                      DP_RXDMA_BUF_RING_SIZE, \
                                      ATH12K_MAX_SPT_ENTRIES)
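
Assuming the usual kernel ALIGN() round-up-to-a-multiple semantics and
ATH12K_MAX_SPT_ENTRIES being 512, that would come out to 4608
descriptors, e.g.:

 #include <stdio.h>

 #define ALIGN(x, a)                   (((x) + (a) - 1) & ~((a) - 1))
 #define DP_REO_REINJECT_RING_SIZE     32
 #define DP_RXDMA_BUF_RING_SIZE        4096
 #define ATH12K_MAX_SPT_ENTRIES        512

 int main(void)
 {
         printf("%d\n", ALIGN(DP_REO_REINJECT_RING_SIZE +
                              DP_RXDMA_BUF_RING_SIZE,
                              ATH12K_MAX_SPT_ENTRIES));  /* 4608 */
         return 0;
 }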

Or am I missing something, and is this going to impact performance?

Thanks

-- 
Remi


