[PATCH v7 net-next 2/2] net: airoha: Introduce ethernet support for EN7581 SoC

Lorenzo Bianconi lorenzo at kernel.org
Fri Jul 12 06:47:38 PDT 2024


> On Wed, 10 Jul 2024 10:47:41 +0200 Lorenzo Bianconi wrote:
> > Add the airoha_eth driver in order to introduce ethernet support for the
> > Airoha EN7581 SoC available on the EN7581 development board (en7581-evb).
> > The en7581-evb networking architecture is composed of airoha_eth as the
> > mac controller (cpu port) and an MT7530 DSA-based switch.
> > The EN7581 mac controller is mainly composed of the Frame Engine (FE) and
> > QoS-DMA (QDMA) modules. FE is used for traffic offloading (just basic
> > functionality is supported for now) while QDMA is used for DMA operations
> > and QoS functionality between the mac layer and the DSA switch (hw QoS is
> > not available yet and will be added in the future).
> > Currently only hw lan features are available; hw wan will be added in
> > subsequent patches.
> 
> It seems a bit unusual for DSA to have multiple ports, doesn't it?
> Can you explain a little more in the commit message how this driver
> differs from normal DSA?

The Airoha EN7581 SoC architecture is similar to the mtk_eth_soc one (e.g. MT7988A).
The Frame Engine (FE) module has multiple GDM ports that are connected to
different blocks. The current airoha_eth driver supports just GDM1, which is
connected to an MT7530 DSA switch (I have not posted the tiny mt7530 patch for
this yet). In the future we will also support GDM{2,3,4}, which will connect to
different PHY modules (e.g. a 2.5Gbps PHY).
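
To give a rough picture of how this maps onto the driver, the per-port structure
looks more or less like this (just a sketch to illustrate the topology: the 'dev'
member and the comments are mine, and the exact layout in the patch may differ):

struct airoha_gdm_port {
	struct airoha_eth	*eth;	/* shared FE + QDMA backend */
	struct net_device	*dev;	/* one netdev per GDM port */
	int			id;	/* 1 = GDM1 (cpu port -> MT7530 switch),
					 * 2..4 = GDM{2,3,4} in the future
					 */
	struct airoha_hw_stats	stats;
};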

> 
> > +static void airoha_dev_get_stats64(struct net_device *dev,
> > +				   struct rtnl_link_stats64 *storage)
> > +{
> > +	struct airoha_gdm_port *port = netdev_priv(dev);
> > +
> > +	mutex_lock(&port->stats.mutex);
> 
> can't take sleeping locks here :( Gotta do periodic updates from 
> a workqueue or spinlock. there are callers under RCU (annoyingly)

ack, I will fix it in v8
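
e.g. something along these lines, converting the stats mutex into a spinlock
(just a rough sketch: it assumes the mutex in the stats struct is replaced with
a spinlock, here called 'lock', and that airoha_update_hw_stats() only does
non-sleeping register reads):

static void airoha_dev_get_stats64(struct net_device *dev,
				   struct rtnl_link_stats64 *storage)
{
	struct airoha_gdm_port *port = netdev_priv(dev);

	/* spinlock instead of mutex so this can run in atomic context */
	spin_lock(&port->stats.lock);
	airoha_update_hw_stats(port);
	storage->rx_packets = port->stats.rx_ok_pkts;
	storage->tx_packets = port->stats.tx_ok_pkts;
	storage->rx_bytes = port->stats.rx_ok_bytes;
	storage->tx_bytes = port->stats.tx_ok_bytes;
	/* ... and so on for the remaining counters ... */
	spin_unlock(&port->stats.lock);
}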

> 
> > +	airoha_update_hw_stats(port);
> > +	storage->rx_packets = port->stats.rx_ok_pkts;
> > +	storage->tx_packets = port->stats.tx_ok_pkts;
> > +	storage->rx_bytes = port->stats.rx_ok_bytes;
> > +	storage->tx_bytes = port->stats.tx_ok_bytes;
> > +	storage->multicast = port->stats.rx_multicast;
> > +	storage->rx_errors = port->stats.rx_errors;
> > +	storage->rx_dropped = port->stats.rx_drops;
> > +	storage->tx_dropped = port->stats.tx_drops;
> > +	storage->rx_crc_errors = port->stats.rx_crc_error;
> > +	storage->rx_over_errors = port->stats.rx_over_errors;
> > +
> > +	mutex_unlock(&port->stats.mutex);
> > +}
> > +
> > +static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb,
> > +				   struct net_device *dev)
> > +{
> > +	struct skb_shared_info *sinfo = skb_shinfo(skb);
> > +	struct airoha_gdm_port *port = netdev_priv(dev);
> > +	int i, qid = skb_get_queue_mapping(skb);
> > +	struct airoha_eth *eth = port->eth;
> > +	u32 nr_frags, msg0 = 0, msg1;
> > +	u32 len = skb_headlen(skb);
> > +	struct netdev_queue *txq;
> > +	struct airoha_queue *q;
> > +	void *data = skb->data;
> > +	u16 index;
> > +	u8 fport;
> > +
> > +	if (skb->ip_summed == CHECKSUM_PARTIAL)
> > +		msg0 |= FIELD_PREP(QDMA_ETH_TXMSG_TCO_MASK, 1) |
> > +			FIELD_PREP(QDMA_ETH_TXMSG_UCO_MASK, 1) |
> > +			FIELD_PREP(QDMA_ETH_TXMSG_ICO_MASK, 1);
> > +
> > +	/* TSO: fill MSS info in tcp checksum field */
> > +	if (skb_is_gso(skb)) {
> > +		if (skb_cow_head(skb, 0))
> > +			goto error;
> > +
> > +		if (sinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
> > +			__be16 csum = cpu_to_be16(sinfo->gso_size);
> > +
> > +			tcp_hdr(skb)->check = (__force __sum16)csum;
> > +			msg0 |= FIELD_PREP(QDMA_ETH_TXMSG_TSO_MASK, 1);
> > +		}
> > +	}
> > +
> > +	fport = port->id == 4 ? FE_PSE_PORT_GDM4 : port->id;
> > +	msg1 = FIELD_PREP(QDMA_ETH_TXMSG_FPORT_MASK, fport) |
> > +	       FIELD_PREP(QDMA_ETH_TXMSG_METER_MASK, 0x7f);
> > +
> > +	if (WARN_ON_ONCE(qid >= ARRAY_SIZE(eth->q_tx)))
> > +		qid = 0;
> 
> Hm, how? Stack should protect against that.

ack, I will fix it in v8

> 
> > +	q = &eth->q_tx[qid];
> > +	if (WARN_ON_ONCE(!q->ndesc))
> > +		goto error;
> > +
> > +	spin_lock_bh(&q->lock);
> > +
> > +	nr_frags = 1 + sinfo->nr_frags;
> > +	if (q->queued + nr_frags > q->ndesc) {
> > +		/* not enough space in the queue */
> > +		spin_unlock_bh(&q->lock);
> > +		return NETDEV_TX_BUSY;
> 
> no need to stop the queue?

reviewing this chunk, I guess we can get rid of the NETDEV_TX_BUSY path, since we
already stop the txq at the end of airoha_dev_xmit() if we do not have enough
space for the next packet:

	if (q->ndesc - q->queued < q->free_thr)
		netif_tx_stop_queue(txq);
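
If we prefer to keep the check as a safety net instead, the usual pattern would
be something like this (rough sketch, moving the netdev_get_tx_queue() lookup
before the ring-space check):

	txq = netdev_get_tx_queue(dev, qid);
	if (q->queued + nr_frags > q->ndesc) {
		/* stop the queue before reporting BUSY, so the stack does
		 * not keep calling ndo_start_xmit on a full ring
		 */
		netif_tx_stop_queue(txq);
		spin_unlock_bh(&q->lock);
		return NETDEV_TX_BUSY;
	}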

> 
> > +	}
> > +
> > +	index = q->head;
> > +	for (i = 0; i < nr_frags; i++) {
> > +		struct airoha_qdma_desc *desc = &q->desc[index];
> > +		struct airoha_queue_entry *e = &q->entry[index];
> > +		skb_frag_t *frag = &sinfo->frags[i];
> > +		dma_addr_t addr;
> > +		u32 val;
> > +
> > +		addr = dma_map_single(dev->dev.parent, data, len,
> > +				      DMA_TO_DEVICE);
> > +		if (unlikely(dma_mapping_error(dev->dev.parent, addr)))
> > +			goto error_unmap;
> > +
> > +		index = (index + 1) % q->ndesc;
> > +
> > +		val = FIELD_PREP(QDMA_DESC_LEN_MASK, len);
> > +		if (i < nr_frags - 1)
> > +			val |= FIELD_PREP(QDMA_DESC_MORE_MASK, 1);
> > +		WRITE_ONCE(desc->ctrl, cpu_to_le32(val));
> > +		WRITE_ONCE(desc->addr, cpu_to_le32(addr));
> > +		val = FIELD_PREP(QDMA_DESC_NEXT_ID_MASK, index);
> > +		WRITE_ONCE(desc->data, cpu_to_le32(val));
> > +		WRITE_ONCE(desc->msg0, cpu_to_le32(msg0));
> > +		WRITE_ONCE(desc->msg1, cpu_to_le32(msg1));
> > +		WRITE_ONCE(desc->msg2, cpu_to_le32(0xffff));
> > +
> > +		e->skb = i ? NULL : skb;
> > +		e->dma_addr = addr;
> > +		e->dma_len = len;
> > +
> > +		airoha_qdma_rmw(eth, REG_TX_CPU_IDX(qid), TX_RING_CPU_IDX_MASK,
> > +				FIELD_PREP(TX_RING_CPU_IDX_MASK, index));
> > +
> > +		data = skb_frag_address(frag);
> > +		len = skb_frag_size(frag);
> > +	}
> > +
> > +	q->head = index;
> > +	q->queued += i;
> > +
> > +	txq = netdev_get_tx_queue(dev, qid);
> > +	netdev_tx_sent_queue(txq, skb->len);
> > +	skb_tx_timestamp(skb);
> > +
> > +	if (q->ndesc - q->queued < q->free_thr)
> > +		netif_tx_stop_queue(txq);
> > +
> > +	spin_unlock_bh(&q->lock);
> > +
> > +	return NETDEV_TX_OK;
> > +
> > +error_unmap:
> > +	for (i--; i >= 0; i--)
> > +		dma_unmap_single(dev->dev.parent, q->entry[i].dma_addr,
> > +				 q->entry[i].dma_len, DMA_TO_DEVICE);
> > +
> > +	spin_unlock_bh(&q->lock);
> > +error:
> > +	dev_kfree_skb_any(skb);
> > +	dev->stats.tx_dropped++;
> > +
> > +	return NETDEV_TX_OK;
> > +}
> > +
> > +static const char * const airoha_ethtool_stats_name[] = {
> > +	"tx_eth_bc_cnt",
> > +	"tx_eth_mc_cnt",
> 
> struct ethtool_eth_mac_stats
> 
> > +	"tx_eth_lt64_cnt",
> > +	"tx_eth_eq64_cnt",
> > +	"tx_eth_65_127_cnt",
> > +	"tx_eth_128_255_cnt",
> > +	"tx_eth_256_511_cnt",
> > +	"tx_eth_512_1023_cnt",
> > +	"tx_eth_1024_1518_cnt",
> > +	"tx_eth_gt1518_cnt",
> 
> struct ethtool_rmon_stats 
> 
> > +	"rx_eth_bc_cnt",
> > +	"rx_eth_frag_cnt",
> > +	"rx_eth_jabber_cnt",
> 
> those are also covered.. somewhere..

ack, I guess we can use the .get_eth_mac_stats() and .get_rmon_stats() callbacks
and get rid of the .get_ethtool_stats() one, since it would otherwise just
duplicate those stats.
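
Something along these lines (rough sketch: the callback name and the
port->stats.{tx_multicast,rx_broadcast} counters below are placeholders, only
tx_broadcast is taken from the v7 patch; .get_rmon_stats() would follow the
same shape):

static void
airoha_ethtool_get_mac_stats(struct net_device *dev,
			     struct ethtool_eth_mac_stats *stats)
{
	struct airoha_gdm_port *port = netdev_priv(dev);

	/* same stats locking as in .ndo_get_stats64 applies here */
	airoha_update_hw_stats(port);
	stats->MulticastFramesXmittedOK = port->stats.tx_multicast;
	stats->BroadcastFramesXmittedOK = port->stats.tx_broadcast;
	stats->BroadcastFramesReceivedOK = port->stats.rx_broadcast;
}

and then hook it up via .get_eth_mac_stats in airoha_ethtool_ops.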

> 
> > +	"rx_eth_fc_drops",
> > +	"rx_eth_rc_drops",
> > +	"rx_eth_lt64_cnt",
> > +	"rx_eth_eq64_cnt",
> > +	"rx_eth_65_127_cnt",
> > +	"rx_eth_128_255_cnt",
> > +	"rx_eth_256_511_cnt",
> > +	"rx_eth_512_1023_cnt",
> > +	"rx_eth_1024_1518_cnt",
> > +	"rx_eth_gt1518_cnt",
> > +};
> > +
> > +static void airoha_ethtool_get_drvinfo(struct net_device *dev,
> > +				       struct ethtool_drvinfo *info)
> > +{
> > +	struct airoha_gdm_port *port = netdev_priv(dev);
> > +	struct airoha_eth *eth = port->eth;
> > +
> > +	strscpy(info->driver, eth->dev->driver->name, sizeof(info->driver));
> > +	strscpy(info->bus_info, dev_name(eth->dev), sizeof(info->bus_info));
> > +	info->n_stats = ARRAY_SIZE(airoha_ethtool_stats_name) +
> > +			page_pool_ethtool_stats_get_count();
> > +}
> > +
> > +static void airoha_ethtool_get_strings(struct net_device *dev, u32 sset,
> > +				       u8 *data)
> > +{
> > +	int i;
> > +
> > +	if (sset != ETH_SS_STATS)
> > +		return;
> > +
> > +	for (i = 0; i < ARRAY_SIZE(airoha_ethtool_stats_name); i++)
> > +		ethtool_puts(&data, airoha_ethtool_stats_name[i]);
> > +
> > +	page_pool_ethtool_stats_get_strings(data);
> > +}
> > +
> > +static int airoha_ethtool_get_sset_count(struct net_device *dev, int sset)
> > +{
> > +	if (sset != ETH_SS_STATS)
> > +		return -EOPNOTSUPP;
> > +
> > +	return ARRAY_SIZE(airoha_ethtool_stats_name) +
> > +	       page_pool_ethtool_stats_get_count();
> > +}
> > +
> > +static void airoha_ethtool_get_stats(struct net_device *dev,
> > +				     struct ethtool_stats *stats, u64 *data)
> > +{
> > +	int off = offsetof(struct airoha_hw_stats, tx_broadcast) / sizeof(u64);
> > +	struct airoha_gdm_port *port = netdev_priv(dev);
> > +	u64 *hw_stats = (u64 *)&port->stats + off;
> > +	struct page_pool_stats pp_stats = {};
> > +	struct airoha_eth *eth = port->eth;
> > +	int i;
> > +
> > +	BUILD_BUG_ON(ARRAY_SIZE(airoha_ethtool_stats_name) + off !=
> > +		     sizeof(struct airoha_hw_stats) / sizeof(u64));
> > +
> > +	mutex_lock(&port->stats.mutex);
> > +
> > +	airoha_update_hw_stats(port);
> > +	for (i = 0; i < ARRAY_SIZE(airoha_ethtool_stats_name); i++)
> > +		*data++ = hw_stats[i];
> > +
> > +	for (i = 0; i < ARRAY_SIZE(eth->q_rx); i++) {
> > +		if (!eth->q_rx[i].ndesc)
> > +			continue;
> > +
> > +		page_pool_get_stats(eth->q_rx[i].page_pool, &pp_stats);
> > +	}
> > +	page_pool_ethtool_stats_get(data, &pp_stats);
> 
> We can't use the netlink stats because of the shared DMA / shared
> device? mlxsw had a similar problem recently:
> 
> https://lore.kernel.org/all/20240625120807.1165581-1-amcohen@nvidia.com/
> 
> Can we list the stats without a netdev or add a new netlink attr
> to describe such devices? BTW setting pp->netdev to an unregistered
> device is probably a bad idea, we should add a WARN_ON() for that
> if we don't have one 😮️

ack, we have the same architecture here: rx queues/page_pools are shared between
net_devices. So far I have used the page_pool stats just for debugging, so I will
drop them for the moment, until we have a well-defined solution for this kind of
architecture.

> 
> > +	for_each_child_of_node(pdev->dev.of_node, np) {
> > +		if (!of_device_is_compatible(np, "airoha,eth-mac"))
> > +			continue;
> > +
> > +		if (!of_device_is_available(np))
> > +			continue;
> > +
> > +		err = airoha_alloc_gdm_port(eth, np);
> > +		if (err) {
> > +			of_node_put(np);
> > +			goto error;
> > +		}
> > +	}
> > +
> > +	airoha_qdma_start_napi(eth);
> 
> Are you sure starting the NAPI after registration is safe?
> Nothing will go wrong if interface gets opened before
> airoha_qdma_start_napi() gets to run?
> 
> Also you don't seem to call napi_disable(), NAPI can reschedule itself,
> and netif_napi_del() doesn't wait for quiescence.

ack, I will fix both issues in v8: start NAPI before the netdevs are registered
and properly disable/delete NAPI on teardown.
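
For the teardown side I am thinking of something like this (rough sketch: it
assumes each rx queue embeds its own napi_struct, and the tx irq queues would
need the same treatment):

static void airoha_qdma_stop_napi(struct airoha_eth *eth)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(eth->q_rx); i++) {
		if (!eth->q_rx[i].ndesc)
			continue;

		/* wait for a possibly self-rescheduled NAPI to quiesce
		 * before netif_napi_del(), which does not do that for us
		 */
		napi_disable(&eth->q_rx[i].napi);
		netif_napi_del(&eth->q_rx[i].napi);
	}
}

On the probe side I will move airoha_qdma_start_napi() before the netdev
registration, so the interface cannot be opened with NAPI not running yet.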

Regards,
Lorenzo