[PATCH RFC net-next 2/3] net: stmmac: Activate Inband/PCS flag based on the selected iface

Serge Semin fancer.lancer at gmail.com
Sun Jun 2 16:25:18 PDT 2024


Hi Russell,

On Fri, May 31, 2024 at 08:30:27PM +0100, Russell King (Oracle) wrote:
> Hi Serge,
> 
> Thanks for the reply. I've attempted to deal with most of these in my
> v2 posting, but maybe not in the best way yet.

I've got your v2 series. I'll have a look at it and test it out next
week, sometime around Wednesday.

> 
> On Fri, May 31, 2024 at 08:13:49PM +0300, Serge Semin wrote:
> > > Does this
> > > mean it is true that these cores will never be used with an external
> > > PCS?
> > 
> > Sorry, I was wrong to suggest the (priv->plat.has_gmac ||
> > priv->plat.has_gmac4)-based statement. Indeed there is a case of having DW
> > QoS Eth and DW XPCS synthesized together with the SGMII/1000Base-X
> > downstream interface. Not sure why it was needed to implement that way
> > seeing DW QoS Eth IP-core supports optional SGMII PHY interface out of
> > box, but AFAICS Intel mGBE is that case. Anyway the correct way to
> > detect the internal PCS support is to check the PCSSEL flag set in the
> > HWFEATURE register (preserved in the stmmac_priv::dma_cap::pcs field).
> 
> We can only wonder why!
> 
> > > Please can you confirm that if an external PCS (e.g. xpcs, lynx PCS)
> > > is being used, the internal PCS will not have been synthesized, and
> > > thus priv->dma_cap.pcs will be false?
> > 
> > Alas I can't confirm that. priv->dma_cap.pcs only indicates the
> > internal PCS availability. External PCS is an independent entity from
> > the DW *MAC IP-core point of view. So the DW GMAC/QoS Eth/XGMAC
> > controllers aren't aware of its existence. It's the low-level platform
> > driver/code responsibility to somehow detect it being available
> > ("pcs-handle" property, plat->mdio_bus_data->has_xpcs flag, etc).
> > 
> > Regarding the internal PCS, as long as the DW GMAC or DW QoS Eth is
> > synthesized with the SGMII/TBI/RTBI PHY interface support
> > priv->dma_cap.pcs will get to be true. Note the device can be
> > synthesized with several PHY interfaces supported. As long as
> > SGMII/TBI/RTBI PHY interface is any of them, the flag will be set
> > irrespective from the PHY interface activated at runtime. 
> 
> I've been debating about this, and given your response, I'm wondering
> whether we should change stmmac_mac_select_pcs() to instead do:
> 
> static struct phylink_pcs *stmmac_mac_select_pcs(struct phylink_config *config,
> 						 phy_interface_t interface)
> {
> 	struct stmmac_priv *priv = netdev_priv(to_net_dev(config->dev));
> 	struct phylink_pcs *pcs;
> 
> 	if (priv->plat->select_pcs) {
> 		pcs = priv->plat->select_pcs(priv, interface);
> 		if (!IS_ERR(pcs))
> 			return pcs;
> 	}
> 
> 	return stmmac_mac_phylink_select_pcs(priv, interface);
> }
> 
> and push the problem of whether to provide a PCS that overrides
> the MAC internal PCS into platform code. That would mean Intel mGBE
> would be able to override with XPCS. rzn1 and socfpga can then do
> their own thing as well.

Well, AFAICS the only device that currently has the DW XPCS
connected to a non-DW-XGMAC controller is indeed the Intel mGBE with its
weird DW QoS Eth + DW XPCS setup. At the same time the Intel mGBE
controller can also support the RGMII interface. Thus there is no internal
SGMII/TBI/RTBI PCS in there.

Qualcomm QoS Eth uses the internal SGMII PCS, and by setting the
STMMAC_FLAG_HAS_INTEGRATED_PCS flag its driver almost completely
disables the STMMAC PCS functionality (except for
stmmac_pcs_ctrl_ane() being called in stmmac_hw_setup()).

So from the perspective of these two devices the PCS selection looks
quite certain: it's either the internal or the external one. There is
no device with both of them available.
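
If the select_pcs() callback you suggested above lands, then e.g. the
Intel mGBE glue could implement it along these lines (just a sketch
assuming the plat->select_pcs hook from your snippet exists; the xpcs
field access mirrors what stmmac_mac_select_pcs() does today):

static struct phylink_pcs *intel_mgbe_select_pcs(struct stmmac_priv *priv,
						 phy_interface_t interface)
{
	/* Intel mGBE routes SGMII/1000Base-X through the external DW XPCS,
	 * so hand it out whenever it has been probed over the MDIO bus.
	 */
	if (priv->hw->xpcs)
		return &priv->hw->xpcs->pcs;

	/* RGMII needs no PCS, let the core fall back to its own selection */
	return ERR_PTR(-EOPNOTSUPP);
}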

SoCFPGA... Well, it's another and more complicated story. Based on
what is said in a comment in
socfpga_gen5_set_phy_mode()/socfpga_gen10_set_phy_mode(), the only
possibility to have some internal interface converted to the external
one is when a "splitter" is available. But IMO the comment is
misleading because the only thing that is then done with the
"splitter" CSR is just the clock divider selection. What actually
happens is that if the "splitter" is available, or if the SGMII/GMII/MII
_MAC-interface_ is requested, then the internal interface is fixed to
GMII/MII. It looks weird because based on the "mac-mode" DT-property
semantics it was supposed to indicate the internal interface only. But
the EMAC is never tuned to have the SGMII interface (see the values saved
in the "*val" argument of socfpga_set_phy_mode_common()). So all of
that makes me conclude the following:

1. "mac-mode" property has never been utilized for the SoG-FPGA GMAC
platform. The plat_stmmacenet_data::mac_interface field has always
defaulted to plat_stmmacenet_data::phy_interface.

2. The SoC-FPGA GMAC IP-core itself doesn't support the native/internal
SGMII interface. It's implemented by means of the so-called
"gmii-to-sgmii" converter, which is the Lynx PCS.

Thus, unless I've missed something, the SoC-FPGA network controller
structure can be depicted as follows:


                   +---- SYSMGR:PHYSEL
      phy_intf_sel |
+------------------+                     +--------------+ 
|          RMII    |                     |              | Internal Interface
|       +----------+                     |          off +--------------------------+
|          RGMII   | Internal Interface  | SGMII        |                          | External Interface*
| EMAC  +----------+---------------------+              |          +-------+       +--------+-----------
|         GMII/MII |                     | adapter      | GMII/MII | Lynx  | SGMII |        |
|       +----------+                     |           on +----------+       +-------+        |
|                  +--+                  |              |          |  PCS  |                |
+------------------+  |                  +--------------+          +---+---+                |
                      |                                                |                    |
                      |              +------------+                    |                    |
                      +--------------+ Splitter** +--------------------+--------------------+
                                     +-----+------+
                                           |
                                     +-----+------+
                                     | Oscillator |
                                     +------------+

* No idea whether the external interface is represented as a single IO
port or as multiple interface ports handled by the same MAC.

** As I explained above, judging by the SoC-FPGA driver code the
"splitter" is just the reference clock divider responsible for the
clock rate adjustment based on the requested link speed.
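
To illustrate the point, the "splitter"-related part of the glue driver
boils down to a fix_mac_speed() handler like the one below (a simplified
sketch; the register/field macros and the callback signature are
illustrative rather than the exact dwmac-socfpga definitions):

#define SPLITTER_CTRL_REG		0x0
#define SPLITTER_CTRL_SPEED_MASK	0x3
#define SPLITTER_CTRL_SPEED_10		0x2
#define SPLITTER_CTRL_SPEED_100		0x3
#define SPLITTER_CTRL_SPEED_1000	0x0

static void socfpga_splitter_fix_mac_speed(void *bsp_priv, unsigned int speed)
{
	struct socfpga_dwmac *dwmac = bsp_priv;
	u32 val;

	if (!dwmac->splitter_base)
		return;

	/* All the "splitter" does is re-program the reference clock
	 * divider for the negotiated link speed.
	 */
	val = readl(dwmac->splitter_base + SPLITTER_CTRL_REG);
	val &= ~SPLITTER_CTRL_SPEED_MASK;

	switch (speed) {
	case SPEED_1000:
		val |= SPLITTER_CTRL_SPEED_1000;
		break;
	case SPEED_100:
		val |= SPLITTER_CTRL_SPEED_100;
		break;
	case SPEED_10:
		val |= SPLITTER_CTRL_SPEED_10;
		break;
	default:
		return;
	}

	writel(val, dwmac->splitter_base + SPLITTER_CTRL_REG);
}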

Based on the logic depicted in the diagram above, I guess that there is
no internal SGMII/TBI/RTBI PCS in the SoC-FPGA GMAC either. The SGMII
interface is implemented by means of the Lynx PCS.
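
FWIW, if the converter really is the Lynx PCS, exposing it to phylink
would then be just a matter of something like this (a sketch only; the
dedicated MDIO bus and the address 0 are my assumptions):

#include <linux/pcs-lynx.h>

static struct phylink_pcs *socfpga_create_sgmii_pcs(struct mii_bus *pcs_bus)
{
	/* The "gmii-to-sgmii" converter registers are assumed to sit
	 * behind a dedicated MDIO bus at address 0.
	 */
	return lynx_pcs_create_mdiodev(pcs_bus, 0);
}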

> 
> I'm trying hard not to go down another rabbit hole... I've just
> spotted that socfpga sets mac_interface to PHY_INTERFACE_MODE_SGMII.
> That's another reason for pushing this down into platform drivers -
> if platform drivers are doing weird stuff, then we can contain their
> weirdness in the platform drivers moving it out of the core code.

Oh, that damn "mac-mode" property... First of all, as I already
mentioned once, AFAICS it was originally introduced for the SoC-FPGA
GMAC, but the property has never been defined in any DT-node so far,
neither in the SoC-FPGA nodes nor in the rest of the DW *MAC-based nodes.
Moreover, based on my considerations above, the SoC-FPGA internal
interface is always determined based on the external one, seeing that
plat_stmmacenet_data::mac_interface defaults to
plat_stmmacenet_data::phy_interface. Secondly, I'm fairly certain that
the rest of the glue drivers utilizing the
plat_stmmacenet_data::mac_interface field should in fact be using
plat_stmmacenet_data::phy_interface instead. Based on the history of
the mac_interface-related changes, it's likely that all of them have
either been missed during the conversion to the phy_interface field or
incorrectly utilized the mac_interface field instead of phy_interface
in the first place.

So before going further it might be worth re-checking once again the
entire history of the "mac-mode" property-related changes, but as an
experimental A/B-test patch for net-next it may be a good idea to
either drop the mac_interface field completely, or convert the driver
to forgetting about the internal PCS if the external one is enabled,
or, as a less invasive option, make SoC-FPGA explicitly set the
mac_interface field to GMII/MII if it configures the internal
interface to that value (see the sketch below). Then, if these changes
don't break any platform (most importantly the SoC-FPGA GMAC case), we
can go further and carefully convert the rest of the glue drivers to
not using the mac_interface field.
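
The "less invasive option" could look roughly like this in the SoC-FPGA
glue (a sketch under my assumptions above, not a tested patch; the helper
name is made up):

static void socfpga_dwmac_fixup_mac_interface(struct plat_stmmacenet_data *plat)
{
	switch (plat->phy_interface) {
	case PHY_INTERFACE_MODE_MII:
	case PHY_INTERFACE_MODE_GMII:
	case PHY_INTERFACE_MODE_SGMII:
		/* The internal interface is fixed to GMII/MII in these
		 * cases (see socfpga_gen5/gen10_set_phy_mode()), so state
		 * that explicitly instead of inheriting phy_interface.
		 */
		plat->mac_interface = PHY_INTERFACE_MODE_GMII;
		break;
	default:
		/* RGMII/RMII: keep mac_interface == phy_interface */
		break;
	}
}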

> 
> > You can extend the priv->dma_cap.pcs flag semantics. So it could
> > be indicating three types of the PCS'es:
> > RGMII, SGMII, XPCS (or TBI/RTBI in future).
> 
> If TBI/RTBI gets supported, then this would have to be extended, but I
> get the impression that this isn't popular.

Irrespective of the TBI/RTBI interface support, using the
priv->dma_cap.pcs field for all possible PCSes would also improve the
code readability. Currently we have four versions of the PCS fields:
dma_features::pcs
mac_device_info::pcs
mac_device_info::xpcs
mac_device_info::lynx_pcs
which are being checked here and there in the driver...
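
For instance, the semantics could be extended to a single PCS-type
indication along these lines (sketch only, these enumerators don't exist
in the driver today):

enum stmmac_pcs_type {
	STMMAC_PCS_NONE = 0,
	STMMAC_PCS_RGMII,	/* internal in-band "PCS" for RGMII/SMII */
	STMMAC_PCS_SGMII,	/* internal SGMII (TBI/RTBI in future) PCS */
	STMMAC_PCS_XPCS,	/* external DW XPCS */
	STMMAC_PCS_LYNX,	/* external Lynx PCS */
};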

> 
> > I guess the DW XPCS implementation might be more preferable. From one
> > side DW XPCS SGMII can support up to 2.5Gbps speed, while the DW
> > GMAC/QoS Eth SGMII can work with up to 1Gbps speed only. On the other
> > hand the DW XPCS might be available over the MDIO-bus, which is slower
> > to access than the internal PCS CSRs available in the DW GMAC/QoS Eth
> > CSRs space. So the more performant link speed seems more useful
> > feature over the faster device setup process.
> 
> I think which should be used would depend on how the hardware is wired
> up. This brings us back to platform specifics again, which points
> towards moving the decision making into platform code as per the above.
> 
> > One thing I am not sure about is that there is a real case of having
> > the DW GMAC/QoS Eth synthesized with the native SGMII/TBI/RTBI PHY
> > interface support and being attached to the DW XPCS controller, which
> > would have the SGMII downstream PHY interface. DW XPCS has only the
> > XGMII or GMII/MII upstream interfaces over which the MAC can be
> > attached.
> 
> That gives us another possibility, but needs platforms to be doing
> the right thing. If mac_interface were set to XGMII or GMII/MII, then
> that would exclude the internal MAC PCS.
> 
> > So DW GMAC/QoS Eth and DW XPCS can be connected via the
> > GMII/MII interface only. Regarding Intel mGBE, it likely is having a
> > setup like this:
> > 
> > +------------+          +---------+
> > |            | GMII/MII |         |   SGMII
> > | DW QoS Eth +----------+ DW XPCS +------------
> > |            |          |         | 1000Base-X
> > +------------+          +---------+
> 
> 
> So as an alternative, 
> 
>      mac_interface            phy_interface
> 
>      XGMII/GMII/MII           SGMII/1000Base-X
> MAC ---------------- DW XPCS ------------------
> 
>      INTERNAL                SGMII/TBI/RTBI
> MAC ---------- Internal PCS ----------------
> 
>      INTERNAL                  RGMII
> MAC ---------- Internal "PCS" --------------

+ SoC-FPGA (presumably)

       GMII/MII                  SGMII
  MAC ---------------- Lynx PCS --------------

Please also note that, based on the DW GMAC/QoS Eth hardware manual, each
internal interface block is connected to the MAC via the GMII/MII
interface. So the internal PCS cases could more precisely be
represented as follows:

       GMII                     SGMII (AN)
  MAC ---------- Internal PCS ------------------

       GMII                     TBI/RTBI (AN)
  MAC ---------- Internal PCS ------------------
  
       GMII                      RGMII (In-band)
  MAC ---------- Internal "PCS" ----------------

       GMII                      RevMII
  MAC ----------  RevMII block  ----------------

       GMII                      GMII
  MAC ------------------------------------------

       MII                       SMII (In-band)
  MAC ---------- Internal "PCS" ----------------

       MII                       RMII
  MAC ----------   RMII block   ----------------

       MII                       MII
  MAC ------------------------------------------

There is a special input signal, phy_intf_sel[2:0], which tells the MAC
which interface to activate (grep -i the glue drivers for "intf",
"physel", etc).

> 
> One of the problems here, though, is socfpga. It uses mac_interface
> with RGMII*, MII, GMII, SGMII and RMII. I think it's confusing
> mac_interface for phy_interface, but I haven't read through enough
> of it to be certain.
> 
> So that again leads me back to my proposal above for
> stmmac_mac_select_pcs() as the least likely to break proposition -
> at least given how things are at the moment.

Please see my notes above regarding the internal interface
initialization in the SoC-FPGA glue driver. I guess we could at least
try to A/B-test the SoC-FPGA code in the next net-next by setting
mac_interface to GMII/MII when the internal interface is enabled as
GMII/MII in the glue driver, and converting the rest of the glue
driver to using phy_interface. If nothing breaks, then SoC-FPGA has
never used the "mac-mode" property, and we could mark the property as
deprecated and carefully convert the rest of the STMMAC platform
drivers to using the phy_interface field.

-Serge(y)

> 
> -- 
> RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
> FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!


