[RFC V2 0/2] arm64: imx8mm: Enable Hantro VPUs
Lucas Stach
l.stach at pengutronix.de
Wed Dec 1 11:04:54 PST 2021
On Wednesday, 2021-12-01 at 12:52 -0600, Adam Ford wrote:
> On Wed, Dec 1, 2021 at 12:37 PM Lucas Stach <l.stach at pengutronix.de> wrote:
> >
> > On Wednesday, 2021-12-01 at 10:16 -0800, Tim Harvey wrote:
> > > On Wed, Dec 1, 2021 at 9:32 AM Lucas Stach <l.stach at pengutronix.de> wrote:
> > > >
> > > > Hi Tim,
> > > >
> > > > On Wednesday, 2021-12-01 at 09:23 -0800, Tim Harvey wrote:
> > > > > On Tue, Nov 30, 2021 at 5:33 PM Adam Ford <aford173 at gmail.com> wrote:
> > > > > >
> > > > > > The i.MX8M Mini has two Hantro video decoders, called G1 and G2, which
> > > > > > appear to be related to the video decoders used on the i.MX8MQ. However,
> > > > > > because of how the Mini handles the power domains, the VPU driver does
> > > > > > not need to handle all the functions, nor does it support the
> > > > > > post-processor, so a new compatible string is required.
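(For reference, this is roughly the shape such a change takes in the hantro
driver's OF match table; the compatible strings and variant names below are
illustrative placeholders, not taken from the actual patch.)

/*
 * Sketch only: hantro_drv.c maps the DT compatible to a per-SoC
 * hantro_variant; the i.MX8M Mini entries and names here are made up for
 * illustration, the real definitions would live in imx8m_vpu_hw.c.
 */
static const struct of_device_id of_hantro_match[] = {
	{ .compatible = "nxp,imx8mq-vpu", .data = &imx8mq_vpu_variant },
	/* hypothetical new per-core entries for the i.MX8M Mini */
	{ .compatible = "nxp,imx8mm-vpu-g1", .data = &imx8mm_vpu_g1_variant },
	{ .compatible = "nxp,imx8mm-vpu-g2", .data = &imx8mm_vpu_g2_variant },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, of_hantro_match);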
> > > > > >
> > > > > > With the suggestion from Hans Verkuil, I was able to get the G2 splat to go
> > > > > > away with changes to FORCE_MAX_ZONEORDER, but I found I could also set
> > > > > > cma=512M instead; however, it's unclear to me whether that's an acceptable
> > > > > > alternative.
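(For reference, the two workarounds would look roughly like this; only
cma=512M is stated in the thread, the FORCE_MAX_ZONEORDER value is purely
illustrative.)

    # on the kernel command line
    cma=512M

    # or in the kernel config (arm64 defaults to 11 with 4K pages)
    CONFIG_FORCE_MAX_ZONEORDER=12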
> > > > > >
> > > > > > At the suggestion of Ezequiel Garcia and Nicolas Dufresne, I have some
> > > > > > results from Fluster. However, the G2 VPU appears to fail all of the VP9 tests.
> > > > > >
> > > > > > ./fluster.py run -d GStreamer-H.264-V4L2SL-Gst1.0
> > > > > > Ran 90/135 tests successfully in 76.431 secs
> > > > > >
> > > > > > ./fluster.py run -d GStreamer-VP8-V4L2SL-Gst1.0
> > > > > > Ran 55/61 tests successfully in 21.454 secs
> > > > > >
> > > > > > ./fluster.py run -d GStreamer-VP9-V4L2SL-Gst1.0
> > > > > > Ran 0/303 tests successfully in 20.016 secs
> > > > > >
> > > > > > New G2 submissions seem to show up every day, and GStreamer still seems to
> > > > > > be working on the VP9 support, so I am not sure whether I should drop G2 as well.
> > > > > >
> > > > > >
> > > > > > Adam Ford (2):
> > > > > > media: hantro: Add support for i.MX8M Mini
> > > > > > arm64: dts: imx8mm: Enable VPU-G1 and VPU-G2
> > > > > >
> > > > > > arch/arm64/boot/dts/freescale/imx8mm.dtsi | 41 +++++++++++++++
> > > > > > drivers/staging/media/hantro/hantro_drv.c | 2 +
> > > > > > drivers/staging/media/hantro/hantro_hw.h | 2 +
> > > > > > drivers/staging/media/hantro/imx8m_vpu_hw.c | 57 +++++++++++++++++++++
> > > > > > 4 files changed, 102 insertions(+)
> > > > > >
> > > > >
> > > > > Adam,
> > > > >
> > > > > Thanks for the patches!
> > > > >
> > > > > I tested just this series on top of v5.16-rc3 on an
> > > > > imx8mm-venice-gw73xx-0x and found that if I loop fluster, I can end up
> > > > > getting a hang within 10 to 15 minutes or so when imx8m_blk_ctrl_power_on
> > > > > is called for the VPUMIX power domain:
> > > > > while [ 1 ]; do uptime; ./fluster.py run -d GStreamer-VP8-V4L2SL-Gst1.0; done
> > > > > ...
> > > > > [ 618.838436] imx-pgc imx-pgc-domain.6: failed to command PGC
> > > > > [ 618.844407] imx8m-blk-ctrl 38330000.blk-ctrl: failed to power up bus domain
> > > > >
> > > > > I added prints in imx_pgc_power_{up,down} and
> > > > > imx8m_blk_ctrl_power_{on,off} to get some more context
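(A minimal sketch of the kind of trace described here, matching the
"imx_pgc_power_up vpumix" style lines in the log below; the exact placement
of the print is a guess, and the same one-liner would go into each of the
four handlers.)

static int imx_pgc_power_up(struct generic_pm_domain *genpd)
{
	/* debug only: log which power domain each handler is acting on */
	pr_info("%s %s\n", __func__, genpd->name);

	/* ... existing power-up sequence unchanged ... */
}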
> > > > > ...
> > > > > Ran 55/61 tests successfully in 8.685 secs
> > > > > 17:16:34 up 17 min, 0 users, load average: 3.97, 2.11, 0.93
> > > > > ****************************************************************************************************
> > > > > Running test suite VP8-TEST-VECTORS with decoder GStreamer-VP8-V4L2SL-Gst1.0
> > > > > Using 4 parallel job(s)
> > > > > ****************************************************************************************************
> > > > >
> > > > > [TEST SUITE ] (DECODER ) TEST VECTOR ... RESULT
> > > > > ----------------------------------------------------------------------
> > > > > [ 1023.114806] imx8m_blk_ctrl_power_on vpublk-g1
> > > > > [ 1023.119669] imx_pgc_power_up vpumix
> > > > > [ 1023.124307] imx-pgc imx-pgc-domain.6: failed to command PGC
> > > > > [ 1023.130006] imx8m-blk-ctrl 38330000.blk-ctrl: failed to power up bus domain
> > > > >
> > > > > While this wouldn't be an issue caused by this series, it does indicate we
> > > > > still have something racy in blk-ctrl. Can you reproduce this (and if
> > > > > not, what kernel are you based on)? Perhaps you or Lucas have some
> > > > > ideas?
> > > > >
> > > > Did you have "[PATCH] soc: imx: gpcv2: Synchronously suspend MIX
> > > > domains" applied when running those tests? It has only recently been
> > > > picked up by Shawn and may have an influence on the bus domain
> > > > behavior.
> > > >
> > >
> > > Lucas,
> > >
> > > Good point. I did have that originally before I started pruning down
> > > to the bare minimum to reproduce the issue.
> > >
> > > I added it back and now I have the following:
> > > arm64: dts: imx8mm: Enable VPU-G1 and VPU-G2
> > > media: hantro: Add support for i.MX8M Mini
> > > soc: imx: gpcv2: keep i.MX8MM VPU-H1 bus clock active
> > > soc: imx: gpcv2: Synchronously suspend MIX domains
> > > Linux 5.16-rc3
> > >
> > > Here's the latest with that patch:
> > > ...
> > > [VP8-TEST-VECTORS] (GStreamer-VP8-V4L2SL-Gst1.0)
> > > vp80-00-comprehensive-007 ... Success
> > > [ 316.632373] imx8m_blk_ctrl_power_off vpublk-g1
> > > [ 316.636908] imx_pgc_power_down vpu-g1
> > > [ 316.640983] imx_pgc_power_down vpumix
> > > [ 316.756869] imx8m_blk_ctrl_power_on vpublk-g1
> > > [ 316.761360] imx_pgc_power_up vpumix
> > > [ 316.765985] imx-pgc imx-pgc-domain.6: failed to command PGC
> > > [ 316.772743] imx8m-blk-ctrl 38330000.blk-ctrl: failed to power up bus domain
> > > ^^^ hang
> >
> > Hm, I wonder if there's some broken error handling here somewhere, as a
> > failure to power up a domain shouldn't lead to a hang.
> >
> > However, that doesn't explain why the PGC isn't completing the request.
> > Can you try to extend the timeout some more, even though I think that
> > 1 msec should already be generous? Can you dump the contents of the
> > GPC_PU_PGC_SW_PUP_REQ and GPC_A53_PU_PGC_PUP_STATUSn (all 3 of them)
> > registers when the failure condition is hit?
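(For reference, the "failed to command PGC" message comes from the poll on
GPC_PU_PGC_SW_PUP_REQ in imx_pgc_power_up(); a debug dump along the following
lines could capture the request register at the moment of failure. This is a
sketch, not code from the thread, and the PUP_STATUS register offsets would
still have to be taken from the i.MX8MM reference manual, since gpcv2.c does
not define them.)

	ret = regmap_read_poll_timeout(domain->regmap, GPC_PU_PGC_SW_PUP_REQ,
				       reg_val, !(reg_val & domain->bits.pxx),
				       0, USEC_PER_MSEC);
	if (ret) {
		u32 pup_req;

		/* debug sketch: show which request bits are still pending */
		regmap_read(domain->regmap, GPC_PU_PGC_SW_PUP_REQ, &pup_req);
		dev_err(domain->dev,
			"failed to command PGC, PUP_REQ=0x%08x pxx=0x%08x\n",
			pup_req, domain->bits.pxx);
		/* GPC_A53_PU_PGC_PUP_STATUSn could be dumped the same way
		 * once their offsets are added */
	}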
>
> I submitted a patch [1] to enable the commented-out if-statement that
> waits for the handshake when the GPC domain is invoked by the blk-ctrl
> or when we know the bus clock is operational.
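(For context, the commented-out wait being referred to sits in
imx_pgc_power_up(); enabled, it would look roughly like this, paraphrased
from gpcv2.c rather than quoted from the patch.)

	/* request the ADB400 to power up */
	if (domain->bits.hskreq) {
		regmap_update_bits(domain->regmap, GPC_PU_PWRHSK,
				   domain->bits.hskreq, domain->bits.hskreq);

		/* previously commented out: wait for the handshake ack */
		ret = regmap_read_poll_timeout(domain->regmap, GPC_PU_PWRHSK,
					       reg_val,
					       (reg_val & domain->bits.hskack),
					       0, USEC_PER_MSEC);
		if (ret)
			dev_err(domain->dev, "failed to power up ADB400\n");
	}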
>
> I am not 100% certain it can work as-is with the vpumix, but based on
> what I've seen from my testing, it's not hanging or causing errors.
>
> [1] - https://lore.kernel.org/linux-arm-kernel/20211120194900.1309914-1-aford173@gmail.com/T/
>
> I didn't have it applied to my latest RFC for the G1 and G2 because I
> had not noticed a change in behavior one way or the other with that
> patch.
That's not going to work with all the MIX domains. The handshake
requires some clocks to be enabled in the blk-ctrl (the secondary clock
gates in the blk-ctrl) to work properly. This is only done by the
blk-ctrl driver _after_ the GPC bus domain is powered up, so you cannot
wait for the handshake to complete inside the GPC power-up routine.
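(To illustrate the ordering described above, the blk-ctrl power-on path goes
roughly like this; a simplified paraphrase of
drivers/soc/imx/imx8m-blk-ctrl.c with error handling stripped, not the
literal code.)

static int imx8m_blk_ctrl_power_on(struct generic_pm_domain *genpd)
{
	struct imx8m_blk_ctrl_domain *domain = to_imx8m_blk_ctrl_domain(genpd);
	const struct imx8m_blk_ctrl_domain_data *data = domain->data;
	struct imx8m_blk_ctrl *bc = domain->bc;

	/* 1) wake the GPC bus (MIX) domain - this is the step that fails
	 *    with "failed to power up bus domain" in the logs above */
	pm_runtime_get_sync(bc->bus_power_dev);

	/* 2) only now are the blk-ctrl clock gates opened */
	clk_bulk_prepare_enable(data->num_clks, domain->clks);

	/* 3) power up the peripheral GPC domain (e.g. vpu-g1); the ADB400
	 *    handshake can only complete after step 2, which is why the
	 *    GPC power-up routine itself must not wait for it */
	pm_runtime_get_sync(domain->power_dev);

	/* 4) release resets and enable the per-domain clocks in blk-ctrl */
	regmap_set_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask);
	regmap_set_bits(bc->regmap, BLK_CLK_EN, data->clk_mask);

	return 0;
}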
Regards,
Lucas