[PATCH v1 1/2] dt-bindings: firmware: thead,th1520: Add clocks and resets
Ulf Hansson
ulf.hansson at linaro.org
Thu Apr 10 05:34:14 PDT 2025
On Thu, 10 Apr 2025 at 12:42, Michal Wilczynski
<m.wilczynski at samsung.com> wrote:
>
>
>
> On 4/9/25 12:41, Ulf Hansson wrote:
> > On Wed, 9 Apr 2025 at 11:30, Michal Wilczynski <m.wilczynski at samsung.com> wrote:
> >>
> >> Prepare for handling GPU clock and reset sequencing through a generic
> >> power domain by adding clock and reset properties to the TH1520 AON
> >> firmware bindings.
> >>
> >> The T-HEAD TH1520 GPU requires coordinated management of two clocks
> >> (core and sys) and two resets (GPU and GPU CLKGEN). Due to SoC-specific
> >> requirements, the CLKGEN reset must be carefully managed alongside clock
> >> enables to ensure proper GPU operation, as discussed on the mailing list
> >> [1].
> >>
> >> Since the coordination is now handled through a power domain, only the
> >> programmable clocks (core and sys) are exposed. The GPU MEM clock is
> >> ignored, as it is not controllable on the TH1520 SoC.
> >>
> >> This approach follows upstream maintainers' recommendations [1] to
> >> avoid SoC-specific details leaking into the GPU driver or clock/reset
> >> frameworks directly.
> >>
> >> [1] - https://lore.kernel.org/all/38d9650fc11a674c8b689d6bab937acf@kernel.org/
> >>
> >> Signed-off-by: Michal Wilczynski <m.wilczynski at samsung.com>
> >> ---
> >> .../bindings/firmware/thead,th1520-aon.yaml | 28 +++++++++++++++++++
> >> 1 file changed, 28 insertions(+)
> >>
> >> diff --git a/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml b/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml
> >> index bbc183200400..8075874bcd6b 100644
> >> --- a/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml
> >> +++ b/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml
> >> @@ -25,6 +25,16 @@ properties:
> >> compatible:
> >> const: thead,th1520-aon
> >>
> >> + clocks:
> >> + items:
> >> + - description: GPU core clock
> >> + - description: GPU sys clock
> >> +
> >> + clock-names:
> >> + items:
> >> + - const: gpu-core
> >> + - const: gpu-sys
> >
> > These clocks don't look like they belong to the power-domain node, but
> > rather the GPU's node.
> >
> > Or is this in fact the correct description of the HW?
>
> Hi,
> Thank you for your input. Based on my understanding of Stephen
> presentation the power-domain layer could act as a middleware layer
> (like ACPI) that could own resources. That being said it was also stated
> that the proposed approach should work with already existing device
> trees, which implies that the DT should remain as is.
>
> So I could get the resources using attach_dev and detach_dev, but there
> are two problems with that:
>
> 1) The GPU driver will try to manage clocks/resets on its own using these
> callbacks if I provide working, non-stub clocks and resets:
> static const struct dev_pm_ops pvr_pm_ops = {
> RUNTIME_PM_OPS(pvr_power_device_suspend, pvr_power_device_resume,
> pvr_power_device_idle)
> };
>
> So obviously I need a way to tell the drm/imagination driver NOT to
> manage them. One obvious way to do this is to introduce a new flag in
> genpd.flags, called let's say GENPD_FLAG_EXCLUSIVE_CONTROL, which would
> tell the consumer driver that power management is done only from the PM
> middleware driver.
Something along those lines. Although, I think the following twist on the
approach would be better.
Some flag (maybe just a bool) should be set dynamically when the
->attach_dev() callback is invoked and it should be a per device flag,
not a per genpd flag. In this way, the genpd provider driver can make
runtime decisions, perhaps even based on some DT compatible string for
the device being attached to it, whether it should manage PM resources
or not.
Additionally, we need a new genpd helper function that allows the
consumer driver to check if the PM resources are managed from the PM
domain level (genpd) or not.
If it sounds complicated, just let me know and I can try to help put the
pieces together.
>
> 2) The GPU node doesn't want to own the gpu-clkgen reset. In fact nobody
> seems to want to own it, even though theoretically it should be owned by
> the clk_vo as this would describe the hardware best (it's resetting the
> GPU clocks). But then it would be trickier to get it from the PM driver,
> making the code more complex and harder to understand. Nonetheless I
> think it would work.
I guess it doesn't really matter to me. Perhaps model it as a reset
and make the GPU be the consumer of it?
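If the clkgen reset were modeled as a plain reset with the GPU as its consumer, the GPU node might gain something along these lines. This is only a sketch: the provider phandle, reset identifier, and reset-names value are assumptions, not part of the posted patch.

```dts
/* Hypothetical fragment; &rst and the ID macro are placeholders. */
gpu {
	resets = <&rst TH1520_RESET_GPU_CLKGEN>;
	reset-names = "clkgen";
};
```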
>
> If this sounds good to you I will work on the code.
Sure, let's give this a try - I am here to help review and guide the best I can.
[...]
Kind regards
Uffe