[PATCH v6 3/4] drivers/perf: add DesignWare PCIe PMU driver
Jonathan Cameron
Jonathan.Cameron at Huawei.com
Fri Jul 28 08:20:30 PDT 2023
...
> >>> +
> >>> +static int dwc_pcie_pmu_online_cpu(unsigned int cpu, struct hlist_node *cpuhp_node)
> >>> +{
> >>> +        struct dwc_pcie_pmu *pcie_pmu;
> >>> +        struct pci_dev *pdev;
> >>> +        int node;
> >>> +
> >>> +        pcie_pmu = hlist_entry_safe(cpuhp_node, struct dwc_pcie_pmu, cpuhp_node);
> >>> +        pdev = pcie_pmu->pdev;
> >>> +        node = dev_to_node(&pdev->dev);
> >>> +
> >>> +        if (node != NUMA_NO_NODE && cpu_to_node(pcie_pmu->on_cpu) != node &&
> >
> > Perhaps worth a comment on when you might see node == NUMA_NO_NODE?
> > Beyond NUMA being entirely disabled, I'd hope that never happens, and for that
> > case you might be able to use a compile-time check.
> >
> > I wonder if this can be simplified by a flag that says whether we are already in the
> > right node? Might be easier to follow than having a similar dance in both online and
> > offline to figure that out.
>
> Ok, I will add a comment for NUMA_NO_NODE. If there is no NUMA support, I think
> any CPU is fine to bind to.
Agreed. I would add a comment on that being the intent.
>
> pcie_pmu->on_cpu may be a good choice to use as a flag, right? pcie_pmu->on_cpu
> will be set to -1 when pcie_pmu is allocated and then checked first in
> dwc_pcie_pmu_online_cpu().
I think you still want to know whether it's in the right node - as maybe
there are no local CPUs available at startup.
>
> Then, the code will be:
>
> static int dwc_pcie_pmu_online_cpu(unsigned int cpu, struct hlist_node *cpuhp_node)
> {
>         struct dwc_pcie_pmu *pcie_pmu;
>         struct pci_dev *pdev;
>         int node;
>
>         pcie_pmu = hlist_entry_safe(cpuhp_node, struct dwc_pcie_pmu, cpuhp_node);
>         /* If another CPU is already managing this PMU, simply return. */
>         if (pcie_pmu->on_cpu != -1)
>                 return 0;
>
>         pdev = pcie_pmu->pdev;
>         node = dev_to_node(&pdev->dev);
>
>         /* Select the first CPU to come up if there is no NUMA support. */
>         if (node == NUMA_NO_NODE)
>                 pcie_pmu->on_cpu = cpu;
>         else if (cpu_to_node(cpu) == node)
>                 /* on_cpu is still -1 here, so bind directly; nothing to migrate from. */
>                 pcie_pmu->on_cpu = cpu;
>
>         return 0;
> }
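For the offline path you then need the other half of the dance: if the CPU going
away is the one managing the PMU, hand it to another online CPU, preferring one in
the device's node. A rough, untested sketch of what I have in mind (assuming the
pmu/on_cpu/pdev members used elsewhere in the patch):

static int dwc_pcie_pmu_offline_cpu(unsigned int cpu, struct hlist_node *cpuhp_node)
{
        struct dwc_pcie_pmu *pcie_pmu;
        unsigned int target;
        cpumask_t mask;
        int node;

        pcie_pmu = hlist_entry_safe(cpuhp_node, struct dwc_pcie_pmu, cpuhp_node);
        /* Nothing to do unless the outgoing CPU is the one managing this PMU. */
        if (cpu != pcie_pmu->on_cpu)
                return 0;

        pcie_pmu->on_cpu = -1;

        /* Prefer an online CPU in the device's node, excluding the dying CPU. */
        node = dev_to_node(&pcie_pmu->pdev->dev);
        if (cpumask_and(&mask, cpumask_of_node(node), cpu_online_mask) &&
            cpumask_andnot(&mask, &mask, cpumask_of(cpu)))
                target = cpumask_any(&mask);
        else
                target = cpumask_any_but(cpu_online_mask, cpu);

        /* Last online CPU going down: leave the PMU unmanaged. */
        if (target >= nr_cpu_ids)
                return 0;

        perf_pmu_migrate_context(&pcie_pmu->pmu, cpu, target);
        pcie_pmu->on_cpu = target;

        return 0;
}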
> >
> >>> +static int __init dwc_pcie_pmu_init(void)
> >>> +{
> >>> +        int ret;
> >>> +
> >>> +        ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
> >>> +                                      "perf/dwc_pcie_pmu:online",
> >>> +                                      dwc_pcie_pmu_online_cpu,
> >>> +                                      dwc_pcie_pmu_offline_cpu);
> >>> +        if (ret < 0)
> >>> +                return ret;
> >>> +
> >>> +        dwc_pcie_pmu_hp_state = ret;
> >>> +
> >>> +        ret = platform_driver_register(&dwc_pcie_pmu_driver);
> >>> +        if (ret) {
> >>> +                cpuhp_remove_multi_state(dwc_pcie_pmu_hp_state);
> >>> +                return ret;
> >>> +        }
> >>> +
> >>> +        dwc_pcie_pmu_dev = platform_device_register_simple(
> >>> +                        "dwc_pcie_pmu", PLATFORM_DEVID_NONE, NULL, 0);
> >>> +        if (IS_ERR(dwc_pcie_pmu_dev)) {
> >>> +                platform_driver_unregister(&dwc_pcie_pmu_driver);
> >>
> >> On failure we need to remove the cpuhp state as well.
> >
> > I'd suggest using gotos and a single error handling block. That
> > makes it both harder to forget things like this and easier to
> > compare that block with what happens in exit() - so slightly
> > easier to review!
>
> Given that we have an appropriate way to tear down the PMUs via devm_add_action_or_reset(),
> I am going to remove the redundant probe/remove framework via platform_driver_{un}register().
> The for_each probe loop in __dwc_pcie_pmu_probe() will be moved into dwc_pcie_pmu_init().
> Is that a better way?
I think I'd prefer to see a standard driver creation / probe flow even if you could in theory
avoid it.
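E.g. keeping the registration order above, but unwinding through a single error
block (untested sketch; a dwc_pcie_pmu_exit() would then mirror the unwind order):

static int __init dwc_pcie_pmu_init(void)
{
        int ret;

        ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
                                      "perf/dwc_pcie_pmu:online",
                                      dwc_pcie_pmu_online_cpu,
                                      dwc_pcie_pmu_offline_cpu);
        if (ret < 0)
                return ret;

        dwc_pcie_pmu_hp_state = ret;

        ret = platform_driver_register(&dwc_pcie_pmu_driver);
        if (ret)
                goto err_remove_cpuhp;

        dwc_pcie_pmu_dev = platform_device_register_simple(
                        "dwc_pcie_pmu", PLATFORM_DEVID_NONE, NULL, 0);
        if (IS_ERR(dwc_pcie_pmu_dev)) {
                ret = PTR_ERR(dwc_pcie_pmu_dev);
                goto err_unregister_driver;
        }

        return 0;

err_unregister_driver:
        platform_driver_unregister(&dwc_pcie_pmu_driver);
err_remove_cpuhp:
        cpuhp_remove_multi_state(dwc_pcie_pmu_hp_state);
        return ret;
}

static void __exit dwc_pcie_pmu_exit(void)
{
        /* Mirror image of the error path in init(). */
        platform_device_unregister(dwc_pcie_pmu_dev);
        platform_driver_unregister(&dwc_pcie_pmu_driver);
        cpuhp_remove_multi_state(dwc_pcie_pmu_hp_state);
}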
Jonathan
>
> Thank you very much for your valuable comments.
>
> Best Regards,
> Shuai
>
>