[RFC PATCH 00/10] Add Fujitsu A64FX soc entry/hardware barrier driver

Arnd Bergmann arnd at kernel.org
Fri Jan 15 07:24:43 EST 2021


On Fri, Jan 15, 2021 at 12:10 PM misono.tomohiro at fujitsu.com
<misono.tomohiro at fujitsu.com> wrote:
> > On Tue, Jan 12, 2021 at 11:24 AM misono.tomohiro at fujitsu.com <misono.tomohiro at fujitsu.com> wrote:

> > > Also, it is common practice in multi-threaded HPC applications to bind
> > > each running thread to one PE.
> >
> > I think the expectation that all threads are bound to a physical CPU
> > makes sense for using this feature, but it would be necessary
> > to enforce that, e.g. by allowing a thread to enable it only after it
> > has been isolated to a non-shared CPU, and by automatically disabling
> > it if the CPU isolation changes.
> >
> > For the user space interface, something based on process IDs
> > seems to make more sense to me than something based on CPU
> > numbers. All of the above does require some level of integration
> > with the core kernel of course.
> >
> > I think the next step would be to try to come up with a high-level
> > user interface design that has a chance to get merged, rather than
> > addressing the review comments for the current implementation.
>
> Understood. One question: while a high-level interface such as
> process-based control could solve several problems (e.g. access control
> and forced binding), I cannot eliminate access to the IMP-DEF registers
> from EL0, as I explained above. Is that acceptable in your view?

I think you will get different answers for that depending on who you ask ;-)

I'm generally ok with it, given that it will only affect a very small
number of specialized applications that are already built for
a specific microarchitecture for performance reasons. E.g. an arm64
BLAS library would ship different versions of the same functions
depending on CPU support for NEON, SVE, SVE2, Apple AMX (which
also uses imp-def instructions) or the ARMv8.6 GEMM extensions,
and likely a hand-optimized version for the A64FX pipeline as well.
Having a version for A64FX with hardware barriers adds (at most)
one more code path but hopefully does not add complexity to the
common code.
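
To illustrate that dispatch pattern, a minimal sketch of runtime
selection on arm64 Linux could look like the following. The gemm_*
names are hypothetical, not from any real BLAS library; getauxval()
with AT_HWCAP and the midr_el1 sysfs file are existing Linux
interfaces, and the implementer/part IDs below are the ones the
kernel's cputype.h uses for the A64FX:

/* Sketch only: pick a GEMM code path at runtime on arm64 Linux.
 * The kernel names are hypothetical stand-ins for library variants. */
#include <stdio.h>
#include <sys/auxv.h>           /* getauxval(), AT_HWCAP */

#ifndef HWCAP_SVE
#define HWCAP_SVE (1UL << 22)   /* arm64 HWCAP bit for SVE */
#endif

/* Nonzero if cpu0 reports Fujitsu (implementer 0x46) A64FX (part
 * 0x001) in the MIDR_EL1 value the arm64 kernel exposes via sysfs. */
static int is_a64fx(void)
{
	unsigned long midr = 0;
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/regs/identification/midr_el1", "r");

	if (!f)
		return 0;
	if (fscanf(f, "%lx", &midr) != 1)
		midr = 0;
	fclose(f);
	return ((midr >> 24) & 0xff) == 0x46 &&
	       ((midr >> 4) & 0xfff) == 0x001;
}

int main(void)
{
	unsigned long hwcap = getauxval(AT_HWCAP);
	const char *variant = "gemm_neon";  /* NEON is mandatory on arm64 */

	if (hwcap & HWCAP_SVE)
		variant = "gemm_sve";
	if ((hwcap & HWCAP_SVE) && is_a64fx())
		variant = "gemm_a64fx";     /* pipeline-tuned, hw barrier */

	printf("selected kernel: %s\n", variant);
	return 0;
}

The selection logic is the only shared code that grows; each new
variant is otherwise self-contained, which is what keeps the extra
A64FX path out of the common code.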

> > Aside from the user interface question, it would be good to
> > understand the performance impact of the feature.
> > As I understand it, the entire purpose is to make things faster, so
> > to put it in perspective compared to the burden of adding an
> > interface, there should be some numbers: What are the kinds of
> > applications that would use it in practice, and how much faster are
> > they compared to not having it?
>
> A microbenchmark shows that one synchronization of 12 PEs takes around
> 250ns with the hardware barrier, which is several times faster than a
> software barrier (this measures only the core synchronization logic,
> excluding setup time). I don't have application results at this point
> and will share them once I have some.

Thanks, that will indeed be helpful. Please also include information
about what you are comparing against for the software barrier, e.g.
is it based on a futex() system call, or implemented completely in
user space?
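
For comparison, the "completely in user space" case I have in mind is
something like the following sketch: a sense-reversing spin barrier
built on C11 atomics, with every thread pinned to its own CPU to match
the expected usage model. The thread and iteration counts are made up
for illustration, not taken from your benchmark:

/* Sketch of a pure user-space spin barrier (no syscall on the
 * synchronization path).  Build with: cc -O2 -pthread barrier.c
 * Assumes the machine has at least NTHREADS CPUs. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 12
#define NITER    100000

static atomic_int count = NTHREADS;  /* threads still to arrive */
static atomic_int sense;             /* flips each barrier episode */

static void barrier_wait(int *local_sense)
{
	*local_sense = !*local_sense;    /* sense for this episode */

	if (atomic_fetch_sub(&count, 1) == 1) {
		/* Last arrival: reset the counter, release the others. */
		atomic_store(&count, NTHREADS);
		atomic_store(&sense, *local_sense);
	} else {
		while (atomic_load(&sense) != *local_sense)
			;                /* spin entirely in user space */
	}
}

static void *worker(void *arg)
{
	long cpu = (long)arg;
	cpu_set_t set;
	int local_sense = 0;

	/* Pin this thread to one CPU; failure just means no pinning. */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	for (int i = 0; i < NITER; i++)
		barrier_wait(&local_sense);
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	puts("done");
	return 0;
}

A futex()-based barrier such as glibc's pthread_barrier_t sleeps in
the kernel while waiting instead of spinning, so the two software
baselines can behave quite differently at this scale; that is why the
distinction matters for the comparison.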

      Arnd


