[RFC PATCH 00/10] Add Fujitsu A64FX soc entry/hardware barrier driver
misono.tomohiro at fujitsu.com
Fri Jan 15 06:10:13 EST 2021
> On Tue, Jan 12, 2021 at 11:24 AM misono.tomohiro at fujitsu.com
> <misono.tomohiro at fujitsu.com> wrote:
> > > On Fri, Jan 8, 2021 at 1:54 PM Mark Rutland <mark.rutland at arm.com> wrote:
> > However, I don't know of any other processors with similar features
> > at this point, and it is hard to provide a common abstraction interface.
> > I would appreciate it if anyone has any information.
>
> The specification you pointed to mentions the SPARC64 XIfx, so
> at a minimum, a user interface should be designed to also work on
> whatever register-level interface that provides.
Our previous SPARC64 CPUs also have a hardware barrier function, but it is
not currently used (I believe the hardware designs share a common idea, and
this driver's logic/ioctl interface could be applicable to both).
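For illustration only, the kind of per-device ioctl interface I have in mind
looks roughly like the sketch below; the names, ioctl numbers and struct
layout are placeholders made up for this mail, not the actual uapi of this
series:

    /* Hypothetical sketch -- all names and ioctl numbers are placeholders. */
    #include <linux/ioctl.h>
    #include <linux/types.h>

    struct hwb_assign {
            __u64 pe_mask;   /* in:  PEs that form one barrier group       */
            __u32 window;    /* out: per-PE barrier window assigned to it  */
            __u32 blade;     /* out: barrier blade shared by the group     */
    };

    #define HWB_IOC_MAGIC    'F'
    /* Reserve a blade/window set for the PEs in pe_mask. */
    #define HWB_IOC_ASSIGN   _IOWR(HWB_IOC_MAGIC, 0, struct hwb_assign)
    /* Release a previously assigned window. */
    #define HWB_IOC_UNASSIGN _IOW(HWB_IOC_MAGIC, 1, __u32)

The same shape should be able to cover the SPARC64 side as well, as long as
the kernel keeps doing the resource assignment and only the fast path differs.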
> > > > Secondly, the intended usage model appears to expose this to EL0 for
> > > > direct access, and the code seems to depend on threads being pinned, but
> > > > AFAICT this is not enforced and there is no provision for
> > > > context-switch, thread migration, or interaction with ptrace. I fear
> > > > this is going to be very fragile in practice, and that extending that
> > > > support in future will require much more complexity than is currently
> > > > apparent, with potentially invasive changes to arch code.
> > >
> > > Right, this is the main problem I see, too. I had not even realized
> > > that this will have to tie in with user space threads in some form, but
> > > you are right that once this has to interact with the CPU scheduler,
> > > it all breaks down.
> >
> > This observation is right. I thought that adding context-switch etc. support
> > for implementation-defined registers would require core arch code changes,
> > which would be far less acceptable. So I tried to confine the code changes
> > to a module with these restrictions.
>
> My feeling is that having the code separate from where it would belong
> in an operating system that was designed specifically for this feature
> ends up being no better than rewriting the core scheduling code.
>
> As Mark said, it may well be that neither approach would be sufficient
> for an upstream merge. On the other hand, keeping the code in a
> separate loadable module does make the most sense if we end up
> not merging it at all, in which case this is the easiest to port
> between kernel versions.
>
> > Regarding direct access from EL0, it is necessary for realizing fast synchronization,
> > as it lets the synchronization logic in the user application check whether all threads
> > have reached the synchronization point without switching to the kernel.
>
> Ok, I see.
>
> > Also, it is common in multi-threaded HPC applications for each running
> > thread to be bound to one PE.
>
> I think the expectation that all threads are bound to a physical CPU
> makes sense for using this feature, but I think it would be necessary
> to enforce that, e.g. by allowing threads to enable it only after they
> are isolated to a non-shared CPU, and automatically disabling it
> if the CPU isolation is changed.
>
> For the user space interface, something based on process IDs
> seems to make more sense to me than something based on CPU
> numbers. All of the above does require some level of integration
> with the core kernel of course.
>
> I think the next step would be to try to come up with a high-level
> user interface design that has a chance to get merged, rather than
> addressing the review comments for the current implementation.
Understood. One question: while a high-level interface such as process-based
control could solve several problems (e.g. access control and forced binding),
I cannot eliminate access to the IMP-DEF registers from EL0, as I explained
above. Is that acceptable in your view?
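To make the EL0 requirement concrete: once the kernel has assigned a window,
the per-thread fast path is just a register write followed by a spin on an
IMP-DEF status register, with no syscall in the loop. A minimal sketch, using
placeholder register encodings rather than the real A64FX ones:

    /* Illustrative sketch only: S3_3_C15_C15_0/1 are placeholder encodings
     * standing in for the IMP-DEF barrier set/status registers. */
    static inline void hwb_sync(void)
    {
            static __thread unsigned long sense;
            unsigned long st;

            sense ^= 1;
            /* Tell the hardware that this PE has reached the barrier. */
            asm volatile("msr S3_3_C15_C15_0, %0" : : "r" (sense) : "memory");

            /* Wait until the status bit flips, i.e. all PEs in the
             * group have arrived; no kernel entry on this path. */
            do {
                    asm volatile("mrs %0, S3_3_C15_C15_1"
                                 : "=r" (st) : : "memory");
            } while ((st & 1) != sense);
    }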
> Aside from the user interface question, it would be good to
> understand the performance impact of the feature.
> As I understand it, the entire purpose is to make things faster, so
> to put it in perspective compared to the burden of adding an
> interface, there should be some numbers: What are the kinds of
> applications that would use it in practice, and how much faster are
> they compared to not having it?
A microbenchmark shows that one synchronization across 12 PEs takes around
250ns with the hardware barrier, which is several times faster than a software
barrier (measuring only the core synchronization logic, excluding setup time).
I don't have application results at this point and will share them when I can
get some.
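For context, a measurement of this kind typically times a long run of
back-to-back barrier operations and divides by the iteration count, roughly
like the following (hwb_sync() stands for the EL0 fast path sketched above,
so this is shape only, not the actual benchmark code):

    #include <time.h>

    #define ITERS 1000000UL

    /* Average per-synchronization latency in nanoseconds (illustrative). */
    static double avg_sync_ns(void)
    {
            struct timespec t0, t1;
            unsigned long i;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (i = 0; i < ITERS; i++)
                    hwb_sync();
            clock_gettime(CLOCK_MONOTONIC, &t1);

            return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                    (t1.tv_nsec - t0.tv_nsec)) / ITERS;
    }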
Regards,
Tomohiro