[RFC please help] membarrier: Rewrite sync_core_before_usermode()
Nicholas Piggin
npiggin at gmail.com
Mon Dec 28 19:36:16 EST 2020
Excerpts from Andy Lutomirski's message of December 29, 2020 7:06 am:
> On Mon, Dec 28, 2020 at 12:32 PM Mathieu Desnoyers
> <mathieu.desnoyers at efficios.com> wrote:
>>
>> ----- On Dec 28, 2020, at 2:44 PM, Andy Lutomirski luto at kernel.org wrote:
>>
>> > On Mon, Dec 28, 2020 at 11:09 AM Russell King - ARM Linux admin
>> > <linux at armlinux.org.uk> wrote:
>> >>
>> >> On Mon, Dec 28, 2020 at 07:29:34PM +0100, Jann Horn wrote:
>> >> > After chatting with rmk about this (but without claiming that any of
>> >> > this is his opinion), based on the manpage, I think membarrier()
>> >> > currently doesn't really claim to be synchronizing caches? It just
>> >> > serializes cores. So arguably if userspace wants to use membarrier()
>> >> > to synchronize code changes, userspace should first do the code
>> >> > change, then flush icache as appropriate for the architecture, and
>> >> > then do the membarrier() to ensure that the old code is unused?
>>
>> ^ exactly, yes.
>>
>> >> >
>> >> > For 32-bit arm, rmk pointed out that that would be the cacheflush()
>> >> > syscall. That might cause you to end up with two IPIs instead of one
>> >> > in total, but we probably don't care _that_ much about extra IPIs on
>> >> > 32-bit arm?
>>
>> This was the original thinking, yes. The cacheflush IPI will flush specific
>> regions of code, and the membarrier IPI issues context synchronizing
>> instructions.
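(For concreteness, a minimal sketch of the sequence being described
above, assuming the MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE command
from linux/membarrier.h and GCC/clang's __builtin___clear_cache(),
which on 32-bit arm ends up in the cacheflush() syscall. Error handling
is omitted; this only illustrates the two-step flow, it doesn't bless
it.)

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

/* Call once per process, before the SYNC_CORE command is usable. */
static int membarrier_register(void)
{
	return syscall(__NR_membarrier,
		       MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE,
		       0, 0);
}

/* Publish newly written code so that, once this returns, no thread in
 * the process is still executing the old instructions. */
static void publish_code(void *dst, const void *src, size_t len)
{
	/* 1. Perform the code change. */
	memcpy(dst, src, len);

	/* 2. Architecture-specific cache maintenance; on 32-bit arm this
	 *    is lowered to the cacheflush() syscall, on x86 it's a no-op. */
	__builtin___clear_cache((char *)dst, (char *)dst + len);

	/* 3. Context-synchronize every thread in the address space. */
	syscall(__NR_membarrier,
		MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0, 0);
}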
APIs should be written in terms of the service they provide to
userspace, and in the highest-level terms possible, rather than
directing hardware to do some low-level operation. Unfortunately we're
stuck with this one for now. We could deprecate it and replace it,
though.
If userspace wants to modify code and ensure that, after the system
call returns, no other thread will be executing the previous code, then
there should be an API for that. It could actually combine the two IPIs
into one for architectures that require both, too.
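Purely as a strawman, with names invented here (nothing like this
exists today), such an interface might look like:

#include <stddef.h>

/* Hypothetical, nonexistent interface -- illustration only.  One call
 * that performs whatever cache maintenance the architecture needs for
 * the modified range *and* context-synchronizes every thread in the
 * address space, so a single IPI can do both jobs where the hardware
 * wants both. */
struct membarrier_code_sync_range {
	void	*start;		/* start of the modified code */
	size_t	 len;		/* length of the modified code in bytes */
};

/* After this returns, no thread in the caller's address space is still
 * executing the previous instructions in any of the given ranges. */
int membarrier_code_sync(const struct membarrier_code_sync_range *ranges,
			 int nr_ranges);

The point being that the kernel knows what the architecture actually
requires, so it can fold the cache maintenance and the context
synchronization into a single pass over the CPUs.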
>>
>> Architectures with coherent i/d caches don't need the cacheflush step.
>
> There are different levels of coherency -- VIVT architectures may have
> differing requirements compared to PIPT, etc.
>
> In any case, I feel like the approach taken by the documentation is
> fundamentally confusing. Architectures don't all speak the same
> language. How about something like:
>
> The SYNC_CORE operation causes all threads in the caller's address
> space (including the caller) to execute an architecture-defined
> barrier operation. membarrier() will ensure that this barrier is
> executed at a time such that all data writes done by the calling
> thread before membarrier() are made visible by the barrier.
> Additional architecture-dependent cache management operations may be
> required to use this for JIT code.
As I said, this isn't what SYNC_CORE does, and it's not what powerpc
context synchronizing instructions do either; visibility of stores can
very much be re-ordered around such an instruction.
A thread completes store instructions into a store queue, and that is
as far as a context synchronizing event goes. Visibility comes at
some indeterminate time later.
Also note that the regular membarrier syscall, which does order
memory, likewise does not make writes visible; it just ensures an
ordering.
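To make that concrete, the ordinary expedited command is meant to be
used as the heavyweight side of an asymmetric fence, pairing with plain
compiler barriers on the fast path. A sketch only, assuming
MEMBARRIER_CMD_PRIVATE_EXPEDITED and that
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED was issued at startup:

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

/* Slow path: behaves as if every running thread of the process had
 * executed a full memory barrier. It orders accesses; it does not push
 * anyone's pending stores out to memory. */
static void heavy_fence(void)
{
	syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0, 0);
}

/* Fast path: a compiler barrier is enough, because the slow path's
 * membarrier() supplies the ordering on this thread's behalf. */
static void light_fence(void)
{
	atomic_signal_fence(memory_order_seq_cst);
}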
>
> x86: SYNC_CORE executes a barrier that will cause subsequent
> instruction fetches to observe prior writes. Currently this will be a
I would be surprised if x86's serializing instructions were different
from powerpc's here. rdtsc ordering or flushing stores to cache would
be surprising.
Thanks,
Nick
> "serializing" instruction, but, if future improved CPU documentation
> becomes available and relaxes this requirement, the barrier may
> change. The kernel guarantees that writing new or modified
> instructions to normal memory (and issuing SFENCE if the writes were
> non-temporal) then doing a membarrier SYNC_CORE operation is
> sufficient to cause all threads in the caller's address space to
> execute the new or modified instructions. This is true regardless of
> whether or not those instructions are written at the same virtual
> address from which they are subsequently executed. No additional
> cache management is required on x86.
>
> arm: Something about the cache management syscalls.
>
> arm64: Ditto
>
> powerpc: I have no idea.
>
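(If it helps to see it spelled out, the x86 sequence described above
would be something like the sketch below, assuming the new instructions
were written with non-temporal stores so that the SFENCE is actually
needed. This only illustrates the proposed wording; it is not something
the kernel documents today.)

#include <emmintrin.h>
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

static void publish_code_nt(int *dst, const int *src, size_t nwords)
{
	/* Write the new instructions with non-temporal stores (MOVNTI). */
	for (size_t i = 0; i < nwords; i++)
		_mm_stream_si32(&dst[i], src[i]);

	/* Order the non-temporal stores before the membarrier() below. */
	_mm_sfence();

	/* No further cache management needed on x86; SYNC_CORE alone is
	 * enough for other threads to pick up the new instructions. */
	syscall(__NR_membarrier,
		MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0, 0);
}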