Excessive TLB flush ranges

Nadav Amit nadav.amit at gmail.com
Tue May 16 18:23:27 PDT 2023


> On May 16, 2023, at 5:23 PM, Thomas Gleixner <tglx at linutronix.de> wrote:
> 
> On Tue, May 16 2023 at 21:32, Thomas Gleixner wrote:
>> On Tue, May 16 2023 at 10:56, Nadav Amit wrote:
>>>> On May 16, 2023, at 7:38 AM, Thomas Gleixner <tglx at linutronix.de> wrote:
>>>> 
>>>> There is a world outside of x86, but even on x86 it's borderline silly
>>>> to take the whole TLB out when you can flush 3 TLB entries one by one
>>>> with exactly the same number of IPIs, i.e. _one_. No?
>>> 
>>> I just want to re-raise points that were made in the past, including in
>>> the discussion that I sent before and match my experience.
>>> 
>>> Feel free to reject them, but I think you should not ignore them.
>> 
>> I'm not ignoring them and I'm well aware of these issues. No need to
>> repeat them over and over. I'm old but not senile yet.

Thomas, no disrespect was intended. I initially just sent the link and I
had a sense (based on my past experience) that nobody clicked on it.

> 
> Just to be clear. This works the other way round too.
> 
> It makes a whole lot of a difference whether you do 5 IPIs in a row
> which all need to get a cache line updated or if you have _one_ which
> needs a couple of cache lines updated.

Obviously, if the question is 5 IPIs or 1 IPI with more flushing data,
the 1 IPI wins. The question I was focusing on is whether that single
IPI should carry a potentially global flush or a detailed list of
ranges to flush.
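
To make the tradeoff concrete, here is a rough sketch of the decision
the (single) IPI handler has to make on each receiving CPU. The names
are illustrative, loosely modeled on what flush_tlb_kernel_range()
already does on x86 with tlb_single_page_flush_ceiling; this is not the
actual code:

struct kernel_flush_info {                      /* hypothetical IPI payload */
        unsigned long   start;
        unsigned long   end;
        unsigned int    nr_pages;
};

/* Runs on every CPU that received the one IPI. */
static void flush_kernel_range_on_this_cpu(const struct kernel_flush_info *f)
{
        unsigned long addr;

        /*
         * Past some ceiling a per-page INVLPG loop loses to a full
         * flush; tlb_single_page_flush_ceiling is the existing x86 knob.
         */
        if (f->nr_pages > tlb_single_page_flush_ceiling) {
                __flush_tlb_all();              /* global flush */
                return;
        }

        for (addr = f->start; addr < f->end; addr += PAGE_SIZE)
                flush_tlb_one_kernel(addr);     /* precise, per-VA INVLPG */
}

Either way it is one IPI; the question is only which branch is the
right one for cases like the 3-page flush above.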

> 
> INVLPG is not serializing so the CPU can pull in the next required cache
> line(s) on the VA list during that.

Indeed, but ChatGPT says (yes, I see you making fun of me already):
“however, this doesn't mean INVLPG has no impact on the pipeline. INVLPG
can cause a pipeline stall because the TLB entry invalidation must be
completed before subsequent instructions that might rely on the TLB can
be executed correctly.”

So I am not sure that your claim is exactly correct.

> These cache lines are _not_
> contended at that point because _all_ of these data structures are no
> longer globally accessible (mis-speculation aside) and therefore not
> exclusive (misalignment aside, but you have to prove that this is an
> issue).

This is not entirely true. Indeed, whether you have 1 remote core or N
remote cores does not make much of a difference (putting NUMA aside).
But you will first get a snoop of the initiator’s cache by the
responding core, and then, after the TLB invalidation is completed, an
RFO by the initiator once it writes to that cache line again. If the
invalidation data is on the stack (as in your prototype), this is even
more likely to happen shortly after.

> 
> So just dismissing this on 10-year-old experience is not really
> helpful, though I'm happy to confirm your points once I had the time and
> opportunity to actually run real testing over it, unless you beat me to
> it.

I really don’t know what “dismissing” you are talking about. I do have
relatively recent experience with the overhead of caching effects on
TLB shootdown time. It can become very apparent. You can find some
numbers in, for instance, the patch of mine I quoted in my previous
email.

There are additional opportunities to reduce the caching effects on
x86, such as combining the SMP-code metadata with the TLB-invalidation
metadata (out of scope here), which I have seen provide a performance
benefit. All of this is to say that caching effects are not an obsolete
concern.
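
To illustrate what I mean by combining the metadata (a simplified,
hypothetical layout, not a patch): keeping the flush arguments in the
same cache line as the smp_call_function() bookkeeping means the
responder pulls in one line instead of two, and the initiator later
takes one RFO instead of two:

/* Illustrative only; call_single_data_t is the existing SMP IPI descriptor. */
struct tlb_flush_request {
        call_single_data_t      csd;            /* IPI/SMP bookkeeping   */
        unsigned long           start;          /* flush arguments share */
        unsigned long           end;            /* the same cache line   */
} ____cacheline_aligned;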

> 
> What I can confirm is that it solves a real world problem on !x86
> machines for the pathological case at hand
> 
>   On the affected contemporary ARM32 machine, which does not require
>   IPIs, the selective flush is way better than:
> 
>   - the silly 1.G range one page by one flush (which is silly on its
>     own as there is no range check)
> 
>   - a full tlb flush just for 3 pages, which is the same on x86 albeit
>     the flush range is ~64GB there.
> 
> The point is that the generic vmalloc code is making assumptions which
> are x86-centric and not even necessarily true on x86.
> 
> Whether or not this is beneficial on x86 is a completely separate
> debate.

I fully understand that if you reduce multiple TLB shootdowns (IPI-wise)
into one, it is (pretty much) all benefit and there is no tradeoff. I was
focusing on the question of whether it is also beneficial to do precise
TLB flushing, and the tradeoff there is less clear (especially since the
kernel uses 2MB pages).
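
For a sense of scale (back-of-the-envelope, and assuming the default x86
ceiling of 33 pages before switching to a full flush): the ~64GB range
you mention above would be 64GB / 2MB = 32768 precise flushes, far past
the point where a full flush wins, while the 3-page case is comfortably
below it.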

My experience with non-IPI based TLB invalidations is more limited. IIUC
the usage model is that the TLB invalidations should be issued as soon
as possible (each range can perhaps be batched internally, but there is
little point in batching multiple ranges together), and then later you
issue some barrier to ensure that the prior invalidations have
completed, roughly along the lines of the sketch below.
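
Very roughly, and with made-up helper names (broadcast_tlbi_range() and
tlbi_completion_barrier() stand in for whatever the architecture really
provides, e.g. TLBI plus DSB ISH on arm64), the model I have in mind
looks like this:

struct flush_range {
        unsigned long   start;
        unsigned long   end;
};

/* Placeholder primitives, not real kernel interfaces. */
extern void broadcast_tlbi_range(unsigned long start, unsigned long end);
extern void tlbi_completion_barrier(void);

static void flush_kernel_ranges_broadcast(const struct flush_range *r, int nr)
{
        int i;

        /* Fire off each invalidation as soon as the range is known... */
        for (i = 0; i < nr; i++)
                broadcast_tlbi_range(r[i].start, r[i].end);     /* async, no IPI */

        /* ...and wait once for all of them to complete everywhere. */
        tlbi_completion_barrier();
}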

If that is the (use) case, I am not sure the abstraction you used in
your prototype is the best one.


> There is also a debate required whether a wholesale "flush on _ALL_
> CPUs' is justified when some of those CPUs are completely isolated and
> have absolutely no chance to be affected by that. This process-bound
> seccomp/BPF muck clearly does not justify kicking isolated CPUs out of
> their computation in user space just because…

I hope you will excuse my ignorance (I am sure you won’t), but aren’t
the seccomp/BPF VMAP ranges mapped in all processes (considering PTI,
of course)? Are you suggesting you want a per-process kernel address
space? (Which can make sense, I guess.)



