[PATCH v6 0/9] variable-order, large folios for anonymous memory
John Hubbard
jhubbard at nvidia.com
Sun Nov 12 19:57:47 PST 2023
On 9/29/23 4:44 AM, Ryan Roberts wrote:
> Hi All,
>
> This is v6 of a series to implement variable order, large folios for anonymous
> memory. (previously called "ANON_LARGE_FOLIO", "LARGE_ANON_FOLIO",
> "FLEXIBLE_THP", but now exposed as an extension to THP; "small-order THP"). The
> objective of this is to improve performance by allocating larger chunks of
> memory during anonymous page faults:
...
>
> The major change in this revision is the addition of sysfs controls to allow
> this "small-order THP" to be enabled/disabled/configured independently of
> PMD-order THP. The approach I've taken differs a bit from previous discussions;
> instead of creating a whole new interface ("large_folio"), I'm extending THP. I
> personally think this makes things clearer and more extensible. See [6] for
> detailed rationale.
>
Hi Ryan and all,
I've done some initial performance testing of this patchset on an arm64
SBSA server. When these patches are combined with the arm64 arch contpte
patches in Ryan's git tree (he has conveniently combined everything
here: [1]), we are seeing a remarkable, consistent speedup of 10.5x on
some memory-intensive workloads. Many test runs, conducted independently
by different engineers and on different machines, have convinced me and
my colleagues that this is an accurate result.
In order to achieve that result, we used the git tree in [1] with the
following settings:
echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo recommend >/sys/kernel/mm/transparent_hugepage/anon_orders
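(A quick sanity check when reproducing this, assuming the anon_orders
file that this v6 series introduces, is simply to read the settings
back:

cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/anon_orders

The first should report "[always]"; I won't guess at the exact
anon_orders output format here.)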
This was on an aarch64 machine configured to use a 64KB base page size.
That configuration means that the PMD size is 512MB, which is of course
too large for practical use as a pure PMD-THP. However, with these
small-size (less than PMD-sized) THPs, we get the improvements in TLB
coverage, while still getting pages that are small enough to be
effectively usable.
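(For anyone not steeped in the arm64 page table layout, the arithmetic
behind those numbers is just the standard 64KB-granule figures, nothing
specific to this tree:

  entries per page table level  = 64KB / 8 bytes     = 8192
  PMD coverage                  = 8192 * 64KB        = 512MB
  contpte (contiguous bit) span = 32 entries * 64KB  = 2MB

So, assuming the contpte patches coalesce at that 2MB granularity, a
single TLB entry can cover 2MB without paying the 512MB cost of a full
PMD-THP allocation.)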
These results are admittedly limited to aarch64 CPUs so far (because the
contpte TLB coalescing behavior plays a big role), but it's nice to see
real performance numbers from real computers.
Up until now, there has been some healthy discussion and debate about
various aspects of this patchset. This data point shows that at least
for some types of memory-intensive workloads (and I apologize for being
vague, at this point, about exactly *which* workloads), the performance
gains are really worth it: ~10x!
[1] https://gitlab.arm.com/linux-arm/linux-rr.git
(branch: features/granule_perf/anonfolio-v6-contpte-v2)
thanks,
--
John Hubbard
NVIDIA