[PATCH v6 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation
Mike Rapoport
rppt at kernel.org
Tue Sep 29 09:05:29 EDT 2020
On Fri, Sep 25, 2020 at 09:41:25AM +0200, Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 04:29:03PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt at linux.ibm.com>
> >
> > Removing a PAGE_SIZE page from the direct map every time such page is
> > allocated for a secret memory mapping will cause severe fragmentation of
> > the direct map. This fragmentation can be reduced by using PMD-size pages
> > as a pool for small pages for secret memory mappings.
> >
> > Add a gen_pool per secretmem inode and lazily populate this pool with
> > PMD-size pages.
>
> What's the actual efficacy of this? Since the pmd is per inode, all I
> need is a lot of inodes and we're in business to destroy the directmap,
> no?
>
> Afaict there's no privs needed to use this, all a process needs is to
> stay below the mlock limit, so a 'fork-bomb' that maps a single secret
> page will utterly destroy the direct map.
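To put the scenario in concrete terms, this is all it takes from
userspace (a sketch; there is no libc wrapper yet, and the syscall
number below is a placeholder, the actual one comes from the unistd.h
changes in this series):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* placeholder, see the series */
#endif

int main(void)
{
	/* one inode per fd: every process doing this pulls its own
	 * PMD-size chunk out of the direct map */
	int fd = syscall(__NR_memfd_secret, 0);
	if (fd < 0)
		return 1;

	if (ftruncate(fd, 4096) < 0)
		return 1;

	/* the mapping is capped only by RLIMIT_MEMLOCK */
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	p[0] = 1;	/* fault in a single secret page */
	return 0;
}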
This indeed will cause 1G pages in the direct map to be split into 2M
chunks, but I disagree with the term 'destroy' here. Citing the cover
letter of an earlier version of this series:
I've tried to find some numbers that show the benefit of using larger
pages in the direct map, but I couldn't find anything, so I've run a
couple of benchmarks from phoronix-test-suite on my laptop (i7-8650U
with 32G RAM).
I've tested three variants: the default, with 28G of the physical
memory covered by 1G pages; 1G pages disabled with "nogbpages" on
the kernel command line; and, finally, the entire direct map forced
to 4K pages using a simple patch to arch/x86/mm/init.c. I've run the
benchmarks on both SSD and tmpfs.
Surprisingly, the results do not show a huge advantage for large
pages. For instance, here are the results for a kernel build with
'make -j8', in seconds:
                     |   1G   |   2M   |   4K
---------------------+--------+--------+--------
ssd, mitigations=on  | 308.75 | 317.37 | 314.90
ssd, mitigations=off | 305.25 | 295.32 | 304.92
ram, mitigations=on  | 301.58 | 322.49 | 306.54
ram, mitigations=off | 299.32 | 288.44 | 310.65
All the results I have are available here:
https://docs.google.com/spreadsheets/d/1tdD-cu8e93vnfGsTFxZ5YdaEfs2E1GELlvWNOGkJV2U/edit?usp=sharing
The numbers suggest that using smaller pages in the direct map does not
necessarily lead to performance degradation, and some runs even produced
better results with smaller pages in the direct map.
> I really don't like this, at all.
>
> IIRC Kirill looked at merging the directmap. I think he ran into
> performance issues there, but we really need something like that before
> something like this lands.
--
Sincerely yours,
Mike.