[PATCH v12 07/10] secretmem: add memcg accounting

Roman Gushchin guro at fb.com
Mon Nov 30 15:15:40 EST 2020


On Sun, Nov 29, 2020 at 07:26:25PM +0200, Mike Rapoport wrote:
> On Sun, Nov 29, 2020 at 07:53:45AM -0800, Shakeel Butt wrote:
> > On Wed, Nov 25, 2020 at 1:51 AM Mike Rapoport <rppt at kernel.org> wrote:
> > >
> > > From: Mike Rapoport <rppt at linux.ibm.com>
> > >
> > > Account memory consumed by secretmem to memcg. The accounting is updated
> > > when the memory is actually allocated and freed.
> > >
> > > Signed-off-by: Mike Rapoport <rppt at linux.ibm.com>
> > > Acked-by: Roman Gushchin <guro at fb.com>
> > > ---
> > >  mm/filemap.c   |  3 ++-
> > >  mm/secretmem.c | 36 +++++++++++++++++++++++++++++++++++-
> > >  2 files changed, 37 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > index 249cf489f5df..cf7f1dc9f4b8 100644
> > > --- a/mm/filemap.c
> > > +++ b/mm/filemap.c
> > > @@ -42,6 +42,7 @@
> > >  #include <linux/psi.h>
> > >  #include <linux/ramfs.h>
> > >  #include <linux/page_idle.h>
> > > +#include <linux/secretmem.h>
> > >  #include "internal.h"
> > >
> > >  #define CREATE_TRACE_POINTS
> > > @@ -844,7 +845,7 @@ static noinline int __add_to_page_cache_locked(struct page *page,
> > >         page->mapping = mapping;
> > >         page->index = offset;
> > >
> > > -       if (!huge) {
> > > +       if (!huge && !page_is_secretmem(page)) {
> > >                 error = mem_cgroup_charge(page, current->mm, gfp);
> > >                 if (error)
> > >                         goto error;
> > > diff --git a/mm/secretmem.c b/mm/secretmem.c
> > > index 52a900a135a5..eb6628390444 100644
> > > --- a/mm/secretmem.c
> > > +++ b/mm/secretmem.c
> > > @@ -18,6 +18,7 @@
> > >  #include <linux/memblock.h>
> > >  #include <linux/pseudo_fs.h>
> > >  #include <linux/secretmem.h>
> > > +#include <linux/memcontrol.h>
> > >  #include <linux/set_memory.h>
> > >  #include <linux/sched/signal.h>
> > >
> > > @@ -44,6 +45,32 @@ struct secretmem_ctx {
> > >
> > >  static struct cma *secretmem_cma;
> > >
> > > +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> > > +{
> > > +       int err;
> > > +
> > > +       err = memcg_kmem_charge_page(page, gfp, order);
> > > +       if (err)
> > > +               return err;
> > > +
> > > +       /*
> > > +        * secretmem caches are unreclaimable kernel allocations, so treat
> > > +        * them as unreclaimable slab memory for VM statistics purposes
> > > +        */
> > > +       mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
> > > +                           PAGE_SIZE << order);
> > 
> > Please use mod_lruvec_page_state() instead, so we get the memcg stats too.
> 
> Ok
> 
> > BTW I think secretmem deserves a vmstat entry instead of overloading
> > NR_SLAB_UNRECLAIMABLE_B.
> 
> I'd prefer to wait with a dedicated vmstat entry for now. We can always add it
> later, once we have a better picture of secretmem usage.

+1 here.

From what I understand, it's not clear yet how big typical secret areas will be.
If there are only a few 2MB areas per container (e.g. for storing some keys),
IMO that doesn't justify adding a separate counter. If usage turns out to be
measured in GBs, then we'll add it later.

Thanks!



More information about the linux-arm-kernel mailing list