[PATCH v17 07/10] mm: introduce memfd_secret system call to create "secret" memory areas

Michal Hocko mhocko at suse.com
Tue Feb 9 08:17:11 EST 2021


On Tue 09-02-21 11:09:38, Mike Rapoport wrote:
> On Tue, Feb 09, 2021 at 09:47:08AM +0100, Michal Hocko wrote:
> > On Mon 08-02-21 23:26:05, Mike Rapoport wrote:
> > > On Mon, Feb 08, 2021 at 11:49:22AM +0100, Michal Hocko wrote:
> > > > On Mon 08-02-21 10:49:17, Mike Rapoport wrote:
> > [...]
> > > > > The file descriptor based memory has several advantages over the
> > > > > "traditional" mm interfaces, such as mlock(), mprotect(), madvise(). It
> > > > > paves the way for VMMs to remove the secret memory range from the process;
> > > > 
> > > > I do not understand how it helps to remove the memory from the process,
> > > > as the interface explicitly allows adding memory that is removed from
> > > > all other processes via the direct map.
> > > 
> > > The current implementation does not help to remove the memory from the
> > > process, but using fd-backed memory seems a better interface to remove
> > > guest memory from host mappings than mmap. As Andy nicely put it:
> > > 
> > > "Getting fd-backed memory into a guest will take some possibly major work in
> > > the kernel, but getting vma-backed memory into a guest without mapping it
> > > in the host user address space seems much, much worse."
> > 
> > OK, so IIUC this means that the model is to hand over memory from host
> > to the guest. I thought the guest would be in control of its address
> > space and would therefore operate on the VMAs. This would benefit from
> > an additional and more specific clarification.
> 
> How would the guest operate on VMAs if the interface between host and guest
> is virtual hardware?

I have to say that I am not really familiar with this area, so my view
might be misleading or completely wrong. I thought that the HW address
ranges are mapped into the guest process and therefore have a VMA.
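
For context, my mental model is roughly the following (a sketch only, with
error handling trimmed and the KVM_CREATE_VM plumbing assumed): the VMM
mmap()s the guest RAM into its own address space, so a host VMA exists, and
the host virtual address is then handed to KVM. The fd-based approach, as I
understand Andy's point, is about eventually not having that mapping in the
VMM at all.

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch: how guest RAM is typically wired up today, via a host VMA. */
static int register_guest_ram(int vm_fd, uint64_t gpa, size_t size)
{
	/* Guest RAM lives in a VMA of the VMM process... */
	void *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ram == MAP_FAILED)
		return -1;

	/* ...and KVM is told about it via the host virtual address. */
	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = (uint64_t)(uintptr_t)ram,
	};
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}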

> If you mean qemu (or any other userspace part of a VMM that uses KVM), one
> of the points Andy mentioned back then was to remove the mappings of the
> guest memory from the qemu process.
>  
> > > > > As the secret memory implementation is not an extension of tmpfs or hugetlbfs,
> > > > > usage of a dedicated system call rather than hooking new functionality into
> > > > > memfd_create(2) emphasises that memfd_secret(2) has different semantics and
> > > > > allows better upwards compatibility.
> > > > 
> > > > What is this supposed to mean? What are the differences?
> > > 
> > > Well, the phrasing could indeed be better. That was supposed to mean that
> > > they differ in the semantics behind the file descriptor: memfd_create
> > > implements sealing for shmem and hugetlbfs while memfd_secret implements
> > > memory hidden from the kernel.
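
(For the record, the sealing semantics memfd_create() exposes today look
roughly like this; a sketch only, not taken from the series:)

#define _GNU_SOURCE
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch: memfd_create(2) semantics today -- an fd whose backing object
 * can be sealed against later resizing (or writes, if desired). */
static int sealed_memfd(size_t size)
{
	int fd = memfd_create("guest-ram", MFD_CLOEXEC | MFD_ALLOW_SEALING);
	if (fd < 0)
		return -1;
	if (ftruncate(fd, size) < 0 ||
	    fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}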
> > 
> > Right, but why is the memfd_create model not sufficient for the usecase?
> > Please note that I am not arguing against it. To be honest I do not really care
> > much. Using an existing scheme is usually preferable from my POV but
> > there might be real reasons why shmem as a backing "storage" is not
> > appropriate.
>    
> Citing my older email:
> 
>     I hesitated whether to continue using new flags to memfd_create() or to
>     add a new system call, and I decided on a new system call after I had
>     started to look into the man page updates. There would have been two completely
>     independent descriptions and I think it would have been very confusing.

Could you elaborate? Unmapping from the kernel address space can work
for both sealed and hugetlb memfds, no? Those features are completely
orthogonal AFAICS. With a dedicated syscall you will need to introduce
this functionality on top if that is required. Have you considered that?
I mean hugetlb pages are used to back guest memory very often. Is this
something that will be a secret memory usecase?
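
For reference, this is how I read the consumer side of the proposed
interface (a sketch; __NR_memfd_secret comes from the patched uapi headers,
the flags are whatever v17 defines, and there is no hugetlb variant today,
which is exactly my question above):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stddef.h>

/* Sketch: allocating memory that is dropped from the kernel direct map,
 * per my reading of the series. A hypothetical MFD_SECRET flag to
 * memfd_create() would look the same from here on; only the fd creation
 * step would differ. */
static void *alloc_secret(size_t size)
{
	int fd = syscall(__NR_memfd_secret, 0);
	if (fd < 0)
		return NULL;
	if (ftruncate(fd, size) < 0) {
		close(fd);
		return NULL;
	}
	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);	/* the mapping keeps the backing object alive */
	return p == MAP_FAILED ? NULL : p;
}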

Please be really specific when giving arguments to back a new syscall
decision.
-- 
Michal Hocko
SUSE Labs


