[PATCH v8 2/9] mmap: make mlock_future_check() global
David Hildenbrand
david at redhat.com
Thu Nov 12 11:22:00 EST 2020
On 10.11.20 19:06, Mike Rapoport wrote:
> On Tue, Nov 10, 2020 at 06:17:26PM +0100, David Hildenbrand wrote:
>> On 10.11.20 16:14, Mike Rapoport wrote:
>>> From: Mike Rapoport <rppt at linux.ibm.com>
>>>
>>> It will be used by the upcoming secret memory implementation.
>>>
>>> Signed-off-by: Mike Rapoport <rppt at linux.ibm.com>
>>> ---
>>> mm/internal.h | 3 +++
>>> mm/mmap.c | 5 ++---
>>> 2 files changed, 5 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index c43ccdddb0f6..ae146a260b14 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
>>> extern void mlock_vma_page(struct page *page);
>>> extern unsigned int munlock_vma_page(struct page *page);
>>> +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>> + unsigned long len);
>>> +
>>> /*
>>> * Clear the page's PageMlocked(). This can be useful in a situation where
>>> * we want to unconditionally remove a page from the pagecache -- e.g.,
>>> diff --git a/mm/mmap.c b/mm/mmap.c
>>> index 61f72b09d990..c481f088bd50 100644
>>> --- a/mm/mmap.c
>>> +++ b/mm/mmap.c
>>> @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
>>> return hint;
>>> }
>>> -static inline int mlock_future_check(struct mm_struct *mm,
>>> - unsigned long flags,
>>> - unsigned long len)
>>> +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>> + unsigned long len)
>>> {
>>> unsigned long locked, lock_limit;
>>>
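(As context for why the helper is being exported: a minimal sketch of how a
secretmem mmap hook might call the now-global function. The name
secretmem_mmap and the exact flag handling here are my assumptions about the
later patches in this series, not code taken from this mail.)

/* Hypothetical caller, e.g. in mm/secretmem.c -- illustration only. */
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long len = vma->vm_end - vma->vm_start;
	int err;

	/* Charge the mapping against RLIMIT_MEMLOCK as if it were mlocked. */
	err = mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len);
	if (err)
		return err;

	vma->vm_flags |= VM_LOCKED;
	return 0;
}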
>>
>> So, an interesting question is whether you actually want to charge
>> secretmem pages against the mlock limit now, or whether you want a
>> dedicated secretmem cgroup controller instead?
>
> Well, with the current implementation there are three limits an
> administrator can use to control secretmem usage: mlock, memcg and a
> kernel parameter.
>
> The kernel parameter puts a global upper limit on secretmem usage,
> memcg accounts all secretmem allocations, including the unused memory
> cached in large pages, and mlock allows a per-task limit for secretmem
> mappings, well, just like it does for mlock() itself.
>
> I didn't consider a dedicated cgroup, as it seems we already have enough
> existing knobs and a new one would be unnecessary.
To me it feels like the mlock() limit is the wrong fit for secretmem. But
maybe there are other cases of charging against the mlock() limit without
actually doing mlock() that I am not aware of (most probably :) )?

I mean, my concern is not earth-shattering; this can be reworked later.
As I said, it just feels wrong.
--
Thanks,
David / dhildenb