[RFC PATCH v11 00/29] KVM: guest_memfd() and per-page attributes
Nikunj A. Dadhania
nikunj at amd.com
Wed Jul 26 23:42:11 PDT 2023
On 7/26/2023 7:54 PM, Sean Christopherson wrote:
> On Wed, Jul 26, 2023, Nikunj A. Dadhania wrote:
>> On 7/24/2023 10:30 PM, Sean Christopherson wrote:
>>>> /proc/<qemu pid>/smaps
>>>> 7f528be00000-7f5c8be00000 rw-p 00000000 00:01 26629 /memfd:memory-backend-memfd-shared (deleted)
>>>> 7f5c90200000-7f5c90220000 rw-s 00000000 00:01 44033 /memfd:rom-backend-memfd-shared (deleted)
>>>> 7f5c90400000-7f5c90420000 rw-s 00000000 00:01 44032 /memfd:rom-backend-memfd-shared (deleted)
>>>> 7f5c90800000-7f5c90b7c000 rw-s 00000000 00:01 1025 /memfd:rom-backend-memfd-shared (deleted)
>>>
>>> This is all expected, and IMO correct. There are no userspace mappings, and so
>>> not accounting anything is working as intended.
>> That doesn't sound right: if 10 SNP guests are running, each using 10GB, how
>> would we know which of them is using the 100GB of memory?
>
> It's correct with respect to what the interfaces show, which is how much memory
> is *mapped* into userspace.
>
> As I said (or at least tried to say) in my first reply, I am not against exposing
> memory usage to userspace via stats, only that it's not obvious to me that the
> existing VMA-based stats are the most appropriate way to surface this information.
Right. Then should we think along the lines of adding a VM ioctl for querying the
current memory usage of guest_memfd?
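Roughly something like the sketch below, purely for illustration; the
KVM_GET_GMEM_STATS name, struct kvm_gmem_stats layout and the command number
are all made up here, not an existing KVM interface:

/* Hypothetical userspace side; nothing below exists in KVM today. */
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>

struct kvm_gmem_stats {
	uint64_t bytes_allocated;	/* pages currently backing guest_memfd */
	uint64_t reserved[7];		/* room for future fields */
};

/* 0xAE is the KVM ioctl magic (KVMIO); the command number is arbitrary. */
#define KVM_GET_GMEM_STATS	_IOR(0xAE, 0xd3, struct kvm_gmem_stats)

static int query_gmem_usage(int vm_fd)
{
	struct kvm_gmem_stats stats = { 0 };

	if (ioctl(vm_fd, KVM_GET_GMEM_STATS, &stats) < 0) {
		perror("KVM_GET_GMEM_STATS");
		return -1;
	}
	printf("guest_memfd allocated: %llu bytes\n",
	       (unsigned long long)stats.bytes_allocated);
	return 0;
}

The exact shape doesn't matter; the point is a per-VM query that works even
when nothing is mapped into userspace.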
We could use memcg for the statistics, but the memory cgroup can be disabled,
so memcg isn't really a dependable option.
Do you have any ideas on how to expose the memory usage to userspace other
than via VMA-based stats?
Regards,
Nikunj