[RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory pin

Song Bao Hua (Barry Song) song.bao.hua at hisilicon.com
Sun Feb 7 17:24:28 EST 2021



> -----Original Message-----
> From: Matthew Wilcox [mailto:willy at infradead.org]
> Sent: Monday, February 8, 2021 10:34 AM
> To: Wangzhou (B) <wangzhou1 at hisilicon.com>
> Cc: linux-kernel at vger.kernel.org; iommu at lists.linux-foundation.org;
> linux-mm at kvack.org; linux-arm-kernel at lists.infradead.org;
> linux-api at vger.kernel.org; Andrew Morton <akpm at linux-foundation.org>;
> Alexander Viro <viro at zeniv.linux.org.uk>; gregkh at linuxfoundation.org; Song
> Bao Hua (Barry Song) <song.bao.hua at hisilicon.com>; jgg at ziepe.ca;
> kevin.tian at intel.com; jean-philippe at linaro.org; eric.auger at redhat.com;
> Liguozhu (Kenneth) <liguozhu at hisilicon.com>; zhangfei.gao at linaro.org;
> chensihang (A) <chensihang1 at hisilicon.com>
> Subject: Re: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
> pin
> 
> On Sun, Feb 07, 2021 at 04:18:03PM +0800, Zhou Wang wrote:
> > SVA (shared virtual address) offers a way for a device to share a
> > process's virtual address space safely, which makes user-space device
> > driver coding more convenient. However, IO page faults may happen when
> > doing DMA operations, and since the latency of an IO page fault is
> > relatively high, DMA performance is severely affected when such faults
> > occur. From a long-term view, DMA performance will not be stable.
> >
> > In high-performance I/O cases, accelerators might want to perform
> > I/O on memory without IO page faults, which can otherwise cause
> > dramatically increased latency. Current memory-related APIs cannot
> > meet this requirement; e.g. mlock() only prevents memory from being
> > swapped out to a backing device, while page migration can still
> > trigger IO page faults.
> 
> Well ... we have two requirements.  The application wants to not take
> page faults.  The system wants to move the application to a different
> NUMA node in order to optimise overall performance.  Why should the
> application's desires take precedence over the kernel's desires?  And why
> should it be done this way rather than by the sysadmin using numactl to
> lock the application to a particular node?

The NUMA balancer is just one of many sources of page migration. Even a
simple alloc_pages() call can trigger migration through memory
compaction, within a single NUMA node or on a UMA system.
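
A quick way to observe this, assuming CONFIG_COMPACTION is enabled
(my illustration, not part of the patch):

  # force compaction of all zones; this works on UMA systems too, and
  # movable pages, including mlock()ed ones, may be migrated
  echo 1 > /proc/sys/vm/compact_memory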

Other reasons for page migration include, but are not limited to:
* memory movement due to CMA
* memory movement due to huge page creation

We can hardly ask users to disable compaction, CMA and huge pages
across the whole system.
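
To make the mlock() limitation concrete, here is a minimal user-space
sketch (my illustration, not from the patch; it assumes a machine with
at least two NUMA nodes and builds with -lnuma):

/* mlock() protects against swap-out, but move_pages(2) will still
 * happily migrate the mlock()ed page to another node. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <numaif.h>		/* move_pages(), MPOL_MF_MOVE */

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(buf, 0, pagesz);		/* fault the page in */
	mlock(buf, pagesz);		/* pinned against swap-out only */

	void *pages[1] = { buf };
	int nodes[1] = { 1 };		/* target node, assumed to exist */
	int status[1];

	/* the mlock()ed page is migrated anyway; a device holding its
	 * old physical address via SVA would take an IO page fault */
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) == 0)
		printf("page now on node %d\n", status[0]);

	return 0;
}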

On the other hand, numactl doesn't always bind memory to a single NUMA
node; when an application needs more CPUs than a single node provides,
it may be bound to more than one memory node.
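
For example (hypothetical application and node numbers, shown only to
illustrate the point):

  # enough CPUs on one node: bind both CPUs and memory to node 0
  numactl --cpunodebind=0 --membind=0 ./app

  # the application needs the CPUs of two nodes, so its memory may
  # spread across both nodes as well
  numactl --cpunodebind=0,1 --membind=0,1 ./app

In the second case, pages can still be migrated between node 0 and
node 1, e.g. by the NUMA balancer, so binding alone does not prevent
IO page faults.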

Thanks
Barry



