[PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory

Logan Gunthorpe logang at deltatee.com
Thu Mar 1 10:04:32 PST 2018



On 28/02/18 08:56 PM, Benjamin Herrenschmidt wrote:
> On Thu, 2018-03-01 at 14:54 +1100, Benjamin Herrenschmidt wrote:
>> The problem is that according to him (I didn't double-check the latest
>> patches) you effectively hotplug the PCIe memory into the system when
>> creating struct pages.
>>
>> This cannot possibly work for us. First, we cannot map PCIe memory as
>> cacheable. (Note that doing so is a bad idea if you are behind a PLX
>> switch anyway, since you'd have to manage cache coherency in SW.)
> 
> Note: I think the above means it won't work behind a switch on x86
> either, will it ?

This works perfectly fine on x86 behind a switch and we've tested it on 
multiple machines. We've never had an issue with running out of virtual 
address space despite our PCI BARs typically being located at an offset 
of 56TB or more. The arch code on x86 also somehow figures out not to 
map the memory as cacheable, so that's not an issue (though, at this 
point, the CPU never accesses the memory, so even if it were, it 
wouldn't affect anything).

We also had this working on ARM64 a while back, but it required some 
out-of-tree ZONE_DEVICE patches and some truly horrid hacks to its arch 
code to ioremap the memory into the page map.

You didn't mention what architecture you were trying this on.

It may make sense at this point to make this feature dependent on x86 
until more work is done to make it properly portable. Something like 
arch functions that allow adding IO memory pages to the page map with a 
specific cache setting. Though, if an arch has such restrictive limits 
on the map size, it would probably need to address that too somehow.
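For concreteness, the "arch functions" idea above could be sketched 
roughly as follows. This is a pseudocode sketch only: the hook name 
(arch_add_io_memory) and its signature are invented for illustration 
and do not exist in mainline; only the pci_resource_* accessors and 
pgprot helpers used in the caller are real kernel interfaces.

```c
/*
 * Hypothetical sketch -- not mainline code.  The idea: ZONE_DEVICE
 * setup would call into the arch to create page-map entries for a BAR
 * with an explicit cache attribute, rather than assuming a cacheable
 * linear mapping.
 */

/* Per-arch (hypothetical): map [start, start + size) into the kernel
 * page map with the given protection. */
int arch_add_io_memory(phys_addr_t start, resource_size_t size,
		       pgprot_t prot);

/* Caller side (hypothetical), when hotplugging a P2P BAR: */
static int p2p_map_bar(struct pci_dev *pdev, int bar)
{
	phys_addr_t start = pci_resource_start(pdev, bar);
	resource_size_t size = pci_resource_len(pdev, bar);

	/* Uncached: PCIe memory must not be mapped cacheable,
	 * particularly behind a switch where coherency is unmanaged
	 * in hardware. */
	return arch_add_io_memory(start, size,
				  pgprot_noncached(PAGE_KERNEL));
}
```

An arch that can't satisfy the requested attribute (or whose page map 
can't reach the BAR's physical offset) would simply fail the call, and 
the feature would cleanly refuse to enable there.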

Thanks,

Logan


