[RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
Logan Gunthorpe
logang at deltatee.com
Tue Apr 18 13:48:03 PDT 2017
On 18/04/17 02:31 PM, Dan Williams wrote:
> On Tue, Apr 18, 2017 at 1:29 PM, Jerome Glisse <jglisse at redhat.com> wrote:
>>> On Tue, Apr 18, 2017 at 12:35 PM, Logan Gunthorpe <logang at deltatee.com>
>>> wrote:
>>>>
>>>>
>>>> On 18/04/17 01:01 PM, Jason Gunthorpe wrote:
>>>>> Ultimately every dma_ops will need special code to support P2P with
>>>>> the special hardware that ops is controlling, so it makes some sense
>>>>> to start by pushing the check down there in the first place. This
>>>>> advice is partially motivated by how dma_map_sg is just a small
>>>>> wrapper around the function pointer call...
>>>>
>>>> Yes, I noticed this problem too and that makes sense. It just means
>>>> every dma_ops will probably need to be modified to either support p2p
>>>> pages or fail on them. Though, the only real difficulty there is that it
>>>> will be a lot of work.
>>>
>>> I don't think you need to go touch all dma_ops; I think you can just
>>> arrange for devices that are going to do DMA to get redirected to a
>>> p2p-aware provider of operations that overrides the system default
>>> dma_ops. I.e. just touch get_dma_ops().
>>
>> This would not work well for everyone. On GPUs, for instance, we usually
>> have a buffer object with a mix of device memory and regular system
>> memory, but we call dma_map_sg once for the whole list.
>>
>
> ...and that dma_map goes through get_dma_ops(), so I don't see the conflict?
The main conflict is in dma_map_sg, which only calls get_dma_ops once per
device even though the scatterlist may contain memory of different types.
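
To illustrate, here is roughly what the dispatch looks like (a simplified
paraphrase of the dma_map_sg_attrs() wrapper in include/linux/dma-mapping.h,
not verbatim kernel source; the debug and sanity checks are omitted):

static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
                                   int nents, enum dma_data_direction dir,
                                   unsigned long attrs)
{
        /* ops is looked up once, based only on the device... */
        const struct dma_map_ops *ops = get_dma_ops(dev);

        /* ...so every entry in the scatterlist, p2p or not, is handed
         * to the same ops->map_sg() implementation. */
        return ops->map_sg(dev, sg, nents, dir, attrs);
}

So swapping in a p2p-aware ops at get_dma_ops() time only helps when the
whole list is p2p memory; a mixed list still ends up in a single map_sg
call on one set of ops.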
Logan