[RFC PATCH v2 08/11] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

Logan Gunthorpe logang at deltatee.com
Fri Mar 12 20:06:34 GMT 2021



On 2021-03-12 12:47 p.m., Robin Murphy wrote:
>>>>    {
>>>>        struct scatterlist *s, *cur = sg;
>>>>        unsigned long seg_mask = dma_get_seg_boundary(dev);
>>>> @@ -864,6 +865,20 @@ static int __finalise_sg(struct device *dev,
>>>> struct scatterlist *sg, int nents,
>>>>            sg_dma_address(s) = DMA_MAPPING_ERROR;
>>>>            sg_dma_len(s) = 0;
>>>>    +        if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
>>>> +            if (i > 0)
>>>> +                cur = sg_next(cur);
>>>> +
>>>> +            sg_dma_address(cur) = sg_phys(s) + s->offset -
>>>
>>> Are you sure about that? ;)
>>
>> Do you see a bug? I don't follow you...
> 
> sg_phys() already accounts for the offset, so you're adding it twice.

Ah, oops. Nice catch. I missed that.

> 
>>>> +                pci_p2pdma_bus_offset(sg_page(s));
>>>
>>> Can the bus offset make P2P addresses overlap with regions of mem space
>>> that we might use for regular IOVA allocation? That would be very bad...
>>
>> No. IOMMU drivers already disallow all PCI addresses from being used as
>> IOVA addresses. See, for example,  dmar_init_reserved_ranges(). It would
>> be a huge problem for a whole lot of other reasons if it didn't.
> 
> I know we reserve the outbound windows (largely *because* some host 
> bridges will consider those addresses as attempts at unsupported P2P and 
> prevent them working), I just wanted to confirm that this bus offset is 
> always something small that stays within the relevant window, rather 
> than something that might make a BAR appear in a completely different 
> place for P2P purposes. If so, that's good.

Yes, well if an IOVA overlaps with any PCI bus address there's going to 
be some difficult brokenness, because when the IOVA is used it might be 
directed to a PCI device instead of the IOMMU. I fixed a bug like that 
once.

>>> I'm not really thrilled about the idea of passing zero-length segments
>>> to iommu_map_sg(). Yes, it happens to trick the concatenation logic in
>>> the current implementation into doing what you want, but it feels 
>>> fragile.
>>
>> We're not passing zero length segments to iommu_map_sg() (or any
>> function). This loop is just scanning to calculate the length of the
>> required IOVA. __finalise_sg() (which is intimately tied to this loop)
>> then needs a way to determine which segments were P2P segments. The
>> existing code already overwrites s->length with an aligned length and
>> stores the original length in sg_dma_len. So we're not relying on
>> tricking any logic here.
> 
> Yes, we temporarily shuffle in page-aligned quantities to satisfy the 
> needs of the iommu_map_sg() call, before unpacking things again in 
> __finalise_sg(). It's some disgusting trickery that I'm particularly 
> proud of. My point is that if you have a mix of both p2p and normal 
> segments - which seems to be a case you want to support - then the 
> length of 0 that you set to flag p2p segments here will be seen by 
> iommu_map_sg() (as it walks the list to map the other segments) before 
> you then use it as a key to override the DMA address in the final step. 
> It's not a concern if you have a p2p-only list and short-circuit 
> straight to that step (in which case all the shuffling was wasted effort 
> anyway), but since it's not entirely clear what a segment with zero 
> length would mean in general, it seems like a good idea to avoid passing 
> the list across a public boundary in that state, if possible.

Ok, well, iommu_map_sg() does the right thing as-is without any changes, 
and IMO setting sg->length to zero does make sense here. Supporting 
mixed P2P and normal segments is really the whole point of this series 
(the current kernel only supports homogeneous SGLs through a specialized 
path -- see pci_p2pdma_unmap_sg_attrs()). But do you have an alternate 
solution for sg->length?

Logan



More information about the Linux-nvme mailing list