[PATCH 01/18] PCI/P2PDMA: refactor the p2pdma mapping helpers

Bjorn Helgaas helgaas at kernel.org
Mon Oct 28 13:59:02 PDT 2024


Prefer subject capitalization in drivers/pci:

  PCI/P2PDMA: Refactor ...

On Sun, Oct 27, 2024 at 04:21:01PM +0200, Leon Romanovsky wrote:
> From: Christoph Hellwig <hch at lst.de>
> 
> The current scheme with a single helper to determine the P2P status
> and map a scatterlist segment forces users to always use the map_sg
> helper to DMA map, which we're trying to get away from because they
> are very cache inefficient.
> ...
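
Just to make sure I follow the new split: as I read the hunks below, the
state lookup is now separate from the actual mapping, so a caller would do
something roughly like this (sketch only; the second argument to
pci_p2pdma_bus_addr_map() is my guess, since that part of the header isn't
quoted here, and page/offset/size/dir/paddr are whatever the caller has):

	struct pci_p2pdma_map_state p2pdma_state = {};
	dma_addr_t dma_addr;

	switch (pci_p2pdma_state(&p2pdma_state, dev, page)) {
	case PCI_P2PDMA_MAP_NONE:
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/* map with a regular IOVA / CPU physical address */
		dma_addr = dma_map_page(dev, page, offset, size, dir);
		break;
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/* use the PCI bus address directly, no IOMMU mapping */
		dma_addr = pci_p2pdma_bus_addr_map(&p2pdma_state, paddr);
		break;
	default:
		return -EREMOTEIO;
	}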

Acked-by: Bjorn Helgaas <bhelgaas at google.com>

A couple minor nits below.

> @@ -1412,28 +1411,29 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
>  		size_t s_length = s->length;
>  		size_t pad_len = (mask - iova_len + 1) & mask;
>  
> -		if (is_pci_p2pdma_page(sg_page(s))) {
> -			map = pci_p2pdma_map_segment(&p2pdma_state, dev, s);
> -			switch (map) {
> -			case PCI_P2PDMA_MAP_BUS_ADDR:
> -				/*
> -				 * iommu_map_sg() will skip this segment as
> -				 * it is marked as a bus address,
> -				 * __finalise_sg() will copy the dma address
> -				 * into the output segment.
> -				 */
> -				continue;
> -			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> -				/*
> -				 * Mapping through host bridge should be
> -				 * mapped with regular IOVAs, thus we
> -				 * do nothing here and continue below.
> -				 */
> -				break;
> -			default:
> -				ret = -EREMOTEIO;
> -				goto out_restore_sg;
> -			}
> +		switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(s))) {
> +		case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> +			/*
> +			 * Mapping through host bridge should be mapped with
> +			 * regular IOVAs, thus we do nothing here and continue
> +			 * below.
> +			 */

I guess this is technically not a fall-through to the next case
because there's no executable code here, but since the comment
separates these two cases, I would find it easier to read if you
included the break here explicitly.
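
That is, something like this (just the quoted hunk with the break spelled
out):

	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/*
		 * Mapping through host bridge should be mapped with
		 * regular IOVAs, thus we do nothing here and continue
		 * below.
		 */
		break;
	case PCI_P2PDMA_MAP_NONE:
		break;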

> +		case PCI_P2PDMA_MAP_NONE:
> +			break;

> +void __pci_p2pdma_update_state(struct pci_p2pdma_map_state *state,
> +		struct device *dev, struct page *page);
> +
> +/**
> + * pci_p2pdma_state - check the P2P transfer state of a page
> + * @state: 	P2P state structure

Checkpatch complains about space before tab here.

> + * pci_p2pdma_bus_addr_map - map a PCI_P2PDMA_MAP_BUS_ADDR P2P transfer
> + * @state: 	P2P state structure

And here.

> @@ -462,34 +462,32 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
>  		enum dma_data_direction dir, unsigned long attrs)
>  {
>  	struct pci_p2pdma_map_state p2pdma_state = {};
> -	enum pci_p2pdma_map_type map;
>  	struct scatterlist *sg;
>  	int i, ret;
>  
>  	for_each_sg(sgl, sg, nents, i) {
> -		if (is_pci_p2pdma_page(sg_page(sg))) {
> -			map = pci_p2pdma_map_segment(&p2pdma_state, dev, sg);
> -			switch (map) {
> -			case PCI_P2PDMA_MAP_BUS_ADDR:
> -				continue;
> -			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> -				/*
> -				 * Any P2P mapping that traverses the PCI
> -				 * host bridge must be mapped with CPU physical
> -				 * address and not PCI bus addresses. This is
> -				 * done with dma_direct_map_page() below.
> -				 */
> -				break;
> -			default:
> -				ret = -EREMOTEIO;
> +		switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(sg))) {
> +		case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> +			/*
> +			 * Any P2P mapping that traverses the PCI host bridge
> +			 * must be mapped with CPU physical address and not PCI
> +			 * bus addresses.
> +			 */

Same fall-through comment.
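
Though in this hunk the PCI_P2PDMA_MAP_THRU_HOST_BRIDGE case does need to
reach the dma_direct_map_page() call below, so making the intent explicit
here would mean the fallthrough annotation rather than a break, e.g.:

	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/*
		 * Any P2P mapping that traverses the PCI host bridge
		 * must be mapped with CPU physical address and not PCI
		 * bus addresses.
		 */
		fallthrough;
	case PCI_P2PDMA_MAP_NONE: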

> +		case PCI_P2PDMA_MAP_NONE:
> +			sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
> +					sg->offset, sg->length, dir, attrs);
> +			if (sg->dma_address == DMA_MAPPING_ERROR) {
> +				ret = -EIO;
>  				goto out_unmap;
>  			}
> -		}
