using DMA-API on ARM
Arnd Bergmann
arnd at arndb.de
Mon Dec 8 08:38:57 PST 2014
On Monday 08 December 2014 17:22:44 Arend van Spriel wrote:
> >> The log: first the ring allocation info is printed. Starting at
> >> 16.124847, rings 2, 3 and 4 are the rings used for device-to-host
> >> transfers. In this log the failure is on a read of ring 3. Ring 3
> >> has 1024 entries of 16 bytes each. The next thing printed is the
> >> kernel page tables, then some OpenWRT info and the logging of part
> >> of the connection setup. At 1780.130752 the logging of the failure
> >> starts. The sequence number (which wraps modulo 253, while the
> >> ring size is 1024) matches an "old" entry (read 40, expected 52).
> >> Then the different pointers are printed, followed by the kernel
> >> page table. The code then does a cache invalidate on the
> >> dma_handle, and on the next read the sequence number is correct.
> >
> > How do you invalidate the cache? A dma_handle is of type dma_addr_t
> > and we don't define an operation for that, nor does it make sense
> > on an allocation from dma_alloc_coherent(). What happens if you
> > take out the invalidate?
>
> dma_sync_single_for_cpu(, DMA_FROM_DEVICE), which ends up invalidating
> the cache (or at least that is our suspicion).
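As a hypothetical sketch of the pattern described above (not the actual
driver code; the ring layout and names are made up for illustration):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

#define RING_SIZE       (1024 * 16)     /* 1024 entries of 16 bytes each */

struct d2h_ring {
        void *vaddr;            /* CPU pointer from dma_alloc_coherent() */
        dma_addr_t dma_handle;  /* bus address programmed into the device */
};

static int ring_alloc(struct device *dev, struct d2h_ring *ring)
{
        ring->vaddr = dma_alloc_coherent(dev, RING_SIZE, &ring->dma_handle,
                                         GFP_KERNEL);
        return ring->vaddr ? 0 : -ENOMEM;
}

static void ring_refresh(struct device *dev, struct d2h_ring *ring)
{
        /* the step being questioned: syncing a coherent allocation by handle */
        dma_sync_single_for_cpu(dev, ring->dma_handle, RING_SIZE,
                                DMA_FROM_DEVICE);
}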
I'm not sure that it actually invalidates the right cache lines:
static void arm_dma_sync_single_for_cpu(struct device *dev,
                dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
        unsigned int offset = handle & (PAGE_SIZE - 1);
        struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
        __dma_page_dev_to_cpu(page, offset, size, dir);
}
Assuming a noncoherent linear mapping (no IOMMU, no swiotlb, no dmabounce),
dma_to_pfn will return the correct pfn here, but pfn_to_page will return a
page pointer into the kernel linear mapping, which is not the same as the
pointer you get from __alloc_remap_buffer(). The pointer that was returned
from dma_alloc_coherent() is a) non-cacheable, and b) not the same one that
you flush here.
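For contrast, here is a minimal sketch (an assumed example, with made-up
buffer names) of the streaming-DMA pattern that dma_sync_single_for_cpu()
is designed for, where the CPU pointer and the dma_addr_t describe the
same cacheable linear-map alias:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/slab.h>

static void *buf;
static dma_addr_t buf_dma;

static int map_streaming_buffer(struct device *dev, size_t size)
{
        buf = kmalloc(size, GFP_KERNEL);        /* cacheable, linear mapping */
        if (!buf)
                return -ENOMEM;

        buf_dma = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, buf_dma)) {
                kfree(buf);
                return -ENOMEM;
        }
        return 0;
}

static void read_from_device(struct device *dev, size_t size)
{
        /*
         * Here the invalidate hits the right cache lines: buf and buf_dma
         * refer to the same cacheable linear mapping.
         */
        dma_sync_single_for_cpu(dev, buf_dma, size, DMA_FROM_DEVICE);
        /* ... read buf ... */
        dma_sync_single_for_device(dev, buf_dma, size, DMA_FROM_DEVICE);
}

A buffer from dma_alloc_coherent(), by contrast, is accessed through its
non-cacheable remapped pointer and should not need any dma_sync_*() call
at all.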
Arnd