[PATCH v2] arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)
Catalin Marinas
catalin.marinas at arm.com
Thu Mar 1 03:50:07 PST 2018
Hi Robin,
On Wed, Feb 28, 2018 at 07:18:47PM +0000, Robin Murphy wrote:
> On 28/02/18 18:47, Catalin Marinas wrote:
> > +static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
> > +{
> > +	if (!dev->dma_mask)
> > +		return false;
> > +
> > +	/*
> > +	 * Force swiotlb buffer bouncing when ARCH_DMA_MINALIGN < CWG. The
> > +	 * swiotlb bounce buffers are aligned to (1 << IO_TLB_SHIFT).
> > +	 */
>
> The relevance of the second half of that comment isn't entirely obvious - I
> assume you're referring to the fact that the IOTLB slab size happens to
> conveniently match the largest possible CWG?
Yes, that's the idea. I could have added "are /sufficiently/ aligned",
though it doesn't make it much clearer.
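To spell out how that hunk is meant to work (quoting from memory, so the static key name below is illustrative rather than the literal v2 code): at boot we check whether cache_line_size(), i.e. 4 << CWG, exceeds ARCH_DMA_MINALIGN and, if so, enable a static key; dma_capable() then returns false for non-coherent devices, which makes the streaming DMA path bounce through the swiotlb buffers, whose (1 << IO_TLB_SHIFT) slabs are sufficiently aligned:

  static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
  {
  	if (!dev->dma_mask)
  		return false;

  	/*
  	 * Force swiotlb buffer bouncing when ARCH_DMA_MINALIGN < CWG. The
  	 * swiotlb bounce buffers are aligned to (1 << IO_TLB_SHIFT).
  	 * (key name illustrative; enabled at boot when
  	 * cache_line_size() > ARCH_DMA_MINALIGN)
  	 */
  	if (static_branch_unlikely(&swiotlb_noncoherent_bounce) &&
  	    !is_device_dma_coherent(dev))
  		return false;

  	return addr + size - 1 <= *dev->dma_mask;
  }

Returning false here is enough to force the bounce, since the swiotlb map path only hands back the original mapping when dma_capable() says the device can address it.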
> I wonder somewhat if it's worth going even further down the ridiculously
> over-cautious route and adding a BUILD_BUG_ON(IO_TLB_SHIFT < 11), just so
> we'd get a heads-up in future if this could otherwise become silently
> broken...
I wouldn't bother, as we should still be OK with a smaller IO_TLB_SHIFT. Also, if
CWG is zero, we assume ARCH_DMA_MINALIGN in Linux rather than the
architectural maximum of 2K.
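To make that concrete, the CWG field is read from CTR_EL0 and a zero value is never trusted; roughly (quoting asm/cache.h from memory):

  #define CTR_CWG_SHIFT	24
  #define CTR_CWG_MASK	15

  static inline u32 cache_type_cwg(void)
  {
  	/* CWG is log2(words), so the line size is 4 << CWG bytes */
  	return (read_cpuid_cachetype() >> CTR_CWG_SHIFT) & CTR_CWG_MASK;
  }

  static inline int cache_line_size(void)
  {
  	u32 cwg = cache_type_cwg();

  	/* CWG == 0 means "not provided", not "4 bytes" */
  	return cwg ? 4 << cwg : ARCH_DMA_MINALIGN;
  }

so the bounce path only gets enabled when the CPU actually reports a CWG larger than ARCH_DMA_MINALIGN, never merely because the field is missing.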
Thanks for reviewing.
--
Catalin