[PATCH] arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()
Robin Murphy
robin.murphy at arm.com
Wed Sep 7 02:27:45 PDT 2022
On 2022-09-07 10:03, Christoph Hellwig wrote:
> On Tue, Aug 23, 2022 at 01:21:11PM +0100, Will Deacon wrote:
>> arch_dma_prep_coherent() is called when preparing a non-cacheable region
>> for a consistent DMA buffer allocation. Since the buffer pages may
>> previously have been written via a cacheable mapping and consequently
>> allocated as dirty cachelines, the purpose of this function is to remove
>> these dirty lines from the cache, writing them back so that the
>> non-coherent device is able to see them.
>
> Yes.
>
>> I'm slightly wary about this change as other architectures seem to do
>> clean+invalidate here, but I'd like to hear what others think in any
>> case.
>
> If arm64 is fine with having clean but present cachelines when creating
> an uncached mapping for the same memory, the invalidate is not required.
>
> But isn't it better for the cache if these by definition useless
> cachelines get evicted?
>
> My biggest concern here is that we're now moving from consolidating
> these semantics in all the different architectures to different ones,
> making a centralization of the policies even harder.
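For reference (paraphrasing from memory, so the exact helper names may be
slightly off), the change being discussed essentially turns arm64's
arch_dma_prep_coherent() from a clean+invalidate into a clean-only
operation, i.e. something along the lines of:

	void arch_dma_prep_coherent(struct page *page, size_t size)
	{
		unsigned long start = (unsigned long)page_address(page);

		/*
		 * Previously a clean+invalidate (dcache_clean_inval_poc());
		 * with the patch, dirty lines are written back to the PoC
		 * but left valid in the cache.
		 */
		dcache_clean_poc(start, start + size);
	}
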
FWIW I agree with Ard in not being entirely comfortable with this change.
The impression I had (which may be wrong) was that the architecture never
actually ruled out unexpected cache hits in the case of mismatched
attributes; it just quietly stopped mentioning them at all. And even if
the architecture did rule them out, how confident are we about errata
that might still allow them to happen?
It seems like we don't stand to gain much by removing the invalidation -
since the overhead will still be in the clean - other than a slightly
increased chance of rare and hard-to-debug memory corruption :/
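
To spell out what we're weighing up: at the instruction level the clean
alone is roughly a DC CVAC per line and the clean+invalidate a DC CIVAC,
so the writeback cost is the same either way - something like this per
cache line (sketch only; the real routines loop over the region at the
cache line stride and add the appropriate barriers):

	/* Write a line back to the Point of Coherency, leave it valid */
	static inline void clean_line(unsigned long addr)
	{
		asm volatile("dc cvac, %0" : : "r" (addr) : "memory");
	}

	/* Write a line back to the Point of Coherency and invalidate it */
	static inline void clean_inval_line(unsigned long addr)
	{
		asm volatile("dc civac, %0" : : "r" (addr) : "memory");
	}

The only difference is whether the line is still sitting there to be hit
afterwards, which is exactly the window I'm worried about above.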
Cheers,
Robin.
(who's spent the last few months elbow-deep in a hideous CPU cache
erratum...)