[PATCH v2 4/6] spi: davinci: flush caches when performing DMA

Vignesh R vigneshr at ti.com
Mon Feb 20 21:08:27 PST 2017



On Monday 20 February 2017 09:19 PM, Frode Isaksen wrote:
> 
> 
> On 20/02/2017 12:27, Vignesh R wrote:
>>
>> On Monday 20 February 2017 04:04 PM, Frode Isaksen wrote:
>>>
>>> On 20/02/2017 10:47, Vignesh R wrote:
>>>> On Monday 20 February 2017 02:56 PM, Frode Isaksen wrote:
>>>>> On 20/02/2017 07:55, Vignesh R wrote:
>>>>>> On Friday 17 February 2017 05:37 PM, Russell King - ARM Linux wrote:
>> [...]
>>>>>> I am debugging another issue with UBIFS wherein pages allocated by
>>>>>> vmalloc are in the highmem region and, on an LPAE system, are backed
>>>>>> by physical addresses that do not fit in 32 bits. So, a 32-bit DMA
>>>>>> cannot access these buffers at all.
>>>>>> When spi_map_buf() calls dma_map_sg() to map these pages, the
>>>>>> physical address is simply truncated to 32 bits in pfn_to_dma() (as
>>>>>> part of the dma_map_sg() call). This results in random crashes as the
>>>>>> DMA starts accessing random memory during SPI reads.
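
(For reference, a minimal sketch of how such buffers could be detected
before mapping; buf_needs_bounce() is a made-up name, not an existing
kernel helper:)

        #include <linux/mm.h>           /* is_vmalloc_addr() */
        #include <linux/vmalloc.h>      /* vmalloc_to_page() */
        #include <linux/highmem.h>      /* PageHighMem() */

        /*
         * Hypothetical check: true if the buffer may be unreachable by a
         * 32-bit DMA master. vmalloc'd memory can be backed by highmem
         * pages, which on an LPAE system may sit above the 4GiB boundary.
         */
        static bool buf_needs_bounce(const void *buf)
        {
                if (!is_vmalloc_addr(buf))
                        return false;

                return PageHighMem(vmalloc_to_page(buf));
        }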
>>>>>>
>>>>>> Given the above problem and also the issues surrounding VIVT caches,
>>>>>> I am thinking of maybe using a pre-allocated, fixed-size bounce
>>>>>> buffer to handle buffers that are not in the lowmem mapping.
>>>>>> I have tried using a 64KB pre-allocated buffer on a TI DRA74 EVM with
>>>>>> QSPI running at 76.8MHz and do not see any significant degradation in
>>>>>> UBIFS performance, mainly because UBIFS seems to use vmalloc'd
>>>>>> buffers only during the initial preparation and mounting phase and
>>>>>> not during file reads/writes.
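
(To illustrate, roughly what I have in mind; struct qspi and
qspi_dma_read() are placeholders for the driver's own types and helpers,
not real APIs:)

        #include <linux/kernel.h>       /* min_t() */
        #include <linux/sizes.h>        /* SZ_64K */
        #include <linux/string.h>       /* memcpy() */

        #define QSPI_BOUNCE_SZ  SZ_64K  /* allocated once at probe time */

        /*
         * Illustrative read path: DMA into the pre-allocated lowmem
         * bounce buffer and let the CPU copy out to the vmalloc'd
         * destination, QSPI_BOUNCE_SZ bytes at a time.
         */
        static int qspi_read_bounced(struct qspi *q, void *buf, size_t len)
        {
                size_t chunk;
                int ret;

                while (len) {
                        chunk = min_t(size_t, len, QSPI_BOUNCE_SZ);

                        ret = qspi_dma_read(q, q->bounce_buf, chunk);
                        if (ret)
                                return ret;

                        memcpy(buf, q->bounce_buf, chunk);
                        buf += chunk;
                        len -= chunk;
                }

                return 0;
        }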
>>>>> I am seeing a bug caused by the VIVT cache in the 'read_ltab()'
>>>>> function. In this function, the vmalloc'ed buffer is only 11 bytes.
>>>>> Isn't it better to use kmalloc() in this case?
>>>> read_ltab() isn't the only place where vmalloc() is used. A quick grep
>>>> for vmalloc in fs/ubifs/ shows ~19 occurrences. I guess every
>>>> vmalloc() call can potentially allocate memory from highmem and thus
>>>> cause issues for VIVT and similar aliasing caches.
>>>> Fixing just one such case isn't going to help, IMHO.
>>> Of course, fixing it in only one place is not going to help..
>>> For the moment there are 3 solutions to the UBIFS DMA problem:
>>> 1) Always use a bounce buffer for vmalloc'ed buffers - impacts everyone.
>>> 2) Remove the use of vmalloc'ed buffers in UBIFS - is it possible?
>> Maybe. But what about mtdblock with JFFS2 on top of it? mtdblock still
>> uses vmalloc'd buffers.
> That's why I put the cache flush in the SPI driver in my first attempt..
> FYI, I replaced the use of vmalloc with kmalloc in a few places in the
> UBIFS layer and it seems to work. The sizes of the buffers allocated
> vary from 11 (?) to 65408 bytes.

FYI, here is the discussion on DMA + UBIFS:
http://lists.infradead.org/pipermail/linux-mtd/2016-October/069856.html

> Maybe the best place to handle this is in the SPI flash driver(s). These
> drivers know that they may be handed vmalloc'ed buffers, and they should
> assume that the underlying SPI driver may use DMA.
> 

I guess so; the NAND core already seems to use a bounce buffer for
vmalloc'd addresses.
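
IIRC the decision there is based on the virtual address; something along
these lines could live in the SPI flash layer too (struct spi_flash,
spi_flash_read_dma() and spi_flash_read_bounced() are invented names,
just to show the split):

        #include <linux/mm.h>   /* virt_addr_valid() */

        /*
         * virt_addr_valid() is false for vmalloc'd (and highmem)
         * addresses, so such buffers are routed through a driver-owned
         * DMA-safe bounce buffer instead of being mapped directly.
         */
        static int spi_flash_read(struct spi_flash *f, void *buf,
                                  size_t len)
        {
                if (virt_addr_valid(buf))
                        return spi_flash_read_dma(f, buf, len);

                return spi_flash_read_bounced(f, buf, len);
        }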


-- 
Regards
Vignesh
