WARNING: drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c:364 vchiq_prepare_bulk_data
Robin Murphy
robin.murphy at arm.com
Tue Jun 11 06:35:16 PDT 2024
On 11/06/2024 12:37 pm, Stefan Wahren wrote:
> Am 11.06.24 um 13:08 schrieb Arnd Bergmann:
>> On Tue, Jun 11, 2024, at 12:47, Stefan Wahren wrote:
>>> Am 10.06.24 um 12:25 schrieb Robin Murphy:
>>>> On 2024-06-10 10:24 am, Phil Elwell wrote:
>>>>> On Mon, 10 Jun 2024 at 10:20, Arnd Bergmann <arnd at arndb.de> wrote:
>>>>>> Arnd
>>>>> vchiq sends partial cache lines at the start and end of reads (as seen
>>>>> from the ARM host) out of band, so the only misaligned DMA transfers
>>>>> should be from ARM to VPU. This should not require a bounce buffer.
>>>> Hmm, indeed the dma_kmalloc_safe() takes into account that unaligned
>>>> DMA_TO_DEVICE does not need bouncing, so it would suggest that
>>>> something's off in what vchiq is asking for.
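>>>> Roughly, that check looks like this (paraphrased from
>>>> include/linux/dma-map-ops.h in kernels around v6.5, not verbatim):
>>>>
>>>> static inline bool dma_kmalloc_safe(struct device *dev,
>>>> 				    enum dma_data_direction dir)
>>>> {
>>>> 	/* Without DMA_BOUNCE_UNALIGNED_KMALLOC, kmalloc() is always DMA-safe */
>>>> 	if (!IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
>>>> 		return true;
>>>>
>>>> 	/*
>>>> 	 * Coherent devices and DMA_TO_DEVICE transfers are safe at any
>>>> 	 * alignment: cleaning a cache line to make it visible to the
>>>> 	 * device is non-destructive.
>>>> 	 */
>>>> 	return dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE;
>>>> }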
>>> I'm available to debug this further, but I need more guidance here.
>>>
>>> For a start, I extended the output for the error case:
>>>
>>> -	WARN_ON(len == 0);
>>> -	WARN_ON(i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK));
>>> -	WARN_ON(i && (addr & ~PAGE_MASK));
>>> +	if (len == 0)
>>> +		pr_warn_once("%s: sg_dma_len() == 0\n", __func__);
>>> +	else if (i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK))
>>> +		pr_warn_once("%s: following block not page aligned\n",
>>> +			     __func__);
>>> +	else if (i && (addr & ~PAGE_MASK)) {
>>> +		pr_warn_once("%s: block %u, DMA address %pad doesn't align with PAGE_MASK 0x%lx\n",
>>> +			     __func__, i, &addr, PAGE_MASK);
>>> +		pr_warn_once("sg_dma_is_swiotlb: %d, dma_flags: %x\n",
>>> +			     sg_dma_is_swiotlb(sg), sg->dma_flags);
>>> +	}
>>>
>>> Example result:
>>>
>>> [ 84.180527] create_pagelist: block 1, DMA address 0x00000000f5f74800 doesn't align with PAGE_MASK 0xfffffffffffff000
>>> [ 84.180553] sg_dma_is_swiotlb: 0, dma_flags: 0
>>>
>>> Is this helpful?
>> It's interesting that this does not have the SG_DMA_SWIOTLB
>> flag set, as the theory so far was that an unaligned
>> user address is what caused this to bounce.
dma-direct doesn't use that flag (at least for now; potentially it might
offer a tiny micro-optimisation, but not enough to really matter). At
this point, len and pagelistinfo->dma_dir (as passed to dma_map_sg())
are what would have defined the bounce condition...
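For reference, the decision point on the dma-direct path looks roughly
like this (paraphrased from kernel/dma/direct.h in kernels around v6.5;
details vary between versions, so treat this as a sketch rather than
verbatim code):

static inline dma_addr_t dma_direct_map_page(struct device *dev,
		struct page *page, unsigned long offset, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = page_to_phys(page) + offset;
	dma_addr_t dma_addr = phys_to_dma(dev, phys);

	/*
	 * Bounce through SWIOTLB if the device can't address the buffer,
	 * or if a small/unaligned kmalloc() buffer is being mapped in a
	 * destructive direction (!dma_kmalloc_safe() and the length not
	 * cache-line aligned) - hence len and dma_dir being the
	 * interesting inputs here.
	 */
	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
	    dma_kmalloc_needs_bounce(dev, size, dir))
		return swiotlb_map(dev, phys, size, dir, attrs);

	/* ... cache maintenance for the non-bounced path elided ... */
	return dma_addr;
}

Note that nothing on this path sets a per-SG flag, which is consistent
with sg_dma_is_swiotlb() reading false above even when a segment was
bounced.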
>> I think the most helpful bit of information that is
>> currently missing is the 'ubuf' and 'count' arguments
>> that got passed down from userspace into create_pagelist(),
>> to see what alignment they have in the failure case.
> Here is my attempt:
>
> if (len == 0)
> 	pr_warn_once("%s: sg_dma_len() == 0\n", __func__);
> else if (i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK))
> 	pr_warn_once("%s: following block not page aligned\n",
> 		     __func__);
> else if (i && (addr & ~PAGE_MASK)) {
> 	pr_warn_once("%s: block %u, DMA address %pad doesn't align with PAGE_MASK 0x%lx\n",
> 		     __func__, i, &addr, PAGE_MASK);
> 	pr_warn_once("sg_dma_is_swiotlb: %d, dma_flags: %x\n",
> 		     sg_dma_is_swiotlb(sg), sg->dma_flags);
> 	pr_warn_once("type = %s\n", (type == PAGELIST_WRITE) ?
> 		     "PAGELIST_WRITE" : "PAGELIST_READ");
> 	if (buf)
> 		pr_warn_once("buf = %p, count = %zu\n", buf, count);
> 	else
> 		pr_warn_once("ubuf = %p, count = %zu\n", ubuf, count);
> }
>
> Output:
>
> [ 66.184030] create_pagelist: block 1, DMA address 0x00000000f5fc7800 doesn't align with PAGE_MASK 0xfffffffffffff000
> [ 66.184056] sg_dma_is_swiotlb: 0, dma_flags: 0
> [ 66.184063] type = PAGELIST_READ
...so from the dma_dir assignment, this would imply DMA_FROM_DEVICE
(i.e. vchiq writing to RAM, the CPU reading from RAM), which with the
unaligned address below would indeed cause a bounce. However, that
appears to contradict what Phil said is supposed to happen :/
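
For reference, the assignment in question looks roughly like this
(paraphrased from create_pagelist() in vchiq_arm.c; check your exact
tree for the precise form):

	/* PAGELIST_READ means the VPU writes into host memory */
	pagelistinfo->dma_dir = (type == PAGELIST_WRITE) ?
				 DMA_TO_DEVICE : DMA_FROM_DEVICE;

so type == PAGELIST_READ maps to DMA_FROM_DEVICE, which is precisely
the destructive direction that dma_kmalloc_safe() refuses to exempt.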
Thanks,
Robin
> [ 66.184066] ubuf = 00000000266a70a7, count = 0
>>
>> Arnd
>