WARNING: drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c:364 vchiq_prepare_bulk_data

Stefan Wahren wahrenst at gmx.net
Tue Jun 11 04:37:11 PDT 2024


Am 11.06.24 um 13:08 schrieb Arnd Bergmann:
> On Tue, Jun 11, 2024, at 12:47, Stefan Wahren wrote:
>> Am 10.06.24 um 12:25 schrieb Robin Murphy:
>>> On 2024-06-10 10:24 am, Phil Elwell wrote:
>>>> On Mon, 10 Jun 2024 at 10:20, Arnd Bergmann <arnd at arndb.de> wrote:
>>>>>        Arnd
>>>> vchiq sends partial cache lines at the start and end of reads (as seen
>>>> from the ARM host) out of band, so the only misaligned DMA transfers
>>>> should be from ARM to VPU. This should not require a bounce buffer.
>>> Hmm, indeed dma_kmalloc_safe() takes into account that unaligned
>>> DMA_TO_DEVICE does not need bouncing, so it would suggest that
>>> something's off in what vchiq is asking for.
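
For context, that check boils down to roughly the following (a paraphrase
of the mainline dma_kmalloc_safe() helper, not copied verbatim from any
particular tree):

        /*
         * Sketch: a kmalloc() buffer is treated as DMA-safe when bouncing
         * of unaligned kmalloc() buffers is disabled, when the device is
         * cache-coherent, or when the mapping is DMA_TO_DEVICE (only a
         * cache clean is needed, so neighbouring data is not invalidated).
         */
        static inline bool dma_kmalloc_safe(struct device *dev,
                                            enum dma_data_direction dir)
        {
                if (!IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
                        return true;

                if (dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE)
                        return true;

                return false;
        }
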
>> I'm available to debug this further, but I need more guidance here.
>>
>> At least I extended the output for the error case:
>>
>> -               WARN_ON(len == 0);
>> -               WARN_ON(i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK));
>> -               WARN_ON(i && (addr & ~PAGE_MASK));
>> +               if (len == 0)
>> +                       pr_warn_once("%s: sg_dma_len() == 0\n", __func__);
>> +               else if (i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK))
>> +                       pr_warn_once("%s: following block not page aligned\n", __func__);
>> +               else if (i && (addr & ~PAGE_MASK)) {
>> +                       pr_warn_once("%s: block %u, DMA address %pad doesn't align with PAGE_MASK 0x%lx\n", __func__, i, &addr, PAGE_MASK);
>> +                       pr_warn_once("sg_dma_is_swiotlb: %d, dma_flags: %x\n", sg_dma_is_swiotlb(sg), sg->dma_flags);
>> +               }
>>
>> Example result:
>>
>> [   84.180527] create_pagelist: block 1, DMA address 0x00000000f5f74800 doesn't align with PAGE_MASK 0xfffffffffffff000
>> [   84.180553] sg_dma_is_swiotlb: 0, dma_flags: 0
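
For reference, with the PAGE_MASK printed above (4 KiB pages),
0x00000000f5f74800 & ~PAGE_MASK = 0x800, i.e. block 1 starts 2 KiB into
a page rather than on a page boundary.
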
>>
>> Is this helpful?
> It's interesting that this does not have the SG_DMA_SWIOTLB
> flag set, as the theory so far was that an unaligned
> user address is what caused this to bounce.
>
> I think the most helpful bit of information that is
> currently missing is the 'ubuf' and 'count' arguments
> that got passed down from userspace into create_pagelist(),
> to see what alignment they have in the failure case.
Here is my attempt:

        if (len == 0)
                pr_warn_once("%s: sg_dma_len() == 0\n", __func__);
        else if (i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK))
                pr_warn_once("%s: following block not page aligned\n", __func__);
        else if (i && (addr & ~PAGE_MASK)) {
                pr_warn_once("%s: block %u, DMA address %pad doesn't align with PAGE_MASK 0x%lx\n", __func__, i, &addr, PAGE_MASK);
                pr_warn_once("sg_dma_is_swiotlb: %d, dma_flags: %x\n", sg_dma_is_swiotlb(sg), sg->dma_flags);
                pr_warn_once("type = %s\n", (type == PAGELIST_WRITE) ? "PAGELIST_WRITE" : "PAGELIST_READ");
                if (buf)
                        pr_warn_once("buf = %p, count = %zu\n", buf, count);
                else
                        pr_warn_once("ubuf = %p, count = %zu\n", ubuf, count);
        }

Output:

[   66.184030] create_pagelist: block 1, DMA address 0x00000000f5fc7800 doesn't align with PAGE_MASK 0xfffffffffffff000
[   66.184056] sg_dma_is_swiotlb: 0, dma_flags: 0
[   66.184063] type = PAGELIST_READ
[   66.184066] ubuf = 00000000266a70a7, count = 0
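
One caveat with this output: %p hashes pointer values in the kernel log,
so the printed ubuf doesn't reveal its alignment. An untested follow-up
sketch for the same spot, which would print only the offset of ubuf
within its page instead of the (hashed) address:

                pr_warn_once("ubuf page offset = 0x%lx, count = %zu\n",
                             (unsigned long)ubuf & ~PAGE_MASK, count);
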
>
>       Arnd



