Introduce a new helper framework for buffer synchronization
Maarten Lankhorst
maarten.lankhorst at canonical.com
Tue May 28 03:23:08 EDT 2013
Hey,
On 28-05-13 04:49, Inki Dae wrote:
>
>> -----Original Message-----
>> From: Maarten Lankhorst [mailto:maarten.lankhorst at canonical.com]
>> Sent: Tuesday, May 28, 2013 12:23 AM
>> To: Inki Dae
>> Cc: 'Daniel Vetter'; 'Rob Clark'; 'linux-fbdev'; 'YoungJun Cho'; 'Kyungmin Park'; 'myungjoo.ham'; 'DRI mailing list'; linux-arm-kernel at lists.infradead.org; linux-media at vger.kernel.org
>> Subject: Re: Introduce a new helper framework for buffer synchronization
>>
>> Hey,
>>
>> On 27-05-13 12:38, Inki Dae wrote:
>>> Hi all,
>>>
>>> I have removed the previous branch and added a new one with more cleanup.
>>> This time, the fence helper no longer includes the user-side interfaces
>>> or the cache-operation code. Not only are we not yet sure that coupling
>>> those two things in the kernel (synchronizing caches and synchronizing
>>> buffer access between CPU and CPU, CPU and DMA, and DMA and DMA with
>>> fences) is a good idea, but the existing user-side code also has
>>> problems with badly behaved or crashing userspace. So this could be
>>> discussed more later.
>>>
>>> Below is the new branch,
>>>
>>> https://git.kernel.org/cgit/linux/kernel/git/daeinki/drm-exynos.git/?h=dma-fence-helper
>>>
>>> And the fence helper code,
>>>
>>> https://git.kernel.org/cgit/linux/kernel/git/daeinki/drm-exynos.git/commit/?h=dma-fence-helper&id=adcbc0fe7e285ce866e5816e5e21443dcce01005
>>>
>>> And example code for a device driver,
>>>
>>> https://git.kernel.org/cgit/linux/kernel/git/daeinki/drm-exynos.git/commit/?h=dma-fence-helper&id=d2ce7af23835789602a99d0ccef1f53cdd5caaae
>>>
>>> I think the time is not yet ripe for an RFC posting: maybe the existing
>>> dma-fence and reservation code needs more review and additional work. So
>>> I'd be glad for somebody to give other opinions and advice in advance,
>>> before the RFC posting.
>>>
>> NAK.
>>
>> For examples of how to handle locking properly, see
>> Documentation/ww-mutex-design.txt in my recent tree.
>> I could list what I believe is wrong with your implementation, but the
>> real problem is that the approach you're taking is wrong.
> I just removed the ticket stubs to show you my approach as simply as
> possible, and I wanted to show that we could use a buffer synchronization
> mechanism without ticket stubs.
The tickets have been removed in favor of a ww_acquire_ctx. Moving it into
the core as a base primitive allows more locking abuse to be detected, and
makes some other things easier too.
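
The basic acquire/backoff pattern from ww-mutex-design.txt looks roughly like
this (an untested sketch: "struct obj" and its embedded list exist purely for
illustration, and every obj->lock is assumed initialized with
ww_mutex_init(&obj->lock, &obj_ww_class)):

#include <linux/list.h>
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(obj_ww_class);

struct obj {
	struct ww_mutex lock;
	struct list_head entry;		/* link in the list of objs to lock */
};

static int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj *obj, *contended = NULL;
	int err;

	ww_acquire_init(ctx, &obj_ww_class);
retry:
	list_for_each_entry(obj, list, entry) {
		if (obj == contended) {
			/* already locked via the slow path below */
			contended = NULL;
			continue;
		}

		err = ww_mutex_lock(&obj->lock, ctx);
		if (err < 0) {
			/* drop every lock taken so far, in reverse order */
			list_for_each_entry_continue_reverse(obj, list, entry)
				ww_mutex_unlock(&obj->lock);
			if (contended)
				ww_mutex_unlock(&contended->lock);

			if (err != -EDEADLK) {
				ww_acquire_fini(ctx);
				return err;
			}

			/*
			 * -EDEADLK: we must back off. Sleep on the
			 * contended lock while holding nothing else,
			 * then retry the whole list.
			 */
			contended = obj;
			ww_mutex_lock_slow(&contended->lock, ctx);
			goto retry;
		}
	}

	ww_acquire_done(ctx);
	return 0;
}

Since the ctx keeps its stamp across retries, one of the contending tasks is
always the oldest and makes forward progress, so the retry loop can't
livelock.
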
> Question: could WW-mutexes be used for all devices? I guess this depends
> on the x86 GPU case: a GPU has VRAM, which means a different memory
> domain. And could you tell me why a shared fence should have only eight
> objects? I think we could need more than eight objects for read access.
> Anyway, I don't fully understand this yet, so I might be missing
> something.
Yes, ww mutexes are not limited in any way to x86; they're a locking
mechanism. When you have acquired the ww mutexes for all buffer objects, all
it says is that at that point in time you have exclusively acquired the locks
of all bo's. After locking everything you can safely read the fence pointers,
queue waits, and set a new fence pointer on all reservation_objects. You only
need a single fence on all those objects, so 8 is plenty. Nonetheless this
was a limitation of my earlier design, and I'll dynamically allocate
fence_shared in the future.
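
In code the idea is roughly this (a rough sketch only: the
reservation_object layout approximates the proposed series, hw_queue_wait()
is a hypothetical driver hook, and fence reference counting is elided):

#include <linux/ww_mutex.h>

struct fence;

/* simplified; field names are an approximation of the proposed series */
struct reservation_object {
	struct ww_mutex lock;
	struct fence *fence_excl;	/* last exclusive (write) fence */
	struct fence *fence_shared[8];	/* shared (read) fences */
	unsigned int fence_shared_count;
};

/* hypothetical: make the hardware wait for a fence before executing */
void hw_queue_wait(struct fence *fence);

/*
 * Call with every objs[i]->lock held under the same ww_acquire_ctx,
 * e.g. taken with a backoff loop like lock_objs() above.
 */
static void publish_fence(struct reservation_object **objs, unsigned int n,
			  struct fence *fence, struct ww_acquire_ctx *ctx)
{
	unsigned int i, j;

	for (i = 0; i < n; i++) {
		/*
		 * Holding all the locks makes reading the old fence
		 * pointers safe: queue waits for anything still pending.
		 */
		if (objs[i]->fence_excl)
			hw_queue_wait(objs[i]->fence_excl);
		for (j = 0; j < objs[i]->fence_shared_count; j++)
			hw_queue_wait(objs[i]->fence_shared[j]);

		/* a single fence covers the whole submission */
		objs[i]->fence_excl = fence;
		objs[i]->fence_shared_count = 0;
	}

	for (i = 0; i < n; i++)
		ww_mutex_unlock(&objs[i]->lock);
	ww_acquire_fini(ctx);
}

Anyone else touching the same buffers has to go through the same locks, so
they observe either all the old fences or the new one, never a mix.
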
~Maarten