[PATCH 11/20] dma: support marking SRAM for coherent DMA use
Ahmad Fatoum
a.fatoum at pengutronix.de
Mon Jun 7 00:40:23 PDT 2021
Hi,
On 07.06.21 09:34, Sascha Hauer wrote:
> On Mon, May 31, 2021 at 09:38:12AM +0200, Ahmad Fatoum wrote:
>> The RISC-V architecture code allows overriding dma_alloc_coherent and
>> dma_free_coherent. Allow this to be controlled via the device tree.
>>
>> Cache-coherent SoCs won't need this, but incoherent ones that have
>> uncached regions can register them here.
>>
>> Signed-off-by: Ahmad Fatoum <a.fatoum at pengutronix.de>
>> ---
>> +static void *pool_alloc_coherent(size_t size, dma_addr_t *dma_handle)
>> +{
>> +        struct dma_coherent_pool *pool;
>> +        void *ret = NULL;
>> +
>> +        list_for_each_entry(pool, &pools, list) {
>> +                ret = tlsf_memalign(pool->handle, DMA_ALIGNMENT, size);
>> +                if (ret)
>> +                        break;
>> +        }
>> +
>> +        BUG_ON(!ret);
>
> Being out of memory is no bug, no?
This is the backend for dma_alloc_coherent. Other architectures use
xmemalign there and have no handling for the error case either.
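i.e. roughly this pattern (a sketch from memory, not a verbatim copy of
any particular architecture's implementation):

/* Sketch of the existing pattern elsewhere: xmemalign() panics on
 * allocation failure, so the error case never reaches the caller. */
void *dma_alloc_coherent(size_t size, dma_addr_t *dma_handle)
{
        void *ret = xmemalign(PAGE_SIZE, size);

        if (dma_handle)
                *dma_handle = (dma_addr_t)ret;

        memset(ret, 0, size);

        return ret;
}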
>
>> +
>> +        memset(ret, 0, size);
>> +
>> +        if (dma_handle)
>> +                *dma_handle = (dma_addr_t)ret;
>> +
>> +        pr_debug("alloc(%zu) == %p\n", size, ret);
>> +
>> +        return ret;
>> +}
>> +
>> +static void pool_free_coherent(void *vaddr, dma_addr_t dma_handle, size_t size)
>> +{
>> +        resource_size_t addr = (resource_size_t)vaddr;
>> +        struct dma_coherent_pool *pool;
>> +
>> +        list_for_each_entry(pool, &pools, list) {
>> +                if (pool->resource->start <= addr && addr <= pool->resource->end) {
>
> Nice :)
> I would have written if (addr >= start && addr <= end), but the way you
> have written it makes it visually clear at first sight that addr
> should be in that specific range.
Since I posted this series, someone nudged me in a better direction:
set dma_handle to the cached alias address, which fits into 32 bits, and
have dma_alloc_coherent return an address above 4G within the uncached
alias. As the devices aren't cache-coherent anyway, it doesn't matter
which alias they end up with.
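Roughly like this (a sketch only; the offset and the alias_* names are
made up for illustration, the real alias layout is SoC-specific):

/* Sketch: CACHED_TO_UNCACHED_OFFSET and the alias_* names are made up;
 * a real implementation would also need to evict any cache lines
 * covering the buffer before handing it out. */
#define CACHED_TO_UNCACHED_OFFSET       0x1000000000ULL

static void *alias_alloc_coherent(size_t size, dma_addr_t *dma_handle)
{
        void *cached = xmemalign(DMA_ALIGNMENT, size);
        void *uncached = (void *)((uintptr_t)cached + CACHED_TO_UNCACHED_OFFSET);

        /* zero through the uncached alias so the device never sees stale data */
        memset(uncached, 0, size);

        /* the device gets the 32-bit cached alias address... */
        if (dma_handle)
                *dma_handle = (dma_addr_t)cached;

        /* ...while the CPU only ever touches the uncached alias */
        return uncached;
}

static void alias_free_coherent(void *vaddr, dma_addr_t dma_handle, size_t size)
{
        /* undo the alias shift before returning the buffer to the allocator */
        free((void *)((uintptr_t)vaddr - CACHED_TO_UNCACHED_OFFSET));
}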
>
>> +                        tlsf_free(pool->handle, vaddr);
>> +                        return;
>> +                }
>> +        }
>> +
>> +        pr_warn("freeing invalid region: %p\n", vaddr);
>> +}
>> +
>> +static const struct dma_coherent_ops pool_ops = {
>> +        .alloc = pool_alloc_coherent,
>> +        .free = pool_free_coherent,
>> +};
>> +
>> +static int compare_pool_sizes(struct list_head *_a, struct list_head *_b)
>> +{
>> +        struct dma_coherent_pool *a = list_entry(_a, struct dma_coherent_pool, list);
>> +        struct dma_coherent_pool *b = list_entry(_b, struct dma_coherent_pool, list);
>> +
>> +        if (resource_size(a->resource) > resource_size(b->resource))
>> +                return 1;
>> +        if (resource_size(a->resource) < resource_size(b->resource))
>> +                return -1;
>> +        return 0;
>> +}
>> +
>> +static int dma_declare_coherent_pool(const struct resource *res)
>> +{
>> +        struct dma_coherent_pool *pool;
>> +        tlsf_t handle;
>> +
>> +        handle = tlsf_create_with_pool((void *)res->start, resource_size(res));
>> +        if (!handle)
>> +                return -EINVAL;
>> +
>> +        pool = xmalloc(sizeof(*pool));
>
> Better xzalloc()? It's too easy to add some element to a structure and
> assume that it's initialized.
>
>> +        pool->handle = handle;
>> +        pool->resource = res;
>> +
>> +        list_add_sort(&pool->list, &pools, compare_pool_sizes);
>
> The pools are sorted by their size, but is this a good criterion for the
> pools' priority?
The idea was to have some fixed order, so that issues are easier to debug.
With the changes described above, this commit can be replaced.
(The dma_set_ops commit before it will remain.)
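An SoC would then just register its own ops, along these lines (same
caveats as the sketch further up; the dma_set_ops signature is guessed
from the dma_coherent_ops struct in this patch):

/* Illustrative only: reuses the alias_* sketch from further up and
 * guesses that dma_set_ops() takes a struct dma_coherent_ops pointer
 * like pool_ops above. */
static const struct dma_coherent_ops alias_ops = {
        .alloc = alias_alloc_coherent,
        .free = alias_free_coherent,
};

static int alias_dma_init(void)
{
        dma_set_ops(&alias_ops);
        return 0;
}
postcore_initcall(alias_dma_init);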
Cheers,
Ahmad
>
> Sascha
>
--
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |