[PATCH 2/5] kvx: Implement dcache invalidation primitive
Jules Maselbas
jmaselbas at kalray.eu
Tue Mar 2 11:44:48 GMT 2021
On Tue, Mar 02, 2021 at 09:40:50AM +0100, Ahmad Fatoum wrote:
> Hello,
>
> > +
> > +void kvx_dcache_invalidate_mem_area(uint64_t addr, int size)
> > +{
> > +	/* If invalidating line by line with a hwloop costs less than
> > +	 * K1_DCACHE_REFILL_PERCENT percent of a full cache refill, use
> > +	 * the hwloop, otherwise invalidate the whole cache.
> > +	 */
> > +	if (size <
> > +	    (K1_DCACHE_REFILL_PERCENT * (K1_DCACHE_REFILL * K1_DCACHE_SIZE))
> > +	    / (100 * (K1_DCACHE_REFILL + K1_DCACHE_HWLOOP))) {
> > +		/* byte span from the aligned start of the area; converted
> > +		 * to a number of cache lines (rounded up) just below
> > +		 */
> > +		int invalid_lines = ((addr + size) -
> > +				(addr & (~(K1_DCACHE_LINE_SIZE - 1))));
> > +
> > +		invalid_lines = invalid_lines / K1_DCACHE_LINE_SIZE
> > +			+ (0 != (invalid_lines % K1_DCACHE_LINE_SIZE));
> > +		if (__builtin_constant_p(invalid_lines) && invalid_lines <= 2) {
>
> Note that currently this will always be false, because of the lack of
> link-time optimization. You could split the check away into the header
> and leave the juicy parts here if you want to keep this optimization.
>
Yes, we can drop one branch; I am tempted to always invalidate the whole
cache and be done with it. I will send a new patch anyway.
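
If we do keep the optimization, the split you suggest could look roughly
like the sketch below: do the __builtin_constant_p() check in a static
inline in the header, where the compiler actually sees the caller's size,
and keep the hwloop/full-invalidate logic out of line. This is only a
sketch; the names and the "two cache lines" cutoff are illustrative, not
what I will necessarily send in v2:

/* in the header (sketch) */
void __kvx_dcache_invalidate_mem_area(uint64_t addr, int size);

static inline void kvx_dcache_invalidate_mem_area(uint64_t addr, int size)
{
	if (__builtin_constant_p(size) && size <= 2 * K1_DCACHE_LINE_SIZE) {
		/* small, compile-time-bounded area: gcc can unroll this */
		uint64_t end = addr + size;

		for (addr &= ~((uint64_t)K1_DCACHE_LINE_SIZE - 1);
		     addr < end; addr += K1_DCACHE_LINE_SIZE)
			__builtin_kvx_dinvall((void *)addr);
	} else {
		__kvx_dcache_invalidate_mem_area(addr, size);
	}
}
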
> > +			/* When inlining (and doing constant folding),
> > +			 * gcc is able to unroll small loops.
> > +			 */
> > +			int i;
> > +
> > +			for (i = 0; i < invalid_lines; i++) {
> > +				__builtin_kvx_dinvall((void *)(addr
> > +						+ i * K1_DCACHE_LINE_SIZE));
> > +			}
> > +		} else if (invalid_lines > 0) {
> > +			__asm__ __volatile__ (
> > +				"loopdo %1, 0f\n;;\n"
> > +				"dinvall 0[%0]\n"
> > +				"addd %0 = %0, %2\n;;\n"
> > +				"0:\n"
> > +				: "+r" (addr)
> > +				: "r" (invalid_lines),
> > +				  "i" (K1_DCACHE_LINE_SIZE)
> > +				: "ls", "le", "lc", "memory");
> > +		}
> > +	} else {
> > +		__builtin_kvx_dinval();
> > +	}
> > +}
> >
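
To make the heuristic above concrete with purely illustrative numbers:
assuming, say, K1_DCACHE_SIZE = 128 KiB, K1_DCACHE_REFILL_PERCENT = 80,
K1_DCACHE_REFILL = 8 and K1_DCACHE_HWLOOP = 1 (cycles per line), the
cutoff would be:

	80 * (8 * 131072) / (100 * (8 + 1)) = 83886080 / 900 ~= 93207 bytes

i.e. areas up to roughly 91 KiB would be invalidated line by line with
the hwloop, and anything larger falls through to the full dinval. The
real kvx constants will give a different cutoff; this is only meant to
show the shape of the trade-off.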
>
> --
> Pengutronix e.K.                           |                             |
> Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
> Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
>