[PATCH 12/14] pcm043: reimplement lowlevel code in C
Jean-Christophe PLAGNIOL-VILLARD
plagnioj at jcrosoft.com
Tue Apr 27 22:35:39 EDT 2010
> +
> + __asm__ __volatile__("mrc p15, 0, %0, c1, c0, 0":"=r"(r));
> + r |= (1 << 11); /* Flow prediction (Z) */
> + r |= (1 << 22); /* unaligned accesses */
> + r |= (1 << 21); /* Low Int Latency */
> +
> + __asm__ __volatile__("mrc p15, 0, %0, c1, c0, 1":"=r"(s));
> + s |= 0x7; /* enable return stack, dynamic and static branch prediction */
> + __asm__ __volatile__("mcr p15, 0, %0, c1, c0, 1" : : "r"(s));
> +
> + __asm__ __volatile__("mcr p15, 0, %0, c1, c0, 0" : : "r"(r));
Why not use get_cr() and set_cr() where possible, together with the CR_x macros?
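Something along these lines (a rough sketch only; it assumes barebox carries
the kernel-style get_cr()/set_cr() helpers and the CR_Z/CR_FI/CR_U bit
definitions in <asm/system.h>):

	unsigned long cr;

	cr = get_cr();
	cr |= CR_Z;	/* program flow prediction */
	cr |= CR_U;	/* unaligned accesses */
	cr |= CR_FI;	/* low interrupt latency */
	set_cr(cr);

The auxiliary control register write (c1, c0, 1) would still need the inline
mcr, as I don't think we have a helper for that one.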
> +
> + r = 0;
> + __asm__ __volatile__("mcr p15, 0, %0, c15, c2, 4" : : "r"(r));
> +
> + /*
> + * Branch prediction is now enabled. Flush the BTAC to ensure a valid
> + * starting point. Don't flush BTAC while it is disabled to avoid
> + * ARM1136 erratum 408023.
> + */
> + __asm__ __volatile__("mcr p15, 0, %0, c7, c5, 6" : : "r"(r));
> +
> + /* invalidate I cache and D cache */
> + __asm__ __volatile__("mcr p15, 0, %0, c7, c7, 0" : : "r"(r));
> +
> + /* invalidate TLBs */
> + __asm__ __volatile__("mcr p15, 0, %0, c8, c7, 0" : : "r"(r));
> +
> + /* Drain the write buffer */
> + __asm__ __volatile__("mcr p15, 0, %0, c7, c10, 4" : : "r"(r));
Is this really board specific, or is it arch specific?
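If it is generic ARM1136 init, it could live under arch/arm/cpu/ and the
board would just call it. Rough sketch only (file and function names below
are made up, not existing code):

	/* arch/arm/cpu/lowlevel-arm1136.c (hypothetical) */
	void arm1136_invalidate_caches(void)
	{
		unsigned long r = 0;

		/* flush BTAC now that branch prediction is on */
		__asm__ __volatile__("mcr p15, 0, %0, c7, c5, 6" : : "r"(r));
		/* invalidate I and D caches */
		__asm__ __volatile__("mcr p15, 0, %0, c7, c7, 0" : : "r"(r));
		/* invalidate TLBs */
		__asm__ __volatile__("mcr p15, 0, %0, c8, c7, 0" : : "r"(r));
		/* drain the write buffer */
		__asm__ __volatile__("mcr p15, 0, %0, c7, c10, 4" : : "r"(r));
	}

Then pcm043 (and the other ARM1136 based boards) would simply call it from
their lowlevel code.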
Best Regards,
J.