[PATCH 4/6] arm64/io: Provide a WC friendly __iowriteXX_copy()
Will Deacon
will at kernel.org
Wed Feb 21 11:22:06 PST 2024
On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> +static inline void __const_memcpy_toio_aligned64(volatile u64 __iomem *to,
> + const u64 *from, size_t count)
> +{
> + switch (count) {
> + case 8:
> + asm volatile("str %x0, [%8, #8 * 0]\n"
> + "str %x1, [%8, #8 * 1]\n"
> + "str %x2, [%8, #8 * 2]\n"
> + "str %x3, [%8, #8 * 3]\n"
> + "str %x4, [%8, #8 * 4]\n"
> + "str %x5, [%8, #8 * 5]\n"
> + "str %x6, [%8, #8 * 6]\n"
> + "str %x7, [%8, #8 * 7]\n"
> + :
> + : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> + "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
> + "rZ"(from[6]), "rZ"(from[7]), "r"(to));
> + break;
> + case 4:
> + asm volatile("str %x0, [%4, #8 * 0]\n"
> + "str %x1, [%4, #8 * 1]\n"
> + "str %x2, [%4, #8 * 2]\n"
> + "str %x3, [%4, #8 * 3]\n"
> + :
> + : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> + "rZ"(from[3]), "r"(to));
> + break;
> + case 2:
> + asm volatile("str %x0, [%2, #8 * 0]\n"
> + "str %x1, [%2, #8 * 1]\n"
> + :
> + : "rZ"(from[0]), "rZ"(from[1]), "r"(to));
> + break;
> + case 1:
> + __raw_writel(*from, to);
Shouldn't this be __raw_writeq()? As it stands, __raw_writel() only
stores the low 32 bits of the 64-bit value.
Will