[PATCH 1/3] riscv: optimized memcpy

David Laight David.Laight at ACULAB.COM
Fri Jun 18 01:32:14 PDT 2021

From: Matteo Croce
> Sent: 18 June 2021 02:05
> > > It's running at 1 GHz.
> > >
> > > I get 257 Mb/s with a memcpy, a bit more with a memset,
> > > but I get 1200 Mb/s with a loop which just reads memory with 64 bit addressing.
> > >
> >
> > Err, I forgot a mlock() before accessing the memory in userspace.

What is the mlock() for?
The data for a quick loop won't get paged out.
You want to test cache to cache copies, so the first loop
will always be slow.
After that each iteration should be much the same.
I use code like:
	for (;;) {
		start = read_tsc();
		/* code under test here */
		histogram[(read_tsc() - start) >> n]++;
	}
(You need to exclude outliers.)
That gets you a distribution of the execution times.
Tends to be pretty stable - even though different program
runs can give different values!
> > The real speed here is:
> >
> > 8 bit read: 155.42 Mb/s
> > 64 bit read: 277.29 Mb/s
> > 8 bit write: 138.57 Mb/s
> > 64 bit write: 239.21 Mb/s
> >
> Anyway, thanks for the info on nios2 timings.
> If you think that an unrolled loop would help, we can achieve the same in C.
> I think we could code something similar to a Duff device (or with jump
> labels) to unroll the loop but at the same time doing efficient small copies.

Unrolling has to be done with care.
It tends to improve benchmarks, but the extra code displaces
other code from the i-cache and slows down overall performance.
So you need 'just enough' unrolling to avoid cpu stalls.

On your system it looks like the memory/cache subsystem
is the bottleneck for the tests you are doing.
I'd really expect a 1GHz cpu to be able to read/write from
its data cache every clock.
So I'd expect transfer rates nearer 8000 MB/s, not 250 MB/s.



More information about the linux-riscv mailing list