[PATCH v4 1/5] lib/bitmap: add bitmap_{set,get}_value()
Yury Norov
yury.norov at gmail.com
Fri Aug 4 12:55:01 PDT 2023
On Fri, Aug 04, 2023 at 06:07:00PM +0200, Alexander Potapenko wrote:
> > space >= nbits <=>
> > BITS_PER_LONG - offset >= nbits <=>
> > offset + nbits <= BITS_PER_LONG
> >
> > >         map[index] &= (fit ? (~(GENMASK(nbits - 1, 0) << offset)) :
> >
> > So here GENMASK(nbits + offset - 1, offset) is at max:
> > GENMASK(BITS_PER_LONG - 1, offset). And it never overflows, which is my
> > point. Does it make sense?
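(To put concrete numbers on it: on a 64-bit target with, say, offset = 40,
the fit path only runs for nbits <= 24, so the widest mask that expression
can produce is GENMASK(63, 40), i.e. still confined to one word.)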
>
> It indeed does. Perhaps pulling offset inside GENMASK is not a bug
> after all (a simple test does not show any difference between their
> behavior).
> But `GENMASK(nbits - 1 + offset, offset)` blows up the code (see below).
> My guess is that this happens because the compiler fails to reuse the
> value of `GENMASK(nbits - 1, 0)` used to clamp the value to write, and
> calculates `GENMASK(nbits - 1 + offset, offset)` from scratch.
OK. Can you put a comment explaining this? Or maybe it would be even
better to use BITMAP_LAST_WORD_MASK() here (full sketch below):

        mask = BITMAP_LAST_WORD_MASK(nbits);
        value &= mask;
        ...
        map[index] &= (fit ? ~(mask << offset) :
                             ~BITMAP_FIRST_WORD_MASK(start));
> > >                        ~BITMAP_FIRST_WORD_MASK(start));
> >
> > As I said, ~BITMAP_FIRST_WORD_MASK() is the same as BITMAP_LAST_WORD_MASK()
> > and vice versa.
>
> Surprisingly, ~BITMAP_FIRST_WORD_MASK() generates better code than
> BITMAP_LAST_WORD_MASK().
Wow... If that's consistent across different compilers/arches, we'd
just drop the latter. Thanks for pointing that out. I'll check.
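To make the above concrete, something like this is what I have in mind
(untested sketch; bitmap_write_masked() is just a throwaway name, and it
leans on the same helpers the patch already uses):

void bitmap_write_masked(unsigned long *map, unsigned long value,
                         unsigned long start, unsigned long nbits)
{
        unsigned long offset, space, mask;
        size_t index;
        bool fit;

        if (unlikely(!nbits))
                return;

        /* For 1 <= nbits <= BITS_PER_LONG this equals GENMASK(nbits - 1, 0) */
        mask = BITMAP_LAST_WORD_MASK(nbits);
        value &= mask;

        offset = start % BITS_PER_LONG;
        space = BITS_PER_LONG - offset;
        index = BIT_WORD(start);
        fit = space >= nbits;

        /* Reuse the same mask for clamping the value and clearing the bits */
        map[index] &= (fit ? ~(mask << offset) : ~BITMAP_FIRST_WORD_MASK(start));
        map[index] |= value << offset;
        if (fit)
                return;

        map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
        map[index + 1] |= value >> space;
}

No functional change intended vs. the GENMASK() form; the hope is just
that the compiler keeps 'mask' in a register for both uses.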
> > >         map[index] |= value << offset;
> > >         if (fit)
> > >                 return;
> > >
> > >         map[index + 1] &= ~BITMAP_LAST_WORD_MASK(start + nbits);
>
> OTOH I managed to shave three more bytes off by replacing
> ~BITMAP_LAST_WORD_MASK with a BITMAP_FIRST_WORD_MASK here.
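(Right, and that swap looks safe here: for any n that isn't a multiple of
BITS_PER_LONG, BITMAP_FIRST_WORD_MASK(n) == ~BITMAP_LAST_WORD_MASK(n), and
in this branch start + nbits can't be word-aligned, because offset + nbits >
BITS_PER_LONG while nbits <= BITS_PER_LONG for a single-word value.)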
>
> > >         map[index + 1] |= (value >> space);
> > > }
>
> I'll post the implementations together with the disassembly below.
> I used a Clang 17.0.0 build that is a couple of months behind
> upstream, but it still produces consistently shorter code (~48 bytes
> less) than trunk GCC on Godbolt.
>
> 1. Original implementation of bitmap_write() from this patch - 164
> bytes (interestingly, it's 157 bytes with Clang 14.0.6)
I've spotted that too in some other cases. Newer compilers tend to
generate bigger code, but the result usually runs faster. One
particular reason in my case was loop unrolling.
[...]
> 3. My improved version built on top of yours and mentioned above under
> the name bitmap_write_new() - 116 bytes:
30% better in size - that's impressive!
> ==================================================================
> void bitmap_write_new(unsigned long *map, unsigned long value,
>                       unsigned long start, unsigned long nbits)
> {
>         unsigned long offset;
>         unsigned long space;
>         size_t index;
>         bool fit;
>
>         if (unlikely(!nbits))
>                 return;
>
>         value &= GENMASK(nbits - 1, 0);
>         offset = start % BITS_PER_LONG;
>         space = BITS_PER_LONG - offset;
>         index = BIT_WORD(start);
>         fit = space >= nbits;
>
>         map[index] &= (fit ? (~(GENMASK(nbits - 1, 0) << offset)) :
>                        ~BITMAP_FIRST_WORD_MASK(start));
>         map[index] |= value << offset;
>         if (fit)
>                 return;
>
>         map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
>         map[index + 1] |= (value >> space);
> }
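FWIW, a quick userspace smoke test of the version above; the macro
definitions are simplified stand-ins for the kernel ones, unlikely() is a
no-op here, the whole thing assumes a 64-bit unsigned long, and the
0xABC/58/12 values are just an arbitrary cross-word case:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define BITS_PER_LONG   (8 * (int)sizeof(unsigned long))
#define BIT_WORD(nr)    ((nr) / BITS_PER_LONG)
#define GENMASK(h, l) \
        ((~0UL >> (BITS_PER_LONG - 1 - (h))) & (~0UL << (l)))
#define BITMAP_FIRST_WORD_MASK(start)   (~0UL << ((start) % BITS_PER_LONG))
#define unlikely(x)     (x)

/* bitmap_write_new() copied verbatim from above */
void bitmap_write_new(unsigned long *map, unsigned long value,
                      unsigned long start, unsigned long nbits)
{
        unsigned long offset;
        unsigned long space;
        size_t index;
        bool fit;

        if (unlikely(!nbits))
                return;

        value &= GENMASK(nbits - 1, 0);
        offset = start % BITS_PER_LONG;
        space = BITS_PER_LONG - offset;
        index = BIT_WORD(start);
        fit = space >= nbits;

        map[index] &= (fit ? (~(GENMASK(nbits - 1, 0) << offset)) :
                       ~BITMAP_FIRST_WORD_MASK(start));
        map[index] |= value << offset;
        if (fit)
                return;

        map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
        map[index + 1] |= (value >> space);
}

int main(void)
{
        unsigned long map[2] = { ~0UL, ~0UL };
        unsigned long got;

        _Static_assert(sizeof(unsigned long) == 8, "example assumes 64-bit long");

        /* Write 12 bits at bit 58: the low 6 bits land in map[0], the rest in map[1] */
        bitmap_write_new(map, 0xABC, 58, 12);

        /* Read the field back by hand and check it survived the word split */
        got = ((map[0] >> 58) | (map[1] << 6)) & GENMASK(11, 0);
        assert(got == 0xABC);
        printf("read back 0x%lx\n", got);

        return 0;
}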
Thanks,
Yury