Add --skip-all-ffs option to mtd-utils nandwrite
Marek Vasut
marek.vasut at gmail.com
Wed Dec 7 20:14:09 PST 2016
On 12/07/2016 05:59 PM, Boris Brezillon wrote:
> On Wed, 7 Dec 2016 17:31:04 +0100
> Marek Vasut <marek.vasut at gmail.com> wrote:
>
>> On 12/07/2016 05:07 PM, Boris Brezillon wrote:
>>> On Wed, 7 Dec 2016 15:09:49 +0100
>>> Marek Vasut <marek.vasut at gmail.com> wrote:
>>>
>>>> On 12/07/2016 09:21 AM, David Oberhollenzer wrote:
>>>>> On 12/07/2016 06:01 AM, Marek Vasut wrote:
>>>>>> +	if (skipallffs)
>>>>>> +	{
>>>>>> +		for (ii = 0; ii < mtd.min_io_size; ii += sizeof(uint32_t))
>>>>>> +		{
>>>>>> +			if (*(uint32_t *)(writebuf + ii) != 0xffffffff)
>>>>>> +				break;
>>>>>>
>>>>>> Is this a memcmp()-like function?
>>>>>>
>>>>>> +		}
>>>>>> +		if (ii == mtd.min_io_size)
>>>>>> +			allffs = 1;
>>>>>> +	}
>>>>>> +	if (!allffs) {
>>>>> The point of the patch is not to write a page that is filled with 0xFF bytes.
>>>>> A feature that is turned off by default and can be activated via a command
>>>>> line switch.
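
(As a usage sketch, assuming the option name from the subject line and the
usual nandwrite argument order; the device and image names are placeholders:

	nandwrite --skip-all-ffs /dev/mtd2 rootfs.img

Any page of rootfs.img consisting entirely of 0xff bytes would then be left
unwritten, since freshly erased NAND already reads back as all 0xff.)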
>>>>
>>>> That much I figured out.
>>>>
>>>>> The code you are trying to replace tries to figure out if an entire block of
>>>>> memory is set to 0xFF. The code path should be conditional, as this feature
>>>>> is turned off unless explicitly requested.
>>>>
>>>> And to do that, we need to open-code it like above ?
>>>> Looks pretty crappy.
>>>
>>> I agree with Marek on one point: we're likely to need the 0xff-pattern
>>> check in other places, so how about implementing a helper (or several
>>> helpers) in libmtd?
>>
>> Helper would be great, yeah.
>>
>>>>>> This could simply be turned into:
>>>>>>
>>>>>> ret = 0;
>>>>>> if (memcmp(writebuf, writebuf + 1, mtd.min_io_size - 1)) {
>>>>>> 	ret = mtd_write(....);
>>>>>> }
>>>
>>> Oh, nice trick! It works with an additional writebuf[0] == 0xff test.
>>> However, I'm concerned about readability (which can be addressed with a
>>> comment explaining what you're doing)
>>
>> Yeah, it's not exactly obvious. You need the writebuf[0] == pattern test
>> and bufsize >= 2 test.
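
(Why the trick works: memcmp(buf, buf + 1, n - 1) compares buf[i] with
buf[i + 1] for every i, so a zero result means equality chains through the
whole buffer, i.e. all n bytes are identical. The writebuf[0] == pattern
test then pins that common value to the pattern, and the bufsize >= 2 test
keeps the n - 1 length meaningful.)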
>>
>>> , and performance (memcmp is likely to be optimized for aligned
>>> accesses, and you defeat that with your writebuf + 1).
>>
>> That's up to the libc optimization, really. The memcpy() implementations
>> in glibc and musl, at least, do a lot of magic to optimize copying
>> regardless of alignment.
>>
>>> Here's a proposal for the pattern comparison API:
>>>
>>> struct pattern_buf {
>>> 	unsigned int size;
>>> 	void *buf;
>>> };
>>>
>>> const struct pattern_buf *mtd_create_pattern_buf(int size, u8 pattern);
>>> void mtd_free_pattern_buf(const struct pattern_buf *pbuf);
>>> bool mtd_check_pattern(const struct pattern_buf *pbuf, void *buf,
>>> 		       int size);
>>>
>>> This way you can allocate a pattern buffer of pagesize and re-use it for
>>> each new page you want to check. This also solves the unaligned
>>> memcmp() problem.
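
(For illustration, a minimal sketch of what these proposed helpers might
look like; nothing below is from a posted patch, u8 is rendered as the
standard uint8_t, and error handling is kept deliberately simple:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct pattern_buf {
	unsigned int size;
	void *buf;
};

/* Allocate a pattern buffer of 'size' bytes pre-filled with 'pattern'. */
const struct pattern_buf *mtd_create_pattern_buf(int size, uint8_t pattern)
{
	struct pattern_buf *pbuf;

	if (size <= 0)
		return NULL;

	pbuf = malloc(sizeof(*pbuf));
	if (!pbuf)
		return NULL;

	pbuf->buf = malloc(size);
	if (!pbuf->buf) {
		free(pbuf);
		return NULL;
	}

	memset(pbuf->buf, pattern, size);
	pbuf->size = size;

	return pbuf;
}

void mtd_free_pattern_buf(const struct pattern_buf *pbuf)
{
	if (!pbuf)
		return;

	free(pbuf->buf);
	free((void *)pbuf);
}

/* Return true if the first 'size' bytes of 'buf' all match the pattern. */
bool mtd_check_pattern(const struct pattern_buf *pbuf, void *buf, int size)
{
	if (size < 0 || (unsigned int)size > pbuf->size)
		return false;

	return !memcmp(pbuf->buf, buf, size);
}
)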
>>
>> Here you're filling memory with stuff, which for a large buffer will
>> trigger I/O to main memory, which is slow. This is unnecessary, since
>> you already have such data available to you (it's your own buffer and
>> it's in the cache), so just use it.
>
> Indeed, I didn't consider caches, and as you said, memcmp is likely to
> be optimized for the unaligned case; even if it's not, I'm not sure
> we really care about performance here.
>
>>
>> And the implementation would be:
>>
>> int mtd_buffer_is_filled_with_byte(const void *buf, size_t size, u8 pattern)
>> {
>> 	const u8 *b = buf;
>> 	int match;
>>
>> 	if (size == 0)
>> 		return 0;
>>
>> 	match = (b[0] == pattern);
>>
>> 	return (size == 1) ? match : match && !memcmp(b, b + 1, size - 1);
>> }
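
(And a hypothetical call site inside nandwrite's page-write loop, reusing
the names from the patch excerpt quoted earlier; the loop context is
assumed:

	/* With --skip-all-ffs, leave pages that contain only 0xff bytes
	 * in their erased state instead of programming them. */
	if (skipallffs &&
	    mtd_buffer_is_filled_with_byte(writebuf, mtd.min_io_size, 0xff))
		continue;
)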
>>
>
> Looks good to me. Feel free to post a patch ;-).
Grumble ... done.
--
Best regards,
Marek Vasut