Numonyx NOR and chip->mutex bug?

Michael Cashwell mboards at prograde.net
Sun Feb 6 16:13:40 EST 2011


On Feb 6, 2011, at 12:29 PM, Joakim Tjernlund wrote:

> Michael Cashwell <mboards at prograde.net> wrote on 2011/02/06 16:49:53:
> 
>> That's clearly what's happening in Stefan's trace, where thread 465 writes 0xe8 and the next write is 0x70 by thread 209. Such a sequence is absolutely illegal (for the flash), and the latter thread is the problem. If we could get a stack trace for that map_write of 0x70, I think we'd find the thread awoke and touched the chip without verifying the state first. The question is why.
> 
> Without my patch it is clear that you do end up with this problem. The first time one enters the for(;;) loop, the driver reads out status from the chip before checking chip->state. This of course assumes that dropping the lock earlier may cause a schedule. So far Stefan's tests indicate this to be true.

Yes, it was your patch and his log that led me down that path!
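
To make that concrete, here's a rough sketch of the interleaving as I read the trace. This is simplified and hedged -- not the actual cfi_cmdset_0001.c code, just the shape of the problem, using the driver's usual names:

	/* Thread 465, holding chip->mutex, starts a buffered write: */
	map_write(map, CMD(0xe8), cmd_adr);  /* Write-to-Buffer setup */
	mutex_unlock(&chip->mutex);          /* dropped while waiting */

	/* Thread 209 wakes, retakes chip->mutex, and on its first pass
	 * through the for(;;) loop touches the chip before checking
	 * chip->state.  Its 0x70 (Read Status Register) lands between
	 * the other thread's 0xe8 and the word count -- illegal: */
	map_write(map, CMD(0x70), cmd_adr);

	/* Only after that does it reach the check that should have
	 * gated any chip access: */
	if (chip->state != chip_state)
		/* sleep on chip->wq and retry ... */;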

>> One last idea.
>> 
>> The whole for(;;) loop in inval_cache_and_wait_for_operation() looks odd to me. Continuing with your idea of moving the chip->state while loop first, I see other problems. It seems to me that anywhere we drop and retake the chip mutex, the very next thing needs to be the state-check loop. Any break in holding that mutex means we must go back to the top and check state again.
>> 
>> I don't think the code as written does that. I have a completely reordered version of this function. (It didn't fix my issue but I think mine is something else.) On Monday I'll send that to you so you can consider it.
> 
> Yes, it is a bit odd. In addition to my patch, one could move the erase-suspend tests before the if(!timeo) test.

Precisely. I suspect you may well already have my reordered version. :-)
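
Roughly, the invariant I'm after is: every path that retakes chip->mutex must come back through the state check before anything touches the chip. A sketch of the skeleton (assumed names from cfi_cmdset_0001.c, not a drop-in patch):

 retry:
	/* We hold chip->mutex here.  Nothing below may touch the chip
	 * until this loop says the chip is ours. */
	while (chip->state != chip_state) {
		DECLARE_WAITQUEUE(wait, current);
		set_current_state(TASK_UNINTERRUPTIBLE);
		add_wait_queue(&chip->wq, &wait);
		mutex_unlock(&chip->mutex);
		schedule();
		remove_wait_queue(&chip->wq, &wait);
		mutex_lock(&chip->mutex);
		/* the mutex was dropped: loop and test state again */
	}

	status = map_read(map, cmd_adr);
	if (map_word_andequal(map, status, status_OK, status_OK))
		goto done;

	if (!timeo)
		goto time_out;

	/* Any delay that drops the mutex must not fall through to a
	 * chip access -- it must come back through the check above. */
	mutex_unlock(&chip->mutex);
	udelay(1);
	mutex_lock(&chip->mutex);
	goto retry;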

>>> Oh, one more thing: possibly one needs to add cpu_relax() or similar to force gcc to reload chip->state in the while loop?
>> 
>> I was also wondering about possible gcc optimization issues. I'm on 4.5.2 and that worked for me with the 2003 flash part. The same binaries fail with the 2008 parts, so I don't know.
> 
> Very recent gcc. I am on 3.4.6, but recently I began testing a little with 4.5.2. I do think I will wait for 4.5.3.

I tried 4.5.1 but it failed for other reasons. I submitted bug reports to GNU and a fix appeared (finally) in 4.5.2. It's been good so far, but I'm always mindful of that issue.

Staying current is a two-edged sword. In general, later gccs have better code analysis and warnings, which are valuable even if we ship using an older version.

>> Keep in mind that chip->state is not a hardware access; it's just another struct member. And I think the rule is that any function call counts as a sequence point, so gcc isn't allowed to assume it knows the value and must reload it.
>> 
>> Lastly, consider the direction of the behavior. If we're in the state-check while loop, then we got there because the two things were NOT equal. If an optimization error were causing a stale value to be compared, wouldn't the observed behavior be that it remains not equal? (If it's not reloaded, then how could it change?)
>> 
>> I'd expect an optimization error like that to get us stuck in the while loop, not exit from it prematurely.
> 
> Yes, all true. I wonder, though, if the mutex lock/unlock counts as a reload point? These are usually some inline asm. If not, one could possibly argue that the first chip->state access, before entering the while body, is using an old value.

Yes, how inlines interact with sequence points has never been entirely clear to me, especially since the compiler is free to inline something I didn't tell it to, and to ignore my telling it to inline if it wants to.

I *think* the rules are semantic: if it's written (preprocessor aside) to look like a function call, then it counts as a sequence point even if it ends up being inlined. But that's all quite beyond anything I can say for sure!
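
To put the worry in code form, a toy example (nothing from the driver):

	/* With no call or barrier in the body, gcc may legally keep
	 * chip->state in a register and never re-read it -- it's a
	 * plain, non-volatile struct member: */
	while (chip->state != chip_state)
		;

	/* Anything that is (or contains) a compiler barrier forces a
	 * re-read, because gcc must assume memory changed: */
	while (chip->state != chip_state)
		cpu_relax();	/* includes a barrier on the usual arches */

	/* The kernel's barrier() is just empty asm with a "memory"
	 * clobber, and mutex_lock()/mutex_unlock() are real
	 * out-of-line calls containing such barriers, so they should
	 * count as reload points: */
	#define barrier() __asm__ __volatile__("" : : : "memory")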

>> Makes my head hurt!
> 
> You are not alone :)

So collectively maybe we can make it hurt less. That's my theory, anyway, and I'm sticking to it.

-Mike
