[GIT PULL] omap changes for v2.6.39 merge window
Russell King - ARM Linux
linux at arm.linux.org.uk
Sun Apr 3 12:03:25 EDT 2011
On Sun, Apr 03, 2011 at 05:26:37PM +0200, Arnd Bergmann wrote:
> There are a few other examples that were done in a similar way:
> * The drivers/ide code still serves a few hardware platforms that
> never had anyone write a new libata code. Libata itself has
> been in a good shape for a long time though.
And there are platforms where libata just doesn't work but the old
ide driver does. My firewall with a CF card in is completely unusable
with libata, but works fine with the old IDE driver.
That's partly caused by there being no support for the ISA IOCS16 signal
on that hardware, so the ISA bus needs specific handling depending on
the behaviour of the ISA device being addressed. Yes, crap hardware,
nothing new there.
> We generally try to do gradual cleanups to any kernel code that is
> worth keeping, because as you say the duplication itself causes a
> lot of friction. For particularly hard cases, doing a replacement
> implementation is an exceptional way out. What we need to find a
> consensus on is how bad the problem in arch/arm/mach-*/ is:
> 1. No fundamental problem, just needs some care to clean up (your
> position, I guess), so we do do what we always do and keep doing
> gradual improvements, including treewide API changes.
> 2. Bad enough that starting a new competing implementation is easier
> because it lets us try different things more easily and reduce
> the number of treewide changes to all existing platforms.
> (this is where I think we are) Like IDE and OSS, the old code
> can still get improved and bug fixed, but concentrating on new
> code gives us better freedom to make progress more quickly.
> 3. In need of a complete replacement, like arch/ppc and a lot of
> drivers/staging. I'm not arguing that it's that bad.
Having just looked at the clocksource stuff, there are nine up-counting
32-bit clocksources which are relatively easy to port to a single piece
of code. There are a number of down-counting clocksources using various
methods to convert to an up-counting value - sometimes -readl(),
sometimes cs->mask - readl(), and sometimes ~readl().
Then there's those which are either 16-bit or 32-bit, and some of those
16-bit implementations must use readw() to avoid bus faults.
Combining all those together you end up with something pretty disgusting,
and an initialization function taking 7 arguments (iomem pointer, name,
rating, tick rate, size, up/down counter, clocksource flags).
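To make the objection concrete, such a catch-all initializer might end
up looking something like this - a hypothetical sketch of the
seven-argument form described above, not an actual kernel API, with all
names invented:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: one init function trying to cover every MMIO counter
 * variant - 32-bit or 16-bit, up- or down-counting - ends up taking
 * seven arguments, most of which any one platform doesn't vary. */
struct mmio_clocksource {
	void		*base;		/* void __iomem * in real kernel code */
	const char	*name;
	int		rating;
	uint32_t	tick_rate;
	unsigned	bits;		/* 16 or 32 */
	bool		count_down;
	unsigned	flags;
};

static int mmio_clocksource_init(struct mmio_clocksource *cs,
				 void *base, const char *name,
				 int rating, uint32_t tick_rate,
				 unsigned bits, bool count_down,
				 unsigned flags)
{
	if (bits != 16 && bits != 32)
		return -1;
	cs->base = base;
	cs->name = name;
	cs->rating = rating;
	cs->tick_rate = tick_rate;
	cs->bits = bits;
	cs->count_down = count_down;
	cs->flags = flags;
	return 0;
}
```

Every caller has to spell out all seven arguments even when only the
base address and tick rate differ between platforms.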
Does it lead to more maintainable code? I'm not convinced - while it
certainly reduces the amount of code, the reduced code is rather ugly.
Would an alternative be to introduce separate mmio-32bit-up, mmio-32bit-down,
mmio-16bit-up and mmio-16bit-down clocksources? That also doesn't give
me a good feeling about the result.
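That four-way split could be as little as four read helpers, each
hard-coding the access width and counting direction - a hypothetical
sketch, with plain pointer reads standing in for readl()/readw() so it
stays self-contained:

```c
#include <stdint.h>

/* Each variant fixes width and direction, so a platform picks one
 * helper instead of passing seven arguments to a combined init. */
static uint32_t mmio32_up(const volatile uint32_t *reg)
{
	return *reg;
}

static uint32_t mmio32_down(const volatile uint32_t *reg)
{
	return ~*reg;
}

static uint32_t mmio16_up(const volatile uint16_t *reg)
{
	return *reg;
}

static uint32_t mmio16_down(const volatile uint16_t *reg)
{
	/* The 16-bit access matters on hardware that faults on 32-bit
	 * reads; the complement is masked back down to 16 bits. */
	return (uint16_t)~*reg;
}
```

The per-variant code is trivial, but it quadruples the number of entry
points without covering the genuinely odd cases.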
Then there's those which change the cs->read function pointer at runtime,
and those which share that pointer with their sched_clock() implementation.
And those which need to do special things (like loop until read values are
stable) in their read functions, which probably can't ever be supported by
a common implementation.