[PATCH 2/7] MFD: add STM32 DFSDM support

Lars-Peter Clausen lars at metafoo.de
Sun Jan 29 06:34:55 PST 2017


On 01/29/2017 03:19 PM, Lars-Peter Clausen wrote:
> On 01/29/2017 01:28 PM, Jonathan Cameron wrote:
> [...]
>>>> Jonathan, Mark, Please could you share your opinion on this topic?
>> Hmm - based on a fairly quick read-through of the code (which is never
>> ideal!), I can see that the ideal would indeed be, as Lee says, to
>> expand the IIO interfaces sufficiently to support what you need.
>>
>>
>> So, reading the code (fairly quickly I'm afraid, as I had a lot of
>> reviews to catch up on this weekend).
>> What we need:
>> 1) DMA support in the ADC driver.  This would be good anyway!
>> 2) DMA consumer support - I defer to Lars for comments on this.
>> 3) Means of describing and controlling the sinc filters applied. 
>> 4) Appropriate channel support.  I'm not convinced that it doesn't make
>> sense to have IIO channels for the microphones - at least in a streaming
>> mode.  It's data - I don't really care what ;)
>> Coarsely, it's a filtered pulse-per-period counter, which is
>> a perfectly valid type to have a channel for.
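
For 3) and 4), IIO already has a per-channel oversampling_ratio attribute
(IIO_CHAN_INFO_OVERSAMPLING_RATIO) that might be a reasonable starting
point for exposing the decimation/sinc configuration. Very rough sketch,
all names and numbers below are made up:

#include <linux/iio/iio.h>

/* Illustrative only: one filter output exposed as an IIO channel whose
 * decimation ratio is controlled via oversampling_ratio. */
static const struct iio_chan_spec dfsdm_filter_channel = {
	.type = IIO_VOLTAGE,
	.indexed = 1,
	.channel = 0,
	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
			      BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO),
	.info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ),
	.scan_index = 0,
	.scan_type = {
		.sign = 's',
		.realbits = 24,
		.storagebits = 32,
	},
};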
>>
>> The big question to my mind is the DMA consumer support. How would
>> it work? If it wouldn't, this is somewhat of a non-starter.
>>
>> To bring up another slightly ugly case where it is borderline whether
>> an MFD makes sense (just as a reference point, something we have
>> discussed a few times before):
>>
>> ADCs with features directed at touchscreen support.
>> These are odd as the ADC bit is generic, but the specific output
>> and read sequences used for touchscreen reading don't correspond to
>> anything that makes any real sense for other applications.
>>
>> We have started to get hybrid drivers that have an MFD underneath but
>> do the ADC reads through IIO consumer interfaces, and the timing
>> control from a touchscreen driver.  We haven't really gotten this
>> one right yet either.
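
For reference, the consumer side of those hybrid drivers boils down to
something like this (purely illustrative; the function name, error
handling and the "x" channel mapping are made up):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/iio/consumer.h>

/* A client driver (e.g. a touchscreen driver) reading one sample through
 * the IIO consumer interface instead of poking the ADC registers itself. */
static int ts_read_position(struct device *dev, int *val)
{
	struct iio_channel *chan;
	int ret;

	chan = iio_channel_get(dev, "x");	/* mapping comes from DT/board code */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	ret = iio_read_channel_processed(chan, val);
	iio_channel_release(chan);

	return ret;
}

The sequencing/timing control then stays in the touchscreen driver.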
>>
>> Here, however, to my mind things are different - as I read it
>> (and feel free to point out what I'm missing), the sound use case
>> is just a question of setting up sampling frequencies and filters
>> appropriate to the microphones and to what ASoC expects?
>>
>> That's not to say the IIO DMA stuff is flexible enough (yet) to
>> handle the data flows, but perhaps we can work towards that.
> 
> Yeah, so this is a bit different, but not unexpected. And I'm sure we'll see
> more similar hardware in the future. I've talked about this before[1]: the
> cost structure of creating and manufacturing new hardware drives the design
> in a certain direction, so that we end up with general-purpose hardware that
> suddenly has applications in multiple frameworks that were previously fully
> orthogonal.
> 
> This device is certainly not a multi-function device. It only has one
> function: it's a sigma-delta demodulator. It is rather a
> multi-purpose device. It can be used for sigma-delta demodulation in audio
> applications as well as in more specialized data capture applications.
> 
> It's comparable to something like a GPIO that can be used to control a reset
> pin or to turn an LED on and off. The GPIO chip is not considered a
> multi-function device, though, even though it can be used for many different
> applications.
> 
> As for DMA, we already have a lot of DMA infrastructure on the audio side,
> and we probably want to reuse that rather than inserting IIO as a middle
> layer, since audio buffer capture has different requirements from IIO
> buffers and we'd have to go the route of the least common denominator and
> lose expressibility in the process.
> 
> I've created an IIO buffer[2] that does not capture data to memory but is
> only used to enable/disable the data capture process. We use this in setups
> where the data is passed from the converter to an application-specific
> processing chain without ever going through system memory. This buffer could
> probably also be used here on the audio side to control the converter state.
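
To illustrate what that buffer boils down to on the converter side (this
is not the actual code from [2]; the my_converter_* helpers are made up):
the buffer's setup ops only start and stop the capture path, no data is
ever copied into an IIO buffer in system memory.

#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

static int hw_path_postenable(struct iio_dev *indio_dev)
{
	/* start streaming into the dedicated hardware processing chain */
	return my_converter_start(iio_priv(indio_dev));
}

static int hw_path_predisable(struct iio_dev *indio_dev)
{
	/* stop the converter again when the buffer is disabled */
	return my_converter_stop(iio_priv(indio_dev));
}

static const struct iio_buffer_setup_ops hw_path_setup_ops = {
	.postenable = hw_path_postenable,
	.predisable = hw_path_predisable,
};

/* In probe:
 *	indio_dev->modes |= INDIO_BUFFER_HARDWARE;
 *	indio_dev->setup_ops = &hw_path_setup_ops;
 */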

I forgot to mention: I think the first thing we should do is work on
terminology. This is not an ADC; it is a configurable low-pass filter.

It works in conjunction with an analog frontend (ADC) that produces a 1-bit
pulse-density-modulated stream; it takes that stream and converts it into
N-bit PCM samples. The PCM samples are generated at a fraction of the PDM
stream sample rate that corresponds to the decimation factor.
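
As a concrete illustration of that relation (typical audio-style numbers,
nothing specific to this part):

static unsigned int pcm_rate(unsigned int pdm_rate, unsigned int decimation)
{
	/* e.g. 3072000 Hz 1-bit PDM / 64 = 48000 Hz N-bit PCM */
	return pdm_rate / decimation;
}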

This is not an unusual device. Many audio CODECs and audio controllers
contain such a core, as do most sigma-delta converters supported by IIO.
What is special about this part is that it is a dedicated core that is not
embedded in some other hardware component. This creates greater flexibility,
but of course also the greater complexity required to manage all that
flexibility.

We shouldn't codify anything about the kernel-internal frameworks through
which the device might be exposed into the devicetree. We should accurately
describe the hardware (including the analog frontend) and then create
appropriate software structures to handle them.



