[RFC PATCH v1 6/7] media: video: introduce face detection driver module
ming.lei at canonical.com
Mon Dec 12 23:01:39 EST 2011
On Mon, Dec 12, 2011 at 8:08 PM, HeungJun, Kim <riverful.kim at samsung.com> wrote:
> Hi Ming,
> It may be late, but I want to suggest one thing about the FD API.
> This OMAP FD block looks to have detection ability for faces only.
> But it's possible another device will appear which can detect a
> specific "object" or "pattern". Moreover, this API could expand into
> the "object recognition" area. So, I think it's good to change the API name
> to something like "v4l2_recog".
IMO, object detection is better; at least for now, OMAP4 and Samsung have
face detection IP modules, and face recognition is often done on the results
of face detection, so more complicated interfaces would be involved.
> Actually, I'm preparing a similar control class for mainline with the m5mols
> camera sensor driver. The m5mols camera sensor has a "face detection"
> function. But I have experience with robot recognition, and I
> remember an image processing chip which can detect a specific "pattern".
> So, I hesitated to name the API (control or ioctl, whatever) with "face".
> It may be possible to provide just "object" or "pattern" detection, not face.
> Even among user libraries on Windows, there is the famous "OpenCV", which
> supports not only "face" but also "object" detection.
Yes, object is better than face, and we can use an enum flag to describe
which kind of objects were detected. In fact, I plan to rename the generic
face detection driver as a generic object detection driver and let the
hardware driver handle the object detection details.
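A minimal sketch of that "enum flag" idea might look like the following. All names here are hypothetical illustrations of the proposal, not part of any mainline V4L2 API:

```c
#include <stdint.h>

/* Hypothetical object-type flag: lets one generic detection API
 * serve face, pattern, or other object detectors. */
enum v4l2_od_obj_type {
	V4L2_OD_OBJ_FACE	= 0,
	V4L2_OD_OBJ_PATTERN	= 1,
	V4L2_OD_OBJ_OTHER	= 2,
};

/* One detected object: its kind plus a bounding rectangle. */
struct v4l2_od_object {
	enum v4l2_od_obj_type	type;
	int32_t			left;
	int32_t			top;
	uint32_t		width;
	uint32_t		height;
};
```

With such a layout, a face detector would simply report every object as `V4L2_OD_OBJ_FACE`, while a pattern-matching IP could reuse the same result structure unchanged.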
> The function of the OMAP FDIF looks like that of the m5mols ISP.
> Please understand that I don't have experience with the OMAP AP. But I can
> tell you it's better to use the name "object recognition", not "face
> detection", for any other device or driver.
> In a few days, I'll share the CIDs I have thought of for the m5mols driver.
> And I hope to discuss them together with the OMAP FDIF.
You have been doing it already, :-)
> Thank you.
> Heungjun Kim
>> -----Original Message-----
>> From: linux-media-owner at vger.kernel.org [mailto:linux-media-
>> owner at vger.kernel.org] On Behalf Of Ming Lei
>> Sent: Monday, December 12, 2011 6:50 PM
>> To: Sylwester Nawrocki
>> Cc: linux-omap at vger.kernel.org; linux-arm-kernel at lists.infradead.org; linux-
>> kernel at vger.kernel.org; linux-media at vger.kernel.org
>> Subject: Re: [RFC PATCH v1 6/7] media: video: introduce face detection driver
>> On Mon, Dec 12, 2011 at 1:43 AM, Sylwester Nawrocki <snjw23 at gmail.com> wrote:
>> >> For OMAP4 FD, it is not necessary to include FD in the MC framework, since an
>> >> intermediate buffer is always required. If your HW doesn't belong to this
>> >> case, what is the output of your HW FD in the link? It also sounds as if FD
>> >> results may not be needed at all by the user space application in that case.
>> > The result data is similar to the OMAP4 one, plus a few other attributes.
>> > User buffers may be filled by a driver other than the FD device driver.
>> >> Could you provide some practical use cases for these?
>> > As above, and any device with a camera that controls something and makes
>> > decisions according to the presence of a human face in its view.
>> Sounds a reasonable case, :-)
>> >> If the FD result is associated with a frame, how can user space get the frame
>> >> if no v4l2 buffer is involved? Without a frame sequence, it is a bit
>> >> difficult to retrieve FD results from user space.
>> > If you pass image data in memory buffers from user space, yes, it could be
>> > impossible.
>> It is easy to get the frame sequence from the v4l2_buffer in that case too, :-)
>> > Not really; the v4l2_buffer may still be used by another (sub)driver within
>> > the same processing pipeline.
>> A related question: how can we make one application support both kinds of
>> devices (input from user space memory as on OMAP4, input from the SoC bus as
>> on Samsung) at the same time? Maybe some capability info should be exported
>> to user space? Or are there other suggestions?
>> Also, will your Samsung FD HW support detecting faces from memory, or only
>> from the SoC bus?
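One way the capability idea above could be expressed is with per-device input flags. The names below are purely illustrative assumptions, not an existing V4L2 interface:

```c
#include <stdint.h>

/* Hypothetical capability bits: let user space discover whether a
 * detector accepts frames from user-space memory (as on OMAP4)
 * and/or directly from the SoC bus (as on the Samsung ISP). */
#define OD_CAP_INPUT_USERPTR	(1u << 0)	/* memory buffers from user space */
#define OD_CAP_INPUT_BUS	(1u << 1)	/* frames streamed over the SoC bus */

/* Returns non-zero when the detector can read frames from memory. */
static int od_supports_memory_input(uint32_t caps)
{
	return (caps & OD_CAP_INPUT_USERPTR) != 0;
}
```

An application could then query the flags once at open time and fall back to bus-fed operation when memory input is absent.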
>> > It will be included in the FD result... or in dedicated v4l2 event data.
>> > More importantly, at the end of the day, we'll be getting buffers with image
>> > data at some stage of the video pipeline, which would contain the same frame
>> > identifier (I think we can ignore v4l2_buffer.field for FD purposes).
>> OK, I will associate the FD result with the frame identifier, and not invent
>> a dedicated v4l2 event for querying the frame sequence until a specific
>> requirement for it is proposed.
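The association discussed above could be sketched as follows. The result layout and helper are hypothetical; the only real anchor is that the identifier would mirror the `sequence` field of `struct v4l2_buffer`:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical result record: frame_sequence mirrors the sequence
 * field of the v4l2_buffer the detector processed, so user space can
 * pair results with frames without a dedicated v4l2 event. */
struct od_result {
	uint32_t frame_sequence;
	uint32_t object_count;
};

/* Return the result whose frame_sequence matches a dequeued buffer's
 * sequence number, or NULL if no result for that frame exists yet. */
static const struct od_result *
od_match_result(const struct od_result *results, size_t n,
		uint32_t buf_sequence)
{
	for (size_t i = 0; i < n; i++)
		if (results[i].frame_sequence == buf_sequence)
			return &results[i];
	return NULL;
}
```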
>> I will convert/integrate the recent discussions into the v2 patches for
>> further review, and sub-device support will be provided. But before starting,
>> I am still not clear on how to integrate FD into the MC framework. I
>> understand the FD sub-device is only a media entity, so how can the FD
>> sub-device find the media device (struct media_device)? Or need it not care
>> about that for now?
>> Ming Lei