[PATCH v4 0/2] Enumerate all pixel formats
Jonas Karlman
jonas at kwiboo.se
Fri Jul 19 14:59:16 PDT 2024
Hi,
On 2024-07-19 17:36, Nicolas Dufresne wrote:
> Hi,
>
> Le vendredi 19 juillet 2024 à 15:47 +0200, Benjamin Gaignard a écrit :
>>> What exactly is the problem you want to solve? A real-life problem, not a theoretical
>>> one :-)
>>
>> A real-life one: on a board with 2 different stateless decoders, being able to
>> detect the one that can decode 10bit bitstreams without testing all
>> codec-dependent controls.
>
> That leans toward giving an answer for the selected bitstream format though,
> since the same driver may support 10bit HEVC but not 10bit AV1.
>
> For the use case, both Chromium and GStreamer have a need to categorize
> decoders so that we avoid trying to use a decoder that can't do the task. More
> platforms are getting multiple decoders, and we also need to take into account
> the available software decoders.
>
> Just looking at the codec-specific profile is insufficient since we need two
> conditions to be met.
>
> 1. The driver must support 10bit for the specific CODEC (for most codecs this
> is visible in the profile)
> 2. The produced 10bit color format must be supported by userspace
>
> In today's implementation, in order to test this, we'd need to simulate a 10bit
> header control, so that when enumerating the formats we get a list of 10bit
> formats (optionally 8bit too, since some decoders can downscale colors), and
> finally verify that these pixel formats are known by userspace. This is not
> impossible, but very tedious; this proposal tries to make it easier.
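To make the tedium concrete, such a probe today looks roughly like the below,
using HEVC as an example (a simplified sketch; error handling is omitted, most
mandatory SPS fields are left out, the function name is hypothetical, and the
10bit capture formats checked for are just examples):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Hypothetical probe: pretend a 10bit HEVC stream was parsed, then
   * check if the decoder exposes a known 10bit CAPTURE format. Assumes
   * the OUTPUT format was already set to V4L2_PIX_FMT_HEVC_SLICE. */
  static int probe_10bit_hevc(int fd)
  {
          struct v4l2_ctrl_hevc_sps sps = {
                  .bit_depth_luma_minus8 = 2,     /* 8 + 2 = 10bit */
                  .bit_depth_chroma_minus8 = 2,
                  /* ...all other SPS fields also need sane values... */
          };
          struct v4l2_ext_control ctrl = {
                  .id = V4L2_CID_STATELESS_HEVC_SPS,
                  .size = sizeof(sps),
                  .ptr = &sps,
          };
          struct v4l2_ext_controls ctrls = {
                  .count = 1,
                  .controls = &ctrl,
          };
          struct v4l2_fmtdesc fmt = {
                  .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
          };

          /* Simulate a 10bit bitstream by setting the header control. */
          if (ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls))
                  return 0;

          /* Enumerate CAPTURE formats for the now-simulated 10bit stream. */
          for (fmt.index = 0; !ioctl(fd, VIDIOC_ENUM_FMT, &fmt); fmt.index++) {
                  if (fmt.pixelformat == V4L2_PIX_FMT_P010 ||
                      fmt.pixelformat == V4L2_PIX_FMT_NV15)
                          return 1;       /* known 10bit format found */
          }

          return 0;
  }

And this would have to be repeated per codec, with a different header control
and payload struct each time.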
I have also been wondering what the use-case for this would be, and if it
is something to consider before an FFmpeg v4l2-request hwaccel submission.
I am guessing GStreamer may need to decide what decoder to use before
bitstream parsing/decoding has started?
In my re-worked FFmpeg v4l2-request hwaccel series, which should hit the
ffmpeg-devel list any day now, we probe each video device one by one,
trying to identify if it is capable of decoding the current stream into a
known/supported capture format [1]. This typically happens once the header
for the first slice/frame has been parsed and is used to let the driver
select its preferred/optimal capture format. The first device where all
tests pass will be used, and if none works FFmpeg video players will
typically fall back to software decoding.
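The core of that probing boils down to something like the below (a heavily
simplified sketch of [1]; the helper names are hypothetical and error
handling is omitted):

  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Hypothetical helpers standing in for the real code in [1]. */
  int set_output_format_and_controls(int fd);  /* OUTPUT fmt + parsed headers */
  int capture_format_supported(uint32_t pixelformat);

  static int probe_video_device(const char *path)
  {
          struct v4l2_capability cap;
          struct v4l2_format fmt = {
                  .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
          };
          int fd = open(path, O_RDWR | O_NONBLOCK);

          if (fd < 0)
                  return -1;

          /* Must be a mem2mem device to be a decoder candidate. */
          if (ioctl(fd, VIDIOC_QUERYCAP, &cap) ||
              !(cap.device_caps & V4L2_CAP_VIDEO_M2M_MPLANE))
                  goto fail;

          /* Set the OUTPUT format to the codec pixelformat, push the
           * parsed header controls (SPS/PPS/...), then ask the driver
           * what CAPTURE format it prefers for this configuration. */
          if (set_output_format_and_controls(fd))
                  goto fail;
          if (ioctl(fd, VIDIOC_G_FMT, &fmt))
                  goto fail;

          /* Only use the device when its preferred format is one we know. */
          if (!capture_format_supported(fmt.fmt.pix_mp.pixelformat))
                  goto fail;

          return fd;

  fail:
          close(fd);
          return -1;
  }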
This type of probing may be a bit limiting and may depend too heavily on
the M2M Stateless Video Decoder Interface suggestion: "It is suggested
that the driver chooses the preferred/optimal format for the current
configuration.".
Would you suggest I change how this probing happens and implement some
smarter detection of which media+video device should be used for a
specific stream with the help of this new flag?
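E.g., if I read the series right, something like the below could run before
any bitstream parsing (a sketch only; it assumes the flag is named
V4L2_FMTDESC_FLAG_ENUM_ALL and is OR'ed into the index field as in the
patches, and the 10bit formats checked for are again just examples):

  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Hypothetical early probe: enumerate everything the decoder can ever
   * produce, regardless of the not-yet-configured bitstream. */
  static int device_can_do_10bit(int fd)
  {
          struct v4l2_fmtdesc fmt = {
                  .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
          };
          unsigned int i;

          for (i = 0; ; i++) {
                  fmt.index = i | V4L2_FMTDESC_FLAG_ENUM_ALL;
                  if (ioctl(fd, VIDIOC_ENUM_FMT, &fmt))
                          break;
                  if (fmt.pixelformat == V4L2_PIX_FMT_P010 ||
                      fmt.pixelformat == V4L2_PIX_FMT_NV15)
                          return 1;
          }

          return 0;
  }

That would still not answer the per-codec question you mention (10bit HEVC
without 10bit AV1), but it would at least allow a quick first filtering of
devices before any header has been parsed.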
[1] https://github.com/Kwiboo/FFmpeg/blob/v4l2request-2024-v2/libavcodec/v4l2_request_probe.c#L373-L424
Regards,
Jonas
>
> Nicolas
>
>