An option that can be activated on a bufferpool to request GL texture upload meta on buffers from the pool.
When this option is enabled on the bufferpool, #GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.
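As a minimal sketch (assuming pool, caps and size come from an allocation query handled by the element), both options can be enabled on the pool configuration:
|[
/* Sketch: enable video meta and GL texture upload meta on a buffer pool. */
GstStructure *config = gst_buffer_pool_get_config (pool);

gst_buffer_pool_config_set_params (config, caps, size, 2, 0);
gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
gst_buffer_pool_config_add_option (config,
    GST_BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META);

if (!gst_buffer_pool_set_config (pool, config))
  g_warning ("buffer pool rejected the configuration");
]|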
An option that can be activated on bufferpool to request video metadata on buffers from the pool.
Name of the caps feature indicating that the stream is interlaced.
Currently it is only used for video with 'interlace-mode=alternate'
to ensure backwards compatibility for this new mode.
In this mode each buffer carries a single field of interlaced video.
GST_VIDEO_BUFFER_FLAG_TOP_FIELD and GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD indicate whether the buffer carries a top or bottom field. The order of buffers/fields in the stream and the timestamps on the buffers indicate the temporal order of the fields.
Top and bottom fields are expected to alternate in this mode.
The frame rate in the caps still signals the frame rate, so the notional field rate will be twice the frame rate from the caps (see GST_VIDEO_INFO_FIELD_RATE_N).
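For illustration, a sink handling alternate-field buffers might decide which field a buffer carries roughly like this (buf is assumed to come from the element's render or chain function):
|[
/* Sketch: deciding which field a buffer carries in alternate mode.
 * GST_VIDEO_BUFFER_FLAG_TOP_FIELD also contains the ONEFIELD bit, so the
 * top-field test matches all of its bits before falling back to bottom. */
guint32 flags = GST_BUFFER_FLAGS (buf);

if ((flags & GST_VIDEO_BUFFER_FLAG_TOP_FIELD) == GST_VIDEO_BUFFER_FLAG_TOP_FIELD)
  g_print ("buffer %" GST_TIME_FORMAT ": top field\n",
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)));
else if ((flags & GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD) == GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD)
  g_print ("buffer %" GST_TIME_FORMAT ": bottom field\n",
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)));
]|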
This metadata stays relevant as long as video colorspace is unchanged.
This metadata stays relevant as long as video orientation is unchanged.
This metadata stays relevant as long as video size is unchanged.
This metadata is relevant for video streams.
#GstVideoAlphaMode, the alpha mode to use. Default is #GST_VIDEO_ALPHA_MODE_COPY.
#G_TYPE_DOUBLE, the alpha color value to use. Default is 1.0
#G_TYPE_BOOLEAN, whether gst_video_converter_frame() will return immediately without waiting for the conversion to complete. A subsequent gst_video_converter_frame_finish() must be performed to ensure completion of the conversion before subsequent use. Default %FALSE
#G_TYPE_UINT, the border color to use if #GST_VIDEO_CONVERTER_OPT_FILL_BORDER is set to %TRUE. The color is in ARGB format. Default 0xff000000
#GstVideoChromaMode, set the chroma resample mode for subsampled formats. Default is #GST_VIDEO_CHROMA_MODE_FULL.
#GstVideoChromaMethod, The resampler method to use for chroma resampling. Other options for the resampler can be used, see the #GstVideoResampler. Default is #GST_VIDEO_RESAMPLER_METHOD_LINEAR
#G_TYPE_INT, height in the destination frame, default destination height
#G_TYPE_INT, width in the destination frame, default destination width
#G_TYPE_INT, x position in the destination frame, default 0
#G_TYPE_INT, y position in the destination frame, default 0
#GstVideoDitherMethod, The dither method to use when changing bit depth. Default is #GST_VIDEO_DITHER_BAYER.
#G_TYPE_UINT, The quantization amount to dither to. Components will be quantized to multiples of this value. Default is 1
#G_TYPE_BOOLEAN, if the destination rectangle does not fill the complete destination image, render a border with #GST_VIDEO_CONVERTER_OPT_BORDER_ARGB. Otherwise the unused pixels in the destination are untouched. Default %TRUE.
#GstVideoGammaMode, set the gamma mode. Default is #GST_VIDEO_GAMMA_MODE_NONE.
#GstVideoMatrixMode, set the color matrix conversion mode for converting between Y'PbPr and non-linear RGB (R'G'B'). Default is #GST_VIDEO_MATRIX_MODE_FULL.
#GstVideoPrimariesMode, set the primaries conversion mode. Default is #GST_VIDEO_PRIMARIES_MODE_NONE.
#GstVideoResamplerMethod, The resampler method to use for resampling. Other options for the resampler can be used, see the #GstVideoResampler. Default is #GST_VIDEO_RESAMPLER_METHOD_CUBIC
#G_TYPE_UINT, The number of taps for the resampler. Default is 0: let the resampler choose a good value.
#G_TYPE_INT, source height to convert, default source height
#G_TYPE_INT, source width to convert, default source width
#G_TYPE_INT, source x position to start conversion, default 0
#G_TYPE_INT, source y position to start conversion, default 0
#G_TYPE_UINT, maximum number of threads to use. Default 1, 0 for the number of cores.
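As an illustrative sketch of how the converter options above are used (in_info/out_info and the mapped in_frame/out_frame are assumed to describe already negotiated input and output video), the options are passed as fields of a #GstStructure:
|[
/* Sketch: configure a converter that scales into a 320x240 sub-rectangle
 * and fills the remaining destination area with opaque black. */
GstStructure *opts = gst_structure_new ("GstVideoConverter",
    GST_VIDEO_CONVERTER_OPT_DEST_WIDTH, G_TYPE_INT, 320,
    GST_VIDEO_CONVERTER_OPT_DEST_HEIGHT, G_TYPE_INT, 240,
    GST_VIDEO_CONVERTER_OPT_FILL_BORDER, G_TYPE_BOOLEAN, TRUE,
    GST_VIDEO_CONVERTER_OPT_BORDER_ARGB, G_TYPE_UINT, 0xff000000,
    NULL);
GstVideoConverter *convert = gst_video_converter_new (&in_info, &out_info, opts);

/* gst_video_converter_new() takes ownership of the options structure. */
gst_video_converter_frame (convert, &in_frame, &out_frame);
gst_video_converter_free (convert);
]|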
Default maximum number of errors tolerated before signaling error.
The name of the templates for the sink pad.
The name of the templates for the source pad.
The name of the templates for the sink pad.
The name of the templates for the source pad.
List of all video formats, for use in template caps strings.
Formats are sorted by decreasing "quality", using these criteria by priority:
G_TYPE_DOUBLE, B parameter of the cubic filter. The B parameter controls the blurriness. Values between 0.0 and 2.0 are accepted. 1/3 is the default.
Below are some values of popular filters:
                 B       C
Hermite          0.0     0.0
Spline           1.0     0.0
Catmull-Rom      0.0     1/2
Mitchell         1/3     1/3
Robidoux         0.3782  0.3109
Robidoux Sharp   0.2620  0.3690
Robidoux Soft    0.6796  0.1602
G_TYPE_DOUBLE, C parameter of the cubic filter. The C parameter controls the Keys alpha value. Values between 0.0 and 2.0 are accepted. 1/3 is the default.
See #GST_VIDEO_RESAMPLER_OPT_CUBIC_B for some more common values
G_TYPE_DOUBLE, specifies the size of filter envelope for GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 1.0 and 5.0. 2.0 is the default.
G_TYPE_INT, limits the maximum number of taps to use. 16 is the default.
G_TYPE_DOUBLE, specifies sharpening of the filter for GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 0.0 and 1.0. 0.0 is the default.
G_TYPE_DOUBLE, specifies sharpness of the filter for GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 0.5 and 1.5. 1.0 is the default.
#GstVideoDitherMethod, The dither method to use for propagating quantization errors.
Attaches #GstVideoAFDMeta metadata to buffer with the given parameters.
a #GstBuffer
0 for progressive or field 1 and 1 for field 2
#GstVideoAFDSpec that applies to AFD value
#GstVideoAFDValue AFD enumeration
Attaches GstVideoAffineTransformationMeta metadata to buffer with the given parameters.
Attaches #GstVideoBarMeta metadata to buffer with the given parameters.
a #GstBuffer
0 for progressive or field 1 and 1 for field 2
if true then bar data specifies letterbox, otherwise pillarbox
If is_letterbox is true, then the value specifies the last line of a horizontal letterbox bar area at the top of the reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox bar area at the left side of the reconstructed frame.
If is_letterbox is true, then the value specifies the first line of a horizontal letterbox bar area at the bottom of the reconstructed frame. Otherwise, it specifies the first horizontal luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.
Attaches #GstVideoCaptionMeta metadata to buffer with the given parameters.
a #GstBuffer
The type of Closed Caption to add
The Closed Caption data
Attaches a #GstVideoCodecAlphaMeta metadata to buffer with the given alpha buffer.
Attaches GstVideoGLTextureUploadMeta metadata to buffer with the given parameters.
a #GstBuffer
the #GstVideoGLTextureOrientation
the number of textures
array of #GstVideoGLTextureType
the function to upload the buffer to a specific texture ID
function to copy user_data
function to free user_data
Attaches GstVideoMeta metadata to buffer with the given parameters and the default offsets and strides for format and width x height.
This function calculates the default offsets and strides and then calls gst_buffer_add_video_meta_full() with them.
a #GstBuffer
#GstVideoFrameFlags
a #GstVideoFormat
the width
the height
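A minimal sketch of attaching such meta to a buffer holding a tightly packed frame (format and dimensions are illustrative):
|[
/* Sketch: attach GstVideoMeta describing a 320x240 I420 frame with the
 * default (tightly packed) offsets and strides. */
GstBuffer *buffer = gst_buffer_new_allocate (NULL, 320 * 240 * 3 / 2, NULL);

gst_buffer_add_video_meta (buffer, GST_VIDEO_FRAME_FLAG_NONE,
    GST_VIDEO_FORMAT_I420, 320, 240);
]|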
Attaches GstVideoMeta metadata to buffer with the given parameters.
a #GstBuffer
#GstVideoFrameFlags
a #GstVideoFormat
the width
the height
number of planes
offset of each plane
stride of each plane
Sets an overlay composition on a buffer. The buffer will obtain its own reference to the composition, meaning this function does not take ownership of comp.
a #GstBuffer
a #GstVideoOverlayComposition
Attaches #GstVideoRegionOfInterestMeta metadata to buffer with the given parameters.
a #GstBuffer
Type of the region of interest (e.g. "face")
X position
Y position
width
height
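For illustration, a detection element might mark a region on a frame like this (coordinates are illustrative):
|[
/* Sketch: mark a detected face on the frame. */
GstVideoRegionOfInterestMeta *roi =
    gst_buffer_add_video_region_of_interest_meta (buffer, "face",
        64, 32, 120, 160);

/* The meta can later be looked up again by its id, see
 * gst_buffer_get_video_region_of_interest_meta_id(). */
g_print ("added ROI with id %d\n", roi->id);
]|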
Attaches #GstVideoRegionOfInterestMeta metadata to buffer with the given parameters.
a #GstBuffer
Type of the region of interest (e.g. "face")
X position
Y position
width
height
Attaches #GstVideoTimeCodeMeta metadata to buffer with the given parameters.
a #GstBuffer
a #GstVideoTimeCode
Attaches #GstVideoTimeCodeMeta metadata to buffer with the given parameters.
a #GstBuffer
framerate numerator
framerate denominator
a #GDateTime for the latest daily jam
a #GstVideoTimeCodeFlags
hours since the daily jam
minutes since the daily jam
seconds since the daily jam
frames since the daily jam
fields since the daily jam
Find the #GstVideoRegionOfInterestMeta on buffer with the given id.
Buffers can contain multiple #GstVideoRegionOfInterestMeta metadata items if multiple regions of interest are marked on a frame.
Get the video alignment from the bufferpool configuration config in align.
a #GstStructure
a #GstVideoAlignment
Set the video alignment in align to the bufferpool configuration config.
a #GstStructure
a #GstVideoAlignment
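A rough sketch of requesting padded buffers from a pool that supports video alignment (the padding values are illustrative):
|[
/* Sketch: request 16-pixel right/bottom padding from a buffer pool that
 * supports GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT. */
GstVideoAlignment align;
GstStructure *config = gst_buffer_pool_get_config (pool);

gst_video_alignment_reset (&align);
align.padding_right = 16;
align.padding_bottom = 16;

gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT);
gst_buffer_pool_config_set_video_alignment (config, &align);
gst_buffer_pool_set_config (pool, config);
]|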
Inspect a #GstEvent and return the #GstNavigationEventType of the event, or #GST_NAVIGATION_EVENT_INVALID if the event is not a #GstNavigation event.
Create a new navigation event given a navigation command.
The navigation command to use.
Create a new navigation event for the given key press.
A string identifying the key press.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for the given key release.
A string identifying the released key.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for the given mouse button press.
The number of the pressed mouse button.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for the given mouse button release.
The number of the released mouse button.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for the new mouse location.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for the mouse scroll.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
The x component of the scroll movement.
The y component of the scroll movement.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event signalling that all currently active touch points are cancelled and should be discarded. For example, under Wayland this event might be sent when a swipe passes the threshold to be recognized as a gesture by the compositor.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for an added touch point.
A number uniquely identifying this touch point. It must stay unique to this touch point at least until an up event is sent for the same identifier, or all touch points are cancelled.
The x coordinate of the new touch point.
The y coordinate of the new touch point.
Pressure data of the touch point, from 0.0 to 1.0, or NaN if no data is available.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event signalling the end of a touch frame. Touch frames signal that all previous down, motion and up events not yet followed by another touch frame event should be considered simultaneous.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for a moved touch point.
A number uniquely identifying this touch point. It must correlate to exactly one previous touch_start event.
The x coordinate of the touch point.
The y coordinate of the touch point.
Pressure data of the touch point, from 0.0 to 1.0, or NaN if no data is available.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Create a new navigation event for a removed touch point.
A number uniquely identifying this touch point. It must correlate to exactly one previous down event, but can be reused after sending this event.
The x coordinate of the touch point.
The y coordinate of the touch point.
a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).
Inspect a #GstNavigation command event and retrieve the enum value of the associated command.
Retrieve the details of either a #GstNavigation mouse button press event or a mouse button release event. Determine which type the event is using gst_navigation_event_get_type() to retrieve the #GstNavigationEventType.
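A small sketch of how a sink or application might handle such an event (event is assumed to be a navigation event received on a pad or via a bus message):
|[
/* Sketch: react to a mouse button press carried in a navigation event. */
if (gst_navigation_event_get_type (event) ==
    GST_NAVIGATION_EVENT_MOUSE_BUTTON_PRESS) {
  gint button;
  gdouble x, y;

  if (gst_navigation_event_parse_mouse_button_event (event, &button, &x, &y))
    g_print ("button %d pressed at %.0fx%.0f\n", button, x, y);
}
]|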
Check a bus message to see if it is a #GstNavigation event, and return the #GstNavigationMessageType identifying the type of the message if so.
Creates a new #GstNavigation message with type #GST_NAVIGATION_MESSAGE_ANGLES_CHANGED for notifying an application that the current angle, or current number of angles available in a multiangle video has changed.
A #GstObject to set as source of the new message.
The currently selected angle.
The number of viewing angles now available.
Inspect a #GstQuery and return the #GstNavigationQueryType associated with it if it is a #GstNavigation query.
Parse the #GstNavigation command query and retrieve the nth command from it into cmd. If the list contains fewer elements than nth, cmd will be set to #GST_NAVIGATION_COMMAND_INVALID.
Set the #GstNavigation command query result fields in query. The number of commands passed must be equal to n_commands.
a #GstQuery
An array containing n_cmds GstNavigationCommand values.
Lets you blend the src image into the dest image.
The #GstVideoFrame where to blend src in
the #GstVideoFrame that we want to blend into
The x offset in pixels where the src image should be blended
The y offset in pixels where the src image should be blended
the global_alpha each per-pixel alpha value is multiplied with
Scales a buffer containing RGBA (or AYUV) video. This is an internal helper function which is used to scale subtitle overlays, and may be deprecated in the near future. Use #GstVideoScaler to scale video buffers instead.
the #GstVideoInfo describing the video data in src_buffer
the source buffer containing video pixels to scale
the height in pixels to scale the video data in src_buffer to
the width in pixels to scale the video data in src_buffer to
Given the Pixel Aspect Ratio and size of an input video frame, and the pixel aspect ratio of the intended display device, calculates the actual display ratio the video will be rendered with.
Width of the video frame in pixels
Height of the video frame in pixels
Numerator of the pixel aspect ratio of the input video.
Denominator of the pixel aspect ratio of the input video.
Numerator of the pixel aspect ratio of the display device
Denominator of the pixel aspect ratio of the display device
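A worked sketch: a 720x576 frame with a 16:15 pixel aspect ratio shown on a square-pixel display results in a 4:3 display aspect ratio.
|[
/* Sketch: 720x576 with 16:15 PAR on a square-pixel display. */
guint dar_n, dar_d;

if (gst_video_calculate_display_ratio (&dar_n, &dar_d,
        720, 576, 16, 15, 1, 1))
  g_print ("display aspect ratio: %u:%u\n", dar_n, dar_d);  /* 4:3 */
]|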
Parses fixed Closed Caption #GstCaps and returns the corresponding caption type, or %GST_VIDEO_CAPTION_TYPE_UNKNOWN.
Creates new caps corresponding to type.
#GstVideoCaptionType
Takes the src rectangle and positions it at the center of the dst rectangle, with or without scaling. It handles clipping if the src rectangle is bigger than the dst one and scaling is set to FALSE.
a pointer to #GstVideoRectangle describing the source area
a pointer to #GstVideoRectangle describing the destination area
a #gboolean indicating if scaling should be applied or not
Convert s to a #GstVideoChromaSite
a chromasite string
Perform resampling of width chroma pixels in lines.
a #GstVideoChromaResample
pixel lines
the number of pixels on one line
Convert s to a #GstVideoChromaSite
a chromasite string
Converts site to its string representation.
a #GstVideoChromaSite
Converts site to its string representation.
a #GstVideoChromaSite
Converts the value to the #GstVideoColorMatrix.
The matrix coefficients (MatrixCoefficients) value is defined by "ISO/IEC 23001-8 Section 7.3 Table 4" and "ITU-T H.273 Table 4". "H.264 Table E-5" and "H.265 Table E.5" share the identical values.
an ITU-T H.273 matrix coefficients value
Get the coefficients used to convert between Y'PbPr and R'G'B' using matrix.
When:
|[
  0.0 <= [Y',R',G',B'] <= 1.0
  -0.5 <= [Pb,Pr] <= 0.5
]|
the general conversion is given by:
|[
  Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
  Pb = (B'-Y')/(2*(1-Kb))
  Pr = (R'-Y')/(2*(1-Kr))
]|
and the other way around:
|[
  R' = Y' + Cr*2*(1-Kr)
  G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
  B' = Y' + Cb*2*(1-Kb)
]|
a #GstVideoColorMatrix
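As a small sketch, the returned coefficients can be used to apply the forward conversion above to a single R'G'B' triple (values assumed to be normalized to [0..1]):
|[
/* Sketch: convert one normalized R'G'B' sample to Y'PbPr using BT.709
 * coefficients, following the formulas above. */
gdouble Kr, Kb, R = 0.5, G = 0.25, B = 0.75;

if (gst_video_color_matrix_get_Kr_Kb (GST_VIDEO_COLOR_MATRIX_BT709, &Kr, &Kb)) {
  gdouble Y  = Kr * R + (1 - Kr - Kb) * G + Kb * B;
  gdouble Pb = (B - Y) / (2 * (1 - Kb));
  gdouble Pr = (R - Y) / (2 * (1 - Kr));

  g_print ("Y'=%f Pb=%f Pr=%f\n", Y, Pb, Pr);
}
]|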
Converts #GstVideoColorMatrix to the "matrix coefficients" (MatrixCoefficients) value defined by "ISO/IEC 23001-8 Section 7.3 Table 4" and "ITU-T H.273 Table 4". "H.264 Table E-5" and "H.265 Table E.5" share the identical values.
a #GstVideoColorMatrix
Converts the value to the #GstVideoColorPrimaries.
The colour primaries (ColourPrimaries) value is defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2". "H.264 Table E-3" and "H.265 Table E.3" share the identical values.
an ITU-T H.273 colour primaries value
Get information about the chromaticity coordinates of primaries.
a #GstVideoColorPrimaries
Converts #GstVideoColorPrimaries to the "colour primaries" (ColourPrimaries) value defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2". "H.264 Table E-3" and "H.265 Table E.3" share the identical values.
a #GstVideoColorPrimaries
Compute the offset and scale values for each component of info. For each component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the range [0.0 .. 1.0].
The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert the component values in range [0.0 .. 1.0] back to their representation in info and range.
a #GstVideoColorRange
a #GstVideoFormatInfo
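For illustration, normalizing the luma component of 8-bit limited-range video:
|[
/* Sketch: compute offset/scale for 8-bit limited (TV) range and use them to
 * normalize a luma sample to [0.0 .. 1.0]. */
const GstVideoFormatInfo *finfo = gst_video_format_get_info (GST_VIDEO_FORMAT_I420);
gint offset[GST_VIDEO_MAX_COMPONENTS], scale[GST_VIDEO_MAX_COMPONENTS];
gint luma = 128;

gst_video_color_range_offsets (GST_VIDEO_COLOR_RANGE_16_235, finfo, offset, scale);

/* For 8-bit limited range this yields offset[0] = 16 and scale[0] = 219,
 * so (128 - 16) / 219.0 is roughly 0.51. */
g_print ("normalized luma: %f\n", (luma - offset[0]) / (gdouble) scale[0]);
]|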
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video formats or any image formats (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
a #GstSample
the #GstCaps to convert to
the maximum amount of time allowed for the processing.
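A minimal sketch converting a sample to PNG (the input sample is assumed to contain raw video with complete caps):
|[
/* Sketch: snapshot a raw video sample as PNG, waiting at most one second. */
GError *error = NULL;
GstCaps *to_caps = gst_caps_new_empty_simple ("image/png");
GstSample *converted =
    gst_video_convert_sample (sample, to_caps, GST_SECOND, &error);

if (converted == NULL) {
  g_warning ("conversion failed: %s", error->message);
  g_clear_error (&error);
}
gst_caps_unref (to_caps);
]|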
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video formats or any image formats (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
callback will be called after conversion, when an error occurred or if conversion didn't finish after timeout. callback will always be called from the thread default %GMainContext, see g_main_context_get_thread_default(). If GLib before 2.22 is used, this will always be the global default main context.
destroy_notify will be called after the callback was called and user_data is not needed anymore.
a #GstSample
the #GstCaps to convert to
the maximum amount of time allowed for the processing.
#GstVideoConvertSampleCallback that will be called after conversion.
Creates a new downstream force key unit event. A downstream force key unit event can be sent down the pipeline to request downstream elements to produce a key unit. A downstream force key unit event must also be sent when handling an upstream force key unit event to notify downstream that the latter has been handled.
To parse an event created by gst_video_event_new_downstream_force_key_unit() use gst_video_event_parse_downstream_force_key_unit().
the timestamp of the buffer that starts a new key unit
the stream_time of the buffer that starts a new key unit
the running_time of the buffer that starts a new key unit
%TRUE to produce headers when starting a new key unit
integer that can be used to number key units
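A rough sketch of creating and sending such an event (pad is assumed to be a source pad of the element that handles the request; unknown timestamps are left as GST_CLOCK_TIME_NONE):
|[
/* Sketch: request a key unit downstream with headers and a running count. */
GstEvent *event = gst_video_event_new_downstream_force_key_unit (
    GST_CLOCK_TIME_NONE, GST_CLOCK_TIME_NONE, GST_CLOCK_TIME_NONE,
    TRUE /* all_headers */, 1 /* count */);

gst_pad_push_event (pad, event);
]|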
Creates a new Still Frame event. If in_still is %TRUE, then the event represents the start of a still frame sequence. If it is %FALSE, then the event ends a still frame sequence.
To parse an event created by gst_video_event_new_still_frame() use gst_video_event_parse_still_frame().
boolean value for the still-frame state of the event.
Creates a new upstream force key unit event. An upstream force key unit event can be sent to request upstream elements to produce a key unit.
running_time can be set to request a new key unit at a specific running_time. If set to GST_CLOCK_TIME_NONE, upstream elements will produce a new key unit as soon as possible.
To parse an event created by gst_video_event_new_upstream_force_key_unit() use gst_video_event_parse_upstream_force_key_unit().
the running_time at which a new key unit should be produced
%TRUE to produce headers when starting a new key unit
integer that can be used to number key units
Get timestamp, stream-time, running-time, all-headers and count in the force key unit event. See gst_video_event_new_downstream_force_key_unit() for a full description of the downstream force key unit event.
running_time will be adjusted for any pad offsets of pads it was passing through.
Parse a #GstEvent, identify if it is a Still Frame event, and return the still-frame state from the event if it is. If the event represents the start of a still frame, the in_still variable will be set to TRUE, otherwise FALSE. It is OK to pass NULL for the in_still variable in order to just check whether the event is a valid still-frame event.
Create a still frame event using gst_video_event_new_still_frame()
Get running-time, all-headers and count in the force key unit event. See gst_video_event_new_upstream_force_key_unit() for a full description of the upstream force key unit event.
Create an upstream force key unit event using gst_video_event_new_upstream_force_key_unit()
running_time will be adjusted for any pad offsets of pads it was passing through.
Convert order to a #GstVideoFieldOrder
a field order
Convert order to its string representation.
a #GstVideoFieldOrder
Converts a FOURCC value into the corresponding #GstVideoFormat. If the FOURCC cannot be represented by #GstVideoFormat, #GST_VIDEO_FORMAT_UNKNOWN is returned.
a FOURCC value representing raw YUV video
Find the #GstVideoFormat for the given parameters.
the amount of bits used for a pixel
the amount of bits used to store a pixel. This value is bigger than depth
the endianness of the masks, #G_LITTLE_ENDIAN or #G_BIG_ENDIAN
the red mask
the green mask
the blue mask
the alpha mask, or 0 if no alpha mask
Convert the format string to its #GstVideoFormat.
a format string
Get the #GstVideoFormatInfo for format
a #GstVideoFormat
Get the default palette of format. This is the palette used in the pack function for paletted formats.
a #GstVideoFormat
Converts a #GstVideoFormat value into the corresponding FOURCC. Only a few YUV formats have corresponding FOURCC values. If format has no corresponding FOURCC value, 0 is returned.
a #GstVideoFormat video format
Returns a string containing a descriptive name for the #GstVideoFormat if there is one, or NULL otherwise.
a #GstVideoFormat video format
Return all the raw video formats supported by GStreamer.
Use info and buffer to fill in the values of frame. frame is usually allocated on the stack, and you will pass the address to the #GstVideoFrame structure allocated on the stack; gst_video_frame_map() will then fill in the structures with the various video-specific information you need to access the pixels of the video buffer. You can then use accessor macros such as GST_VIDEO_FRAME_COMP_DATA(), GST_VIDEO_FRAME_PLANE_DATA(), GST_VIDEO_FRAME_COMP_STRIDE(), GST_VIDEO_FRAME_PLANE_STRIDE() etc. to get to the pixels.
|[
GstVideoFrame vframe;
...
// set RGB pixels to black one at a time
if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
  guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&vframe, 0);
  guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (&vframe, 0);
  guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (&vframe, 0);
  guint width = GST_VIDEO_FRAME_WIDTH (&vframe);
  guint height = GST_VIDEO_FRAME_HEIGHT (&vframe);
  guint h, w;

  for (h = 0; h < height; ++h) {
    for (w = 0; w < width; ++w) {
      guint8 *pixel = pixels + h * stride + w * pixel_stride;

      memset (pixel, 0, pixel_stride);
    }
  }

  gst_video_frame_unmap (&vframe);
}
...
]|
All video planes of buffer will be mapped and the pointers will be set in frame->data.
The purpose of this function is to make it easy for you to get to the video pixels in a generic way, without you having to worry too much about details such as whether the video data is allocated in one contiguous memory chunk or multiple memory chunks (e.g. one for each plane); or if custom strides and custom plane offsets are used or not (as signalled by GstVideoMeta on each buffer). This function will just fill the #GstVideoFrame structure with the right values and if you use the accessor macros everything will just work and you can access the data easily. It also maps the underlying memory chunks for you.
a #GstVideoInfo
the buffer to map
#GstMapFlags
Use info and buffer to fill in the values of frame with the video frame information of frame id.
When id is -1, the default frame is mapped. When id != -1, this function will return %FALSE when there is no GstVideoMeta with that id.
All video planes of buffer will be mapped and the pointers will be set in frame->data.
a #GstVideoInfo
the buffer to map
the frame id to map
#GstMapFlags
Given the nominal duration of one video frame, this function will check some standard framerates for a close match (within 0.1%) and return one if possible.
It will calculate an arbitrary framerate if no close match was found, and return %FALSE.
It returns %FALSE if a duration of 0 is passed.
Nominal duration of one frame
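For example, a nominal NTSC frame duration of 33366667 ns maps back to 30000/1001:
|[
/* Sketch: recover a standard framerate from a per-frame duration. */
gint fps_n, fps_d;

if (gst_video_guess_framerate (33366667, &fps_n, &fps_d))
  g_print ("%d/%d\n", fps_n, fps_d);  /* 30000/1001 */
]|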
Initialize info with default values.
Convert mode to a #GstVideoInterlaceMode
a mode
Convert mode to its string representation.
a #GstVideoInterlaceMode
Return a generic raw video caps for formats defined in formats.
If formats is %NULL returns a caps for all the supported raw video formats, see gst_video_formats_raw().
an array of raw #GstVideoFormat, or %NULL
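A small sketch building template caps that advertise only a couple of formats:
|[
/* Sketch: build raw caps advertising only I420 and NV12. */
const GstVideoFormat formats[] = { GST_VIDEO_FORMAT_I420, GST_VIDEO_FORMAT_NV12 };
GstCaps *caps = gst_video_make_raw_caps (formats, G_N_ELEMENTS (formats));

/* caps now describes video/x-raw with format { I420, NV12 } and full
 * width/height/framerate ranges. */
gst_caps_unref (caps);
]|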
Return a generic raw video caps for formats defined in formats with features features.
If formats is %NULL returns a caps for all the supported video formats, see gst_video_formats_raw().
an array of raw #GstVideoFormat, or %NULL
the #GstCapsFeatures to set on the caps
Extract #GstVideoMasteringDisplayInfo from mastering
a #GstStructure representing #GstVideoMasteringDisplayInfo
Get the #GQuark for the "gst-video-scale" metadata transform operation.
Utility function that transforms the width/height/PAR and multiview mode and flags of a #GstVideoInfo into the requested mode.
A #GstVideoInfo structure to operate on
A #GstVideoMultiviewMode value
A set of #GstVideoMultiviewFlags
Parses the "image-orientation" tag and transforms it into the #GstVideoOrientationMethod enum.
This helper shall be used by classes implementing the #GstVideoOverlay interface that want the render rectangle to be controllable using properties. This helper will install "render-rectangle" property into the class.
The class on which the properties will be installed
The first free property ID to use
This helper shall be used by classes implementing the #GstVideoOverlay interface that want the render rectangle to be controllable using properties. This helper will parse and set the render rectangle calling gst_video_overlay_set_render_rectangle().
The instance on which the property is set
The highest property ID.
The property ID
The #GValue to be set
Get the tile index of the tile at coordinates x and y in the tiled image of x_tiles by y_tiles.
Use this method when mode is of type %GST_VIDEO_TILE_TYPE_INDEXED.
a #GstVideoTileMode
x coordinate
y coordinate
number of horizontal tiles
number of vertical tiles
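As a sketch, the index can be computed for every tile of a small tiled image (the 4x3 geometry and the ZFLIPZ_2X2 mode are only illustrative):
|[
/* Sketch: walk all tiles of a 4x3 image in an indexed tile mode. */
gint x, y;

for (y = 0; y < 3; y++) {
  for (x = 0; x < 4; x++) {
    guint idx = gst_video_tile_get_index (GST_VIDEO_TILE_MODE_ZFLIPZ_2X2,
        x, y, 4, 3);
    g_print ("tile (%d,%d) -> index %u\n", x, y, idx);
  }
}
]|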
Convert val to its gamma decoded value. This is the inverse operation of gst_video_color_transfer_encode().
For a non-linear value L' in the range [0..1], conversion to the linear L is in general performed with a power function like:
|[
  L = L' ^ gamma
]|
Depending on `func`, different formulas might be applied. Some formulas encode a linear segment in the lower range.
a #GstVideoTransferFunction
a value
Convert val to its gamma encoded value.
For a linear value L in the range [0..1], conversion to the non-linear (gamma encoded) L' is in general performed with a power function like:
|[
  L' = L ^ (1 / gamma)
]|
Depending on `func`, different formulas might be applied. Some formulas encode a linear segment in the lower range.
a #GstVideoTransferFunction
a value
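A small sketch of the round trip for a single value using the BT.709 transfer function (the function names follow those referenced above; depending on the GStreamer version the same operations may also be exposed as gst_video_transfer_function_encode()/decode()):
|[
/* Sketch: gamma encode a linear value and decode it back. */
gdouble linear = 0.18;
gdouble encoded = gst_video_color_transfer_encode (GST_VIDEO_TRANSFER_BT709, linear);
gdouble decoded = gst_video_color_transfer_decode (GST_VIDEO_TRANSFER_BT709, encoded);

/* decoded equals linear up to rounding error. */
g_print ("%f -> %f -> %f\n", linear, encoded, decoded);
]|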
Converts the value to the #GstVideoTransferFunction.
The transfer characteristics (TransferCharacteristics) value is defined by "ISO/IEC 23001-8 Section 7.2 Table 3" and "ITU-T H.273 Table 3". "H.264 Table E-4" and "H.265 Table E.4" share the identical values.
an ITU-T H.273 transfer characteristics value
Returns whether from_func and to_func are equivalent. There are cases (e.g. BT601, BT709, and BT2020_10) where several functions are functionally identical. In these cases, when doing conversion, we should consider them as equivalent. Also, BT2020_12 is the same as the aforementioned three for less than 12 bits per pixel.
#GstVideoTransferFunction to convert from
bits per pixel to convert from
#GstVideoTransferFunction to convert into
bits per pixel to convert into
Converts #GstVideoTransferFunction to the "transfer characteristics" (TransferCharacteristics) value defined by "ISO/IEC 23001-8 Section 7.2 Table 3" and "ITU-T H.273 Table 3". "H.264 Table E-4" and "H.265 Table E.4" share the identical values.
a #GstVideoTransferFunction
A bufferpool option to enable extra padding. When a bufferpool supports this option, gst_buffer_pool_config_set_video_alignment() can be called.
When this option is enabled on the bufferpool, #GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.