

Variables

BUFFER_POOL_OPTION_VIDEO_AFFINE_TRANSFORMATION_META: string
BUFFER_POOL_OPTION_VIDEO_ALIGNMENT: string

A bufferpool option to enable extra padding. When a bufferpool supports this option, gst_buffer_pool_config_set_video_alignment() can be called.

When this option is enabled on the bufferpool, #GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.

BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META: string

An option that can be activated on a bufferpool to request gl texture upload meta on buffers from the pool.

When this option is enabled on the bufferpool, GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.

BUFFER_POOL_OPTION_VIDEO_META: string

An option that can be activated on a bufferpool to request video metadata on buffers from the pool.

CAPS_FEATURE_FORMAT_INTERLACED: string

Name of the caps feature indicating that the stream is interlaced.

Currently it is only used for video with 'interlace-mode=alternate' to ensure backwards compatibility for this new mode. In this mode each buffer carries a single field of interlaced video. GST_VIDEO_BUFFER_FLAG_TOP_FIELD and GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD indicate whether the buffer carries a top or bottom field. The order of buffers/fields in the stream and the timestamps on the buffers indicate the temporal order of the fields. Top and bottom fields are expected to alternate in this mode. The frame rate in the caps still signals the frame rate, so the notional field rate will be twice the frame rate from the caps (see GST_VIDEO_INFO_FIELD_RATE_N).
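Since each buffer carries a single field in this mode, the notional field rate is just the caps frame rate with its numerator doubled. A minimal sketch in plain JavaScript (fieldRate() is a hypothetical helper, not part of the bindings):

```javascript
// Notional field rate for interlace-mode=alternate: twice the frame
// rate signalled in the caps (cf. GST_VIDEO_INFO_FIELD_RATE_N).
function fieldRate(fpsN, fpsD) {
    return [fpsN * 2, fpsD];
}

// NTSC 29.97 fps content carries 59.94 fields per second.
fieldRate(30000, 1001); // [60000, 1001]
```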

CAPS_FEATURE_META_GST_VIDEO_AFFINE_TRANSFORMATION_META: string
CAPS_FEATURE_META_GST_VIDEO_GL_TEXTURE_UPLOAD_META: string
CAPS_FEATURE_META_GST_VIDEO_META: string
CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION: string
META_TAG_VIDEO_COLORSPACE_STR: string

This metadata stays relevant as long as video colorspace is unchanged.

META_TAG_VIDEO_ORIENTATION_STR: string

This metadata stays relevant as long as video orientation is unchanged.

META_TAG_VIDEO_SIZE_STR: string

This metadata stays relevant as long as video size is unchanged.

META_TAG_VIDEO_STR: string

This metadata is relevant for video streams.

VIDEO_COLORIMETRY_BT2020: string
VIDEO_COLORIMETRY_BT2020_10: string
VIDEO_COLORIMETRY_BT2100_HLG: string
VIDEO_COLORIMETRY_BT2100_PQ: string
VIDEO_COLORIMETRY_BT601: string
VIDEO_COLORIMETRY_BT709: string
VIDEO_COLORIMETRY_SMPTE240M: string
VIDEO_COLORIMETRY_SRGB: string
VIDEO_COMP_A: number
VIDEO_COMP_B: number
VIDEO_COMP_G: number
VIDEO_COMP_INDEX: number
VIDEO_COMP_PALETTE: number
VIDEO_COMP_R: number
VIDEO_COMP_U: number
VIDEO_COMP_V: number
VIDEO_COMP_Y: number
VIDEO_CONVERTER_OPT_ALPHA_MODE: string

#GstVideoAlphaMode, the alpha mode to use. Default is #GST_VIDEO_ALPHA_MODE_COPY.

VIDEO_CONVERTER_OPT_ALPHA_VALUE: string

#G_TYPE_DOUBLE, the alpha color value to use. Defaults to 1.0.

VIDEO_CONVERTER_OPT_ASYNC_TASKS: string

#G_TYPE_BOOLEAN, whether gst_video_converter_frame() will return immediately without waiting for the conversion to complete. A subsequent gst_video_converter_frame_finish() must be performed to ensure completion of the conversion before subsequent use. Default %FALSE

VIDEO_CONVERTER_OPT_BORDER_ARGB: string

#G_TYPE_UINT, the border color to use if #GST_VIDEO_CONVERTER_OPT_FILL_BORDER is set to %TRUE. The color is in ARGB format. Default 0xff000000

VIDEO_CONVERTER_OPT_CHROMA_MODE: string

#GstVideoChromaMode, set the chroma resample mode for subsampled formats. Default is #GST_VIDEO_CHROMA_MODE_FULL.

VIDEO_CONVERTER_OPT_CHROMA_RESAMPLER_METHOD: string

#GstVideoChromaMethod, The resampler method to use for chroma resampling. Other options for the resampler can be used, see the #GstVideoResampler. Default is #GST_VIDEO_RESAMPLER_METHOD_LINEAR

VIDEO_CONVERTER_OPT_DEST_HEIGHT: string

#G_TYPE_INT, height in the destination frame, default destination height

VIDEO_CONVERTER_OPT_DEST_WIDTH: string

#G_TYPE_INT, width in the destination frame, default destination width

VIDEO_CONVERTER_OPT_DEST_X: string

#G_TYPE_INT, x position in the destination frame, default 0

VIDEO_CONVERTER_OPT_DEST_Y: string

#G_TYPE_INT, y position in the destination frame, default 0

VIDEO_CONVERTER_OPT_DITHER_METHOD: string

#GstVideoDitherMethod, The dither method to use when changing bit depth. Default is #GST_VIDEO_DITHER_BAYER.

VIDEO_CONVERTER_OPT_DITHER_QUANTIZATION: string

#G_TYPE_UINT, The quantization amount to dither to. Components will be quantized to multiples of this value. Default is 1
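Quantizing to multiples of a value can be sketched as follows (illustrative only; the converter also applies the selected dither pattern before quantizing):

```javascript
// Quantize a component value to the nearest multiple of q, as
// described for GST_VIDEO_CONVERTER_OPT_DITHER_QUANTIZATION.
function quantize(c, q) {
    return Math.round(c / q) * q;
}

quantize(200, 1);  // 200 (q = 1 leaves values untouched)
quantize(200, 16); // 208
```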

VIDEO_CONVERTER_OPT_FILL_BORDER: string

#G_TYPE_BOOLEAN, if the destination rectangle does not fill the complete destination image, render a border with #GST_VIDEO_CONVERTER_OPT_BORDER_ARGB. Otherwise the unused pixels in the destination are untouched. Default %TRUE.

VIDEO_CONVERTER_OPT_GAMMA_MODE: string

#GstVideoGammaMode, set the gamma mode. Default is #GST_VIDEO_GAMMA_MODE_NONE.

VIDEO_CONVERTER_OPT_MATRIX_MODE: string

#GstVideoMatrixMode, set the color matrix conversion mode for converting between Y'PbPr and non-linear RGB (R'G'B'). Default is #GST_VIDEO_MATRIX_MODE_FULL.

VIDEO_CONVERTER_OPT_PRIMARIES_MODE: string

#GstVideoPrimariesMode, set the primaries conversion mode. Default is #GST_VIDEO_PRIMARIES_MODE_NONE.

VIDEO_CONVERTER_OPT_RESAMPLER_METHOD: string

#GstVideoResamplerMethod, The resampler method to use for resampling. Other options for the resampler can be used, see the #GstVideoResampler. Default is #GST_VIDEO_RESAMPLER_METHOD_CUBIC

VIDEO_CONVERTER_OPT_RESAMPLER_TAPS: string

#G_TYPE_UINT, The number of taps for the resampler. Default is 0: let the resampler choose a good value.

VIDEO_CONVERTER_OPT_SRC_HEIGHT: string

#G_TYPE_INT, source height to convert, default source height

VIDEO_CONVERTER_OPT_SRC_WIDTH: string

#G_TYPE_INT, source width to convert, default source width

VIDEO_CONVERTER_OPT_SRC_X: string

#G_TYPE_INT, source x position to start conversion, default 0

VIDEO_CONVERTER_OPT_SRC_Y: string

#G_TYPE_INT, source y position to start conversion, default 0

VIDEO_CONVERTER_OPT_THREADS: string

#G_TYPE_UINT, maximum number of threads to use. Default 1, 0 for the number of cores.

VIDEO_DECODER_MAX_ERRORS: number

Default maximum number of errors tolerated before signaling error.

VIDEO_DECODER_SINK_NAME: string

The name of the templates for the sink pad.

VIDEO_DECODER_SRC_NAME: string

The name of the templates for the source pad.

VIDEO_ENCODER_SINK_NAME: string

The name of the templates for the sink pad.

VIDEO_ENCODER_SRC_NAME: string

The name of the templates for the source pad.

VIDEO_FORMATS_ALL: string

List of all video formats, for use in template caps strings.

Formats are sorted by decreasing "quality", using these criteria by priority:

  • number of components
  • depth
  • subsampling factor of the width
  • subsampling factor of the height
  • number of planes
  • native endianness preferred
  • pixel stride
  • poffset
  • prefer non-complex formats
  • prefer YUV formats over RGB ones
  • prefer I420 over YV12
  • format name
VIDEO_FPS_RANGE: string
VIDEO_MAX_COMPONENTS: number
VIDEO_MAX_PLANES: number
VIDEO_RESAMPLER_OPT_CUBIC_B: string

G_TYPE_DOUBLE, B parameter of the cubic filter. The B parameter controls the blurriness. Values between 0.0 and 2.0 are accepted. 1/3 is the default.

Below are some values of popular filters:

                    B       C
    Hermite         0.0     0.0
    Spline          1.0     0.0
    Catmull-Rom     0.0     1/2
    Mitchell        1/3     1/3
    Robidoux        0.3782  0.3109
    Robidoux Sharp  0.2620  0.3690
    Robidoux Soft   0.6796  0.1602

VIDEO_RESAMPLER_OPT_CUBIC_C: string

G_TYPE_DOUBLE, C parameter of the cubic filter. The C parameter controls the Keys alpha value. Values between 0.0 and 2.0 are accepted. 1/3 is the default.

See #GST_VIDEO_RESAMPLER_OPT_CUBIC_B for some more common values
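Both parameters plug into the standard Mitchell–Netravali cubic kernel. A sketch of that kernel in plain JavaScript (not part of the GstVideo bindings):

```javascript
// Mitchell–Netravali cubic filter kernel for given B and C parameters.
// Support is [-2, 2]; B = C = 1/3 gives Mitchell, B = 0 and C = 0
// gives Hermite, B = 0 and C = 1/2 gives Catmull-Rom.
function cubicKernel(x, B, C) {
    const ax = Math.abs(x);
    if (ax < 1) {
        return ((12 - 9 * B - 6 * C) * ax ** 3 +
                (-18 + 12 * B + 6 * C) * ax ** 2 +
                (6 - 2 * B)) / 6;
    } else if (ax < 2) {
        return ((-B - 6 * C) * ax ** 3 +
                (6 * B + 30 * C) * ax ** 2 +
                (-12 * B - 48 * C) * ax +
                (8 * B + 24 * C)) / 6;
    }
    return 0;
}

cubicKernel(0, 0, 0); // 1 (Hermite reproduces sample values exactly)
```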

VIDEO_RESAMPLER_OPT_ENVELOPE: string

G_TYPE_DOUBLE, specifies the size of the filter envelope for #GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 1.0 and 5.0. 2.0 is the default.

VIDEO_RESAMPLER_OPT_MAX_TAPS: string

G_TYPE_INT, limits the maximum number of taps to use. 16 is the default.

VIDEO_RESAMPLER_OPT_SHARPEN: string

G_TYPE_DOUBLE, specifies the sharpening of the filter for #GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 0.0 and 1.0. 0.0 is the default.

VIDEO_RESAMPLER_OPT_SHARPNESS: string

G_TYPE_DOUBLE, specifies the sharpness of the filter for #GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 0.5 and 1.5. 1.0 is the default.
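The envelope option above corresponds to the window size a of the standard Lanczos kernel, sketched below in plain JavaScript (the SHARPEN and SHARPNESS options further adjust the kernel inside GStreamer and are not modeled here):

```javascript
// Normalized sinc and the windowed Lanczos kernel with envelope `a`.
function sinc(x) {
    return x === 0 ? 1 : Math.sin(Math.PI * x) / (Math.PI * x);
}

function lanczosKernel(x, a) {
    if (Math.abs(x) >= a)
        return 0;
    return sinc(x) * sinc(x / a);
}

lanczosKernel(0, 2); // 1
lanczosKernel(1, 2); // ~0 (near-zero at non-zero integers in the window)
```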

VIDEO_SCALER_OPT_DITHER_METHOD: string

#GstVideoDitherMethod, The dither method to use for propagating quantization errors.

VIDEO_SIZE_RANGE: string
VIDEO_TILE_TYPE_MASK: number
VIDEO_TILE_TYPE_SHIFT: number
VIDEO_TILE_X_TILES_MASK: number
VIDEO_TILE_Y_TILES_SHIFT: number

Functions

  • buffer_add_video_bar_meta(buffer: Gst.Buffer, field: number, is_letterbox: boolean, bar_data1: number, bar_data2: number): VideoBarMeta
  • Attaches #GstVideoBarMeta metadata to buffer with the given parameters.

    Parameters

    • buffer: Gst.Buffer

      a #GstBuffer

    • field: number

      0 for progressive or field 1 and 1 for field 2

    • is_letterbox: boolean

      if true then bar data specifies letterbox, otherwise pillarbox

    • bar_data1: number

      If is_letterbox is true, then the value specifies the last line of a horizontal letterbox bar area at top of reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox bar area at the left side of the reconstructed frame

    • bar_data2: number

      If is_letterbox is true, then the value specifies the first line of a horizontal letterbox bar area at bottom of reconstructed frame. Otherwise, it specifies the first horizontal luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.

    Returns VideoBarMeta

  • Attaches GstVideoMeta metadata to buffer with the given parameters and the default offsets and strides for format and width x height.

    This function calculates the default offsets and strides and then calls gst_buffer_add_video_meta_full() with them.

    Parameters

    Returns VideoMeta

  • Attaches #GstVideoRegionOfInterestMeta metadata to buffer with the given parameters.

    Parameters

    • buffer: Gst.Buffer

      a #GstBuffer

    • roi_type: string

      Type of the region of interest (e.g. "face")

    • x: number

      X position

    • y: number

      Y position

    • w: number

      width

    • h: number

      height

    Returns VideoRegionOfInterestMeta

  • Attaches #GstVideoRegionOfInterestMeta metadata to buffer with the given parameters.

    Parameters

    • buffer: Gst.Buffer

      a #GstBuffer

    • roi_type: number

      Type of the region of interest (e.g. "face")

    • x: number

      X position

    • y: number

      Y position

    • w: number

      width

    • h: number

      height

    Returns VideoRegionOfInterestMeta

  • Attaches #GstVideoTimeCodeMeta metadata to buffer with the given parameters.

    Parameters

    • buffer: Gst.Buffer

      a #GstBuffer

    • fps_n: number

      framerate numerator

    • fps_d: number

      framerate denominator

    • latest_daily_jam: GLib.DateTime

      a #GDateTime for the latest daily jam

    • flags: VideoTimeCodeFlags

      a #GstVideoTimeCodeFlags

    • hours: number

      hours since the daily jam

    • minutes: number

      minutes since the daily jam

    • seconds: number

      seconds since the daily jam

    • frames: number

      frames since the daily jam

    • field_count: number

      fields since the daily jam

    Returns VideoTimeCodeMeta

  • Find the #GstVideoMeta on buffer with the given id.

    Buffers can contain multiple #GstVideoMeta metadata items when dealing with multiview buffers.

    Parameters

    • buffer: Gst.Buffer

      a #GstBuffer

    • id: number

      a metadata id

    Returns VideoMeta

  • Find the #GstVideoRegionOfInterestMeta on buffer with the given id.

    Buffers can contain multiple #GstVideoRegionOfInterestMeta metadata items if multiple regions of interests are marked on a frame.

    Parameters

    • buffer: Gst.Buffer

      a #GstBuffer

    • id: number

      a metadata id

    Returns VideoRegionOfInterestMeta

  • is_video_overlay_prepare_window_handle_message(msg: Gst.Message): boolean
  • navigation_event_get_coordinates(event: Gst.Event): [boolean, number, number]
  • Create a new navigation event for the given key mouse button press.

    Parameters

    • button: number

      The number of the pressed mouse button.

    • x: number

      The x coordinate of the mouse cursor.

    • y: number

      The y coordinate of the mouse cursor.

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event for the given key mouse button release.

    Parameters

    • button: number

      The number of the released mouse button.

    • x: number

      The x coordinate of the mouse cursor.

    • y: number

      The y coordinate of the mouse cursor.

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event for the new mouse location.

    Parameters

    • x: number

      The x coordinate of the mouse cursor.

    • y: number

      The y coordinate of the mouse cursor.

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event for the mouse scroll.

    Parameters

    • x: number

      The x coordinate of the mouse cursor.

    • y: number

      The y coordinate of the mouse cursor.

    • delta_x: number

      The x component of the scroll movement.

    • delta_y: number

      The y component of the scroll movement.

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event signalling that all currently active touch points are cancelled and should be discarded. For example, under Wayland this event might be sent when a swipe passes the threshold to be recognized as a gesture by the compositor.

    Parameters

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event for an added touch point.

    Parameters

    • identifier: number

      A number uniquely identifying this touch point. It must stay unique to this touch point at least until an up event is sent for the same identifier, or all touch points are cancelled.

    • x: number

      The x coordinate of the new touch point.

    • y: number

      The y coordinate of the new touch point.

    • pressure: number

      Pressure data of the touch point, from 0.0 to 1.0, or NaN if no data is available.

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event signalling the end of a touch frame. Touch frames signal that all previous down, motion and up events not followed by another touch frame event already should be considered simultaneous.

    Parameters

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event for a moved touch point.

    Parameters

    • identifier: number

      A number uniquely identifying this touch point. It must correlate to exactly one previous touch_start event.

    • x: number

      The x coordinate of the touch point.

    • y: number

      The y coordinate of the touch point.

    • pressure: number

      Pressure data of the touch point, from 0.0 to 1.0, or NaN if no data is available.

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • Create a new navigation event for a removed touch point.

    Parameters

    • identifier: number

      A number uniquely identifying this touch point. It must correlate to exactly one previous down event, but can be reused after sending this event.

    • x: number

      The x coordinate of the touch point.

    • y: number

      The y coordinate of the touch point.

    • state: NavigationModifierType

      a bit-mask representing the state of the modifier keys (e.g. Control, Shift and Alt).

    Returns Gst.Event

  • navigation_event_parse_key_event(event: Gst.Event): [boolean, string]
Note: press and release events are generated for modifier keys (as defined in #GstNavigationModifierType) even if their states are already present on all other related events.

    Parameters

    • event: Gst.Event

      A #GstEvent to inspect.

    Returns [boolean, string]

  • navigation_event_parse_mouse_button_event(event: Gst.Event): [boolean, number, number, number]
  • Retrieve the details of either a #GstNavigation mouse button press event or a mouse button release event. Determine which type the event is using gst_navigation_event_get_type() to retrieve the #GstNavigationEventType.

    Parameters

    • event: Gst.Event

      A #GstEvent to inspect.

    Returns [boolean, number, number, number]

  • navigation_event_parse_mouse_move_event(event: Gst.Event): [boolean, number, number]
  • Inspect a #GstNavigation mouse movement event and extract the coordinates of the event.

    Parameters

    • event: Gst.Event

      A #GstEvent to inspect.

    Returns [boolean, number, number]

  • navigation_event_parse_mouse_scroll_event(event: Gst.Event): [boolean, number, number, number, number]
  • Inspect a #GstNavigation mouse scroll event and extract the coordinates of the event.

    Parameters

    • event: Gst.Event

      A #GstEvent to inspect.

    Returns [boolean, number, number, number, number]

  • navigation_event_parse_touch_event(event: Gst.Event): [boolean, number, number, number, number]
  • Retrieve the details of a #GstNavigation touch-down or touch-motion event. Determine which type the event is using gst_navigation_event_get_type() to retrieve the #GstNavigationEventType.

    Parameters

    • event: Gst.Event

      A #GstEvent to inspect.

    Returns [boolean, number, number, number, number]

  • navigation_event_parse_touch_up_event(event: Gst.Event): [boolean, number, number, number]
  • navigation_event_set_coordinates(event: Gst.Event, x: number, y: number): boolean
  • Try to set x and y coordinates on a #GstNavigation event. The event must be writable.

    Parameters

    • event: Gst.Event

      The #GstEvent to modify.

    • x: number

      The x coordinate to set.

    • y: number

      The y coordinate to set.

    Returns boolean

  • navigation_message_new_angles_changed(src: Gst.Object, cur_angle: number, n_angles: number): Gst.Message
  • Creates a new #GstNavigation message with type #GST_NAVIGATION_MESSAGE_ANGLES_CHANGED for notifying an application that the current angle, or current number of angles available in a multiangle video has changed.

    Parameters

    • src: Gst.Object

      A #GstObject to set as source of the new message.

    • cur_angle: number

      The currently selected angle.

    • n_angles: number

      The number of viewing angles now available.

    Returns Gst.Message

  • Creates a new #GstNavigation message with type #GST_NAVIGATION_MESSAGE_MOUSE_OVER.

    Parameters

    • src: Gst.Object

      A #GstObject to set as source of the new message.

    • active: boolean

%TRUE if the mouse has entered a clickable area of the display. %FALSE if it is over a non-clickable area.

    Returns Gst.Message

  • navigation_message_parse_angles_changed(message: Gst.Message): [boolean, number, number]
  • Parse a #GstNavigation message of type GST_NAVIGATION_MESSAGE_ANGLES_CHANGED and extract the cur_angle and n_angles parameters.

    Parameters

    Returns [boolean, number, number]

  • Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_EVENT and extract contained #GstEvent. The caller must unref the event when done with it.

    Parameters

    Returns [boolean, Gst.Event]

  • navigation_message_parse_mouse_over(message: Gst.Message): [boolean, boolean]
  • Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_MOUSE_OVER and extract the active/inactive flag. If the mouse over event is marked active, it indicates that the mouse is over a clickable area.

    Parameters

    Returns [boolean, boolean]

  • navigation_query_new_angles(): Gst.Query
  • Create a new #GstNavigation angles query. When executed, it will query the pipeline for the set of currently available angles, which may be greater than one in a multiangle video.

    Returns Gst.Query

  • navigation_query_new_commands(): Gst.Query
  • navigation_query_parse_angles(query: Gst.Query): [boolean, number, number]
  • Parse the current angle number in the #GstNavigation angles query into the #guint pointed to by the cur_angle variable, and the number of available angles into the #guint pointed to by the n_angles variable.

    Parameters

    Returns [boolean, number, number]

  • navigation_query_parse_commands_length(query: Gst.Query): [boolean, number]
  • navigation_query_parse_commands_nth(query: Gst.Query, nth: number): [boolean, GstVideo.NavigationCommand]
  • Parse the #GstNavigation command query and retrieve the nth command from it into cmd. If the list contains fewer elements than nth, cmd will be set to #GST_NAVIGATION_COMMAND_INVALID.

    Parameters

    • query: Gst.Query

      a #GstQuery

    • nth: number

      the nth command to retrieve.

    Returns [boolean, GstVideo.NavigationCommand]

  • navigation_query_set_angles(query: Gst.Query, cur_angle: number, n_angles: number): void
  • Set the #GstNavigation angles query result field in query.

    Parameters

    • query: Gst.Query

      a #GstQuery

    • cur_angle: number

      the current viewing angle to set.

    • n_angles: number

      the number of viewing angles to set.

    Returns void

  • video_afd_meta_api_get_type(): GType
  • video_affine_transformation_meta_api_get_type(): GType
  • video_affine_transformation_meta_get_info(): Gst.MetaInfo
  • video_bar_meta_api_get_type(): GType
  • Lets you blend the src image into the dest image

    Parameters

    • dest: VideoFrame

      The #GstVideoFrame where to blend src in

    • src: VideoFrame

the #GstVideoFrame that we want to blend into dest

    • x: number

      The x offset in pixel where the src image should be blended

    • y: number

      the y offset in pixel where the src image should be blended

    • global_alpha: number

      the global_alpha each per-pixel alpha value is multiplied with

    Returns boolean

  • Scales a buffer containing RGBA (or AYUV) video. This is an internal helper function which is used to scale subtitle overlays, and may be deprecated in the near future. Use #GstVideoScaler to scale video buffers instead.

    Parameters

    • src: VideoInfo

      the #GstVideoInfo describing the video data in src_buffer

    • src_buffer: Gst.Buffer

      the source buffer containing video pixels to scale

    • dest_height: number

      the height in pixels to scale the video data in src_buffer to

    • dest_width: number

      the width in pixels to scale the video data in src_buffer to

    Returns [VideoInfo, Gst.Buffer]

  • video_calculate_display_ratio(video_width: number, video_height: number, video_par_n: number, video_par_d: number, display_par_n: number, display_par_d: number): [boolean, number, number]
  • Given the Pixel Aspect Ratio and size of an input video frame, and the pixel aspect ratio of the intended display device, calculates the actual display ratio the video will be rendered with.

    Parameters

    • video_width: number

      Width of the video frame in pixels

    • video_height: number

      Height of the video frame in pixels

    • video_par_n: number

      Numerator of the pixel aspect ratio of the input video.

    • video_par_d: number

      Denominator of the pixel aspect ratio of the input video.

    • display_par_n: number

      Numerator of the pixel aspect ratio of the display device

    • display_par_d: number

      Denominator of the pixel aspect ratio of the display device

    Returns [boolean, number, number]
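The computation reduces to multiplying the frame size by the two pixel aspect ratios. A sketch in plain JavaScript (displayRatio() is a hypothetical helper, not the bound C function, and omits the overflow handling the real implementation performs):

```javascript
// DAR = (w * video_par_n * display_par_d) :
//       (h * video_par_d * display_par_n), reduced to lowest terms.
function gcd(a, b) {
    return b === 0 ? a : gcd(b, a % b);
}

function displayRatio(w, h, vparN, vparD, dparN, dparD) {
    const n = w * vparN * dparD;
    const d = h * vparD * dparN;
    const g = gcd(n, d);
    return [n / g, d / g];
}

// 720x576 PAL with a 16/15 pixel aspect ratio on square pixels is 4:3.
displayRatio(720, 576, 16, 15, 1, 1); // [4, 3]
```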

  • video_caption_meta_api_get_type(): GType
Takes the src rectangle and positions it at the center of the dst rectangle, with or without scaling. It handles clipping if the src rectangle is bigger than the dst one and scaling is set to FALSE.

    Parameters

    • src: VideoRectangle

      a pointer to #GstVideoRectangle describing the source area

    • dst: VideoRectangle

      a pointer to #GstVideoRectangle describing the destination area

    • scaling: boolean

      a #gboolean indicating if scaling should be applied or not

    Returns VideoRectangle

  • video_codec_alpha_meta_api_get_type(): GType
Converts the value to the #GstVideoColorMatrix. The matrix coefficients (MatrixCoefficients) value is defined by "ISO/IEC 23001-8 Section 7.3 Table 4" and "ITU-T H.273 Table 4". "H.264 Table E-5" and "H.265 Table E.5" share the identical values.

    Parameters

    • value: number

an ITU-T H.273 matrix coefficients value

    Returns VideoColorMatrix

  • video_color_matrix_get_Kr_Kb(matrix: VideoColorMatrix): [boolean, number, number]
  • Get the coefficients used to convert between Y'PbPr and R'G'B' using matrix.

    When:

        0.0 <= [Y', R', G', B'] <= 1.0
        -0.5 <= [Pb, Pr] <= 0.5

    the general conversion is given by:

        Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
        Pb = (B'-Y')/(2*(1-Kb))
        Pr = (R'-Y')/(2*(1-Kr))

    and the other way around:

        R' = Y' + Cr*2*(1-Kr)
        G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
        B' = Y' + Cb*2*(1-Kb)

    Parameters

    • matrix: VideoColorMatrix

      a #GstVideoColorMatrix

    Returns [boolean, number, number]
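The forward conversion above can be written directly from the Kr/Kb pair this function returns (e.g. BT.709 uses Kr = 0.2126, Kb = 0.0722). A sketch in plain JavaScript:

```javascript
// R'G'B' (each in 0..1) to Y'PbPr using the Kr/Kb coefficients.
function rgbToYPbPr(r, g, b, Kr, Kb) {
    const y = Kr * r + (1 - Kr - Kb) * g + Kb * b;
    const pb = (b - y) / (2 * (1 - Kb));
    const pr = (r - y) / (2 * (1 - Kr));
    return [y, pb, pr];
}

// White maps to full luma with zero chroma.
rgbToYPbPr(1, 1, 1, 0.2126, 0.0722); // ~[1, 0, 0]
```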

  • Converts #GstVideoColorMatrix to the "matrix coefficients" (MatrixCoefficients) value defined by "ISO/IEC 23001-8 Section 7.3 Table 4" and "ITU-T H.273 Table 4". "H.264 Table E-5" and "H.265 Table E.5" share the identical values.

    Parameters

    Returns number

Converts the value to the #GstVideoColorPrimaries. The colour primaries (ColourPrimaries) value is defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2". "H.264 Table E-3" and "H.265 Table E.3" share the identical values.

    Parameters

    • value: number

an ITU-T H.273 colour primaries value

    Returns VideoColorPrimaries

  • Converts #GstVideoColorPrimaries to the "colour primaries" (ColourPrimaries) value defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2". "H.264 Table E-3" and "H.265 Table E.3" share the identical values.

    Parameters

    Returns number

  • Compute the offset and scale values for each component of info. For each component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the range [0.0 .. 1.0].

    The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert the component values in range [0.0 .. 1.0] back to their representation in info and range.

    Parameters

    Returns [number[], number[]]
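For example, 8-bit limited-range luma spans 16..235, which corresponds to offset = 16 and scale = 219:

```javascript
// (c - offset) / scale maps a stored component value into [0.0 .. 1.0].
function normalize(c, offset, scale) {
    return (c - offset) / scale;
}

normalize(16, 16, 219);  // 0.0 (limited-range black)
normalize(235, 16, 219); // 1.0 (limited-range white)
```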

  • Converts a raw video buffer into the specified output caps.

    The output caps can be any raw video formats or any image formats (jpeg, png, ...).

    The width, height and pixel-aspect-ratio can also be specified in the output caps.

    Parameters

    • sample: Sample

      a #GstSample

    • to_caps: Gst.Caps

      the #GstCaps to convert to

    • timeout: number

      the maximum amount of time allowed for the processing.

    Returns Sample

  • Converts a raw video buffer into the specified output caps.

    The output caps can be any raw video formats or any image formats (jpeg, png, ...).

    The width, height and pixel-aspect-ratio can also be specified in the output caps.

callback will be called after the conversion completes, when an error occurs, or if the conversion doesn't finish within timeout. callback will always be called from the thread default %GMainContext, see g_main_context_get_thread_default(). If GLib before 2.22 is used, this will always be the global default main context.

    destroy_notify will be called after the callback was called and user_data is not needed anymore.

    Parameters

    • sample: Sample

      a #GstSample

    • to_caps: Gst.Caps

      the #GstCaps to convert to

    • timeout: number

      the maximum amount of time allowed for the processing.

    • callback: VideoConvertSampleCallback

      %GstVideoConvertSampleCallback that will be called after conversion.

    Returns void

  • video_crop_meta_api_get_type(): GType
  • video_event_is_force_key_unit(event: Gst.Event): boolean
  • Checks if an event is a force key unit event. Returns true for both upstream and downstream force key unit events.

    Parameters

    Returns boolean

  • video_event_new_downstream_force_key_unit(timestamp: number, stream_time: number, running_time: number, all_headers: boolean, count: number): Gst.Event
  • Creates a new downstream force key unit event. A downstream force key unit event can be sent down the pipeline to request downstream elements to produce a key unit. A downstream force key unit event must also be sent when handling an upstream force key unit event to notify downstream that the latter has been handled.

    To parse an event created by gst_video_event_new_downstream_force_key_unit() use gst_video_event_parse_downstream_force_key_unit().

    Parameters

    • timestamp: number

      the timestamp of the buffer that starts a new key unit

    • stream_time: number

      the stream_time of the buffer that starts a new key unit

    • running_time: number

      the running_time of the buffer that starts a new key unit

    • all_headers: boolean

      %TRUE to produce headers when starting a new key unit

    • count: number

      integer that can be used to number key units

    Returns Gst.Event

  • video_event_new_still_frame(in_still: boolean): Gst.Event
  • Creates a new Still Frame event. If in_still is %TRUE, then the event represents the start of a still frame sequence. If it is %FALSE, then the event ends a still frame sequence.

    To parse an event created by gst_video_event_new_still_frame() use gst_video_event_parse_still_frame().

    Parameters

    • in_still: boolean

      boolean value for the still-frame state of the event.

    Returns Gst.Event

  • video_event_new_upstream_force_key_unit(running_time: number, all_headers: boolean, count: number): Gst.Event
  • Creates a new upstream force key unit event. An upstream force key unit event can be sent to request upstream elements to produce a key unit.

    running_time can be set to request a new key unit at a specific running_time. If set to GST_CLOCK_TIME_NONE, upstream elements will produce a new key unit as soon as possible.

To parse an event created by gst_video_event_new_upstream_force_key_unit() use gst_video_event_parse_upstream_force_key_unit().

    Parameters

    • running_time: number

      the running_time at which a new key unit should be produced

    • all_headers: boolean

      %TRUE to produce headers when starting a new key unit

    • count: number

      integer that can be used to number key units

    Returns Gst.Event

  • Get timestamp, stream-time, running-time, all-headers and count in the force key unit event. See gst_video_event_new_downstream_force_key_unit() for a full description of the downstream force key unit event.

    running_time will be adjusted for any pad offsets of pads it was passing through.

    Parameters

    Returns [boolean, Gst.ClockTime, Gst.ClockTime, Gst.ClockTime, boolean, number]

  • video_event_parse_still_frame(event: Gst.Event): [boolean, boolean]
  • Parse a #GstEvent, identify if it is a Still Frame event, and return the still-frame state from the event if it is. If the event represents the start of a still frame, the in_still variable will be set to TRUE, otherwise FALSE. It is OK to pass NULL for the in_still variable in order to just check whether the event is a valid still-frame event.

    Create a still frame event using gst_video_event_new_still_frame()

    Parameters

    Returns [boolean, boolean]

  • video_event_parse_upstream_force_key_unit(event: Gst.Event): [boolean, Gst.ClockTime, boolean, number]
  • Get running-time, all-headers and count in the force key unit event. See gst_video_event_new_upstream_force_key_unit() for a full description of the upstream force key unit event.

    Create an upstream force key unit event using gst_video_event_new_upstream_force_key_unit()

    running_time will be adjusted for any pad offsets of pads it was passing through.

    Parameters

    Returns [boolean, Gst.ClockTime, boolean, number]

  • video_format_from_fourcc(fourcc: number): GstVideo.VideoFormat
  • Converts a FOURCC value into the corresponding #GstVideoFormat. If the FOURCC cannot be represented by #GstVideoFormat, #GST_VIDEO_FORMAT_UNKNOWN is returned.

    Parameters

    • fourcc: number

      a FOURCC value representing raw YUV video

    Returns GstVideo.VideoFormat
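As an illustration of how the FOURCC integers above are laid out (the GST_MAKE_FOURCC convention packs four ASCII characters, least-significant byte first), here is a minimal sketch. `makeFourcc` is a hypothetical helper, not part of these bindings:

```javascript
// Pack four ASCII characters into a FOURCC integer, least-significant
// byte first, mirroring the GST_MAKE_FOURCC macro convention.
function makeFourcc(code) {
  if (code.length !== 4) throw new Error('FOURCC must be 4 characters');
  return (
    (code.charCodeAt(0)) |
    (code.charCodeAt(1) << 8) |
    (code.charCodeAt(2) << 16) |
    (code.charCodeAt(3) << 24)
  ) >>> 0; // force unsigned 32-bit
}

// 'I420' packs to 0x30323449 ('I' = 0x49 ends up in the low byte).
console.log(makeFourcc('I420').toString(16)); // "30323449"
```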

  • video_format_from_masks(depth: number, bpp: number, endianness: number, red_mask: number, green_mask: number, blue_mask: number, alpha_mask: number): GstVideo.VideoFormat
  • Find the #GstVideoFormat for the given parameters.

    Parameters

    • depth: number

      the amount of bits used for a pixel

    • bpp: number

      the amount of bits used to store a pixel. This value is bigger than depth

    • endianness: number

      the endianness of the masks, #G_LITTLE_ENDIAN or #G_BIG_ENDIAN

    • red_mask: number

      the red mask

    • green_mask: number

      the green mask

    • blue_mask: number

      the blue mask

    • alpha_mask: number

      the alpha mask, or 0 if no alpha mask

    Returns GstVideo.VideoFormat
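The mask-matching idea behind this lookup can be sketched as a table search. The table entries below are illustrative stand-ins, not the bindings' actual format list:

```javascript
// Hypothetical sketch of matching pixel masks to a format name, in the
// spirit of gst_video_format_from_masks(). Table values are illustrative.
const FORMATS = [
  { name: 'RGB',  depth: 24, bpp: 24, red: 0xff0000,   green: 0x00ff00,   blue: 0x0000ff,   alpha: 0 },
  { name: 'RGBA', depth: 32, bpp: 32, red: 0xff000000, green: 0x00ff0000, blue: 0x0000ff00, alpha: 0xff },
];

function formatFromMasks(depth, bpp, red, green, blue, alpha) {
  const hit = FORMATS.find((f) =>
    f.depth === depth && f.bpp === bpp &&
    f.red === red && f.green === green && f.blue === blue && f.alpha === alpha);
  return hit ? hit.name : 'UNKNOWN'; // analogue of GST_VIDEO_FORMAT_UNKNOWN
}

console.log(formatFromMasks(24, 24, 0xff0000, 0x00ff00, 0x0000ff, 0)); // "RGB"
```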

  • video_format_to_fourcc(format: VideoFormat): number
  • Converts a #GstVideoFormat value into the corresponding FOURCC. Only a few YUV formats have corresponding FOURCC values. If format has no corresponding FOURCC value, 0 is returned.

    Parameters

    Returns number

  • video_frame_map(info: VideoInfo, buffer: Gst.Buffer, flags: Gst.MapFlags): [boolean, VideoFrame]
  • Use info and buffer to fill in the values of frame. frame is usually allocated on the stack, and you will pass the address to the #GstVideoFrame structure allocated on the stack; gst_video_frame_map() will then fill in the structures with the various video-specific information you need to access the pixels of the video buffer. You can then use accessor macros such as GST_VIDEO_FRAME_COMP_DATA(), GST_VIDEO_FRAME_PLANE_DATA(), GST_VIDEO_FRAME_COMP_STRIDE(), GST_VIDEO_FRAME_PLANE_STRIDE() etc. to get to the pixels.

      GstVideoFrame vframe;
      ...
      // set RGB pixels to black one at a time
      if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
        guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&vframe, 0);
        guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (&vframe, 0);
        guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (&vframe, 0);
    ...
    // set RGB pixels to black one at a time
    if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
    guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (vframe, 0);
    guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (vframe, 0);
    guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (vframe, 0);

        for (h = 0; h < height; ++h) {
          for (w = 0; w < width; ++w) {
            guint8 *pixel = pixels + h * stride + w * pixel_stride;

            memset (pixel, 0, pixel_stride);
          }
        }

        gst_video_frame_unmap (&vframe);
      }
      ...

    All video planes of buffer will be mapped and the pointers will be set in frame->data.

    The purpose of this function is to make it easy for you to get to the video pixels in a generic way, without you having to worry too much about details such as whether the video data is allocated in one contiguous memory chunk or multiple memory chunks (e.g. one for each plane); or if custom strides and custom plane offsets are used or not (as signalled by GstVideoMeta on each buffer). This function will just fill the #GstVideoFrame structure with the right values and if you use the accessor macros everything will just work and you can access the data easily. It also maps the underlying memory chunks for you.

    Parameters

    Returns [boolean, VideoFrame]

  • video_frame_map_id(info: VideoInfo, buffer: Gst.Buffer, id: number, flags: Gst.MapFlags): [boolean, VideoFrame]
  • Use info and buffer to fill in the values of frame with the video frame information of frame id.

    When id is -1, the default frame is mapped. When id != -1, this function will return %FALSE when there is no GstVideoMeta with that id.

    All video planes of buffer will be mapped and the pointers will be set in frame->data.

    Parameters

    Returns [boolean, VideoFrame]

  • video_gl_texture_upload_meta_api_get_type(): GType
  • video_guess_framerate(duration: number): [boolean, number, number]
  • Given the nominal duration of one video frame, this function will check some standard framerates for a close match (within 0.1%) and return one if possible.

    It will calculate an arbitrary framerate if no close match was found, and return %FALSE.

    It returns %FALSE if a duration of 0 is passed.

    Parameters

    • duration: number

      Nominal duration of one frame

    Returns [boolean, number, number]
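The matching step described above (compare the frame rate implied by a nominal duration against common rates, within 0.1%) can be sketched as follows. The rate table and tolerance handling are a simplified illustration, not the bindings' actual implementation:

```javascript
// Sketch of the matching logic behind gst_video_guess_framerate():
// compare the frame rate implied by a nominal duration (nanoseconds)
// against common rates, accepting a match within 0.1%.
const STANDARD_RATES = [
  [24000, 1001], [24, 1], [25, 1], [30000, 1001], [30, 1],
  [50, 1], [60000, 1001], [60, 1],
];

function guessFramerate(durationNs) {
  if (durationNs === 0) return null; // a duration of 0 yields no guess
  const fps = 1e9 / durationNs;
  for (const [n, d] of STANDARD_RATES) {
    if (Math.abs(fps - n / d) / (n / d) < 0.001) return [n, d];
  }
  return null; // the real function falls back to an arbitrary framerate
}

console.log(guessFramerate(33366667)); // NTSC-ish duration -> [ 30000, 1001 ]
```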

  • video_make_raw_caps(formats: VideoFormat[] | null): Gst.Caps
  • Return a generic raw video caps for formats defined in formats. If formats is %NULL returns a caps for all the supported raw video formats, see gst_video_formats_raw().

    Parameters

    Returns Gst.Caps

  • video_make_raw_caps_with_features(formats: VideoFormat[] | null, features: Gst.CapsFeatures | null): Gst.Caps
  • Return a generic raw video caps for formats defined in formats with features features. If formats is %NULL returns a caps for all the supported video formats, see gst_video_formats_raw().

    Parameters

    Returns Gst.Caps

  • video_meta_api_get_type(): GType
  • video_meta_transform_scale_get_quark(): Quark
  • video_multiview_get_doubled_height_modes(): any
  • video_multiview_get_doubled_size_modes(): any
  • video_multiview_get_doubled_width_modes(): any
  • video_multiview_get_mono_modes(): any
  • video_multiview_get_unpacked_modes(): any
  • video_multiview_guess_half_aspect(mv_mode: VideoMultiviewMode, width: number, height: number, par_n: number, par_d: number): boolean
  • video_overlay_composition_meta_api_get_type(): GType
  • video_overlay_composition_meta_get_info(): Gst.MetaInfo
  • video_overlay_install_properties(oclass: GObject.ObjectClass, last_prop_id: number): void
  • This helper shall be used by classes implementing the #GstVideoOverlay interface that want the render rectangle to be controllable using properties. This helper will install the "render-rectangle" property into the class.

    Parameters

    • oclass: GObject.ObjectClass

      The class on which the properties will be installed

    • last_prop_id: number

      The first free property ID to use

    Returns void

  • video_overlay_set_property(object: GObject.Object, last_prop_id: number, property_id: number, value: any): boolean
  • This helper shall be used by classes implementing the #GstVideoOverlay interface that want the render rectangle to be controllable using properties. This helper will parse and set the render rectangle calling gst_video_overlay_set_render_rectangle().

    Parameters

    • object: GObject.Object

      The instance on which the property is set

    • last_prop_id: number

      The highest property ID.

    • property_id: number

      The property ID

    • value: any

      The #GValue to be set

    Returns boolean

  • video_region_of_interest_meta_api_get_type(): GType
  • video_tile_get_index(mode: VideoTileMode, x: number, y: number, x_tiles: number, y_tiles: number): number
  • Get the tile index of the tile at coordinates x and y in the tiled image of x_tiles by y_tiles.

    Use this method when mode is of type %GST_VIDEO_TILE_TYPE_INDEXED.

    Parameters

    • mode: VideoTileMode

      a #GstVideoTileMode

    • x: number

      x coordinate

    • y: number

      y coordinate

    • x_tiles: number

      number of horizontal tiles

    • y_tiles: number

      number of vertical tiles

    Returns number
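For the simplest (row-major, linear) layout, the tile index computation reduces to the sketch below. Real tile modes can use other orderings (e.g. Z-order), so this is an illustration of the indexed case only, and `tileIndexLinear` is a hypothetical helper:

```javascript
// Minimal sketch of tile indexing for a row-major (linear) layout;
// gst_video_tile_get_index() also supports more exotic tile orderings.
function tileIndexLinear(x, y, xTiles, yTiles) {
  if (x >= xTiles || y >= yTiles) throw new RangeError('tile out of bounds');
  return y * xTiles + x;
}

// In a 4x3 tile grid, the tile at (x=2, y=1) is index 6 (the 7th tile).
console.log(tileIndexLinear(2, 1, 4, 3)); // 6
```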

  • video_time_code_meta_api_get_type(): GType
  • Convert val to its gamma decoded value. This is the inverse operation of gst_video_color_transfer_encode().

    For a non-linear value L' in the range [0..1], conversion to the linear L is in general performed with a power function like:

      L = L' ^ gamma

    Depending on func, different formulas might be applied. Some formulas encode a linear segment in the lower range.

    Parameters

    • func: VideoTransferFunction

      a #GstVideoTransferFunction

    • val: number

      a value

    Returns number

  • Convert val to its gamma encoded value.

    For a linear value L in the range [0..1], conversion to the non-linear (gamma encoded) L' is in general performed with a power function like:

      L' = L ^ (1 / gamma)

    Depending on func, different formulas might be applied. Some formulas encode a linear segment in the lower range.

    Parameters

    • func: VideoTransferFunction

      a #GstVideoTransferFunction

    • val: number

      a value

    Returns number
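To make the encode/decode pair concrete, here is a sketch using sRGB-style constants, a well-known transfer function with exactly the kind of linear segment in the lower range mentioned above. The constants are those of sRGB, shown for illustration; they are not what every GstVideoTransferFunction uses:

```javascript
// Sketch of a transfer function with a linear low-range segment
// (sRGB-style constants): encode maps linear L to non-linear L',
// decode is its inverse.
function srgbEncode(L) {
  return L <= 0.0031308 ? 12.92 * L : 1.055 * Math.pow(L, 1 / 2.4) - 0.055;
}

function srgbDecode(Lp) {
  return Lp <= 0.04045 ? Lp / 12.92 : Math.pow((Lp + 0.055) / 1.055, 2.4);
}

// Round-tripping a value stays within floating-point tolerance.
console.log(Math.abs(srgbDecode(srgbEncode(0.5)) - 0.5) < 1e-9); // true
```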

  • video_transfer_function_from_iso(value: number): VideoTransferFunction
  • Converts the value to the #GstVideoTransferFunction. The transfer characteristics (TransferCharacteristics) value is defined by "ISO/IEC 23001-8 Section 7.2 Table 3" and "ITU-T H.273 Table 3". "H.264 Table E-4" and "H.265 Table E.4" share the identical values.

    Parameters

    • value: number

      an ITU-T H.273 transfer characteristics value

    Returns VideoTransferFunction

  • video_transfer_function_is_equivalent(from_func: VideoTransferFunction, from_bpp: number, to_func: VideoTransferFunction, to_bpp: number): boolean
  • Returns whether from_func and to_func are equivalent. There are cases (e.g. BT601, BT709, and BT2020_10) where several functions are functionally identical. In these cases, when doing conversion, we should consider them as equivalent. Also, BT2020_12 is the same as the aforementioned three for less than 12 bits per pixel.

    Parameters

    • from_func: VideoTransferFunction

      #GstVideoTransferFunction to convert from

    • from_bpp: number

      bits per pixel to convert from

    • to_func: VideoTransferFunction

      #GstVideoTransferFunction to convert into

    • to_bpp: number

      bits per pixel to convert into

    Returns boolean
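The equivalence rule described above (BT601, BT709 and BT2020_10 share the same curve, and BT2020_12 joins that class below 12 bits per pixel) can be sketched by normalizing to a representative before comparing. The string names stand in for the GstVideoTransferFunction enum and are illustrative only:

```javascript
// Sketch of the equivalence classes described by
// gst_video_transfer_function_is_equivalent().
const SHARED = new Set(['BT601', 'BT709', 'BT2020_10']);

function effectiveFunc(func, bpp) {
  if (SHARED.has(func)) return 'BT709';           // class representative
  if (func === 'BT2020_12' && bpp < 12) return 'BT709';
  return func;
}

function isEquivalent(fromFunc, fromBpp, toFunc, toBpp) {
  return effectiveFunc(fromFunc, fromBpp) === effectiveFunc(toFunc, toBpp);
}

console.log(isEquivalent('BT601', 8, 'BT709', 8)); // true
```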

  • video_transfer_function_to_iso(func: VideoTransferFunction): number
  • Converts #GstVideoTransferFunction to the "transfer characteristics" (TransferCharacteristics) value defined by "ISO/IEC 23001-8 Section 7.2 Table 3" and "ITU-T H.273 Table 3". "H.264 Table E-4" and "H.265 Table E.4" share the identical values.

    Parameters

    Returns number
