Mediacapture-transform Timing Model #88

@aboba

Description

This is an overall tracking issue for timing-related issues in mediacapture-transform.

Questions:

  1. What timing information is provided by various sources (capture, canvas, etc.)?
  2. What VideoFrame/AudioData attributes correspond to that timing information?
  3. What timing attributes are maintained between Video/AudioFrames and encodedChunks (encoder) and between encodedChunks and Video/AudioFrames (decoder)?
  4. What is done with this timing information in the Web platform? For example, how are MediaStreamTracks (MSTs) synchronized within a MediaStream?
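As background for question 3, WebCodecs carries a VideoFrame's timestamp through to the EncodedVideoChunk produced for it, which lets an application re-associate its own timing metadata across the encode boundary. A minimal sketch (the helper names and map-based bookkeeping are illustrative, not part of any spec):

```javascript
// Sketch: re-associating per-frame timing metadata across a WebCodecs
// encoder. The encoder copies each input VideoFrame's timestamp
// (microseconds) onto the EncodedVideoChunk it emits, so the timestamp
// can serve as a lookup key.

const pendingTimings = new Map(); // timestamp (µs) -> capture metadata

// Call before passing the frame to VideoEncoder.encode().
function rememberTiming(frame, meta) {
  pendingTimings.set(frame.timestamp, meta);
}

// Call from the encoder's output callback; chunk.timestamp matches
// the source frame's timestamp.
function lookupTiming(chunk) {
  const meta = pendingTimings.get(chunk.timestamp);
  pendingTimings.delete(chunk.timestamp);
  return meta;
}
```

In a page, `rememberTiming` would run in a TransformStream over the track's frames and `lookupTiming` in the `VideoEncoder` output callback.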

Timing Model discussed at the November Virtual Interim

The VideoFrameCallbackMetadata dictionary in the requestVideoFrameCallback (rVFC) specification includes receiveTime, captureTime, rtpTimestamp, and mediaTime.
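As an illustration of how those fields relate, the sketch below derives per-frame latencies from rVFC metadata. The field names follow the rVFC spec; the helper itself is hypothetical, and the fields are only present for the sources the spec defines them for (e.g. receiveTime for remote WebRTC sources):

```javascript
// Hypothetical helper: derive latencies from rVFC metadata.
// `now` and the metadata timestamps are DOMHighResTimeStamp values (ms).
function frameLatencies(now, metadata) {
  return {
    // Capture to callback; captureTime is present for local capture,
    // and for remote frames when absolute capture time is available.
    captureToCallbackMs:
      metadata.captureTime !== undefined ? now - metadata.captureTime : null,
    // Network receive to callback (remote WebRTC sources only).
    receiveToCallbackMs:
      metadata.receiveTime !== undefined ? now - metadata.receiveTime : null,
  };
}

// In a page this would be driven by rVFC:
// video.requestVideoFrameCallback((now, metadata) => {
//   console.log(frameLatencies(now, metadata),
//               metadata.mediaTime, metadata.rtpTimestamp);
// });
```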

Resolution: “file specific issues on specific specs”

Related: Issue 601: Expose in VideoFrame

Use case: https://www.w3.org/TR/webrtc-nv-use-cases/#auction
