moq-encoder-player

This project provides a minimal, in-browser implementation of a live video and audio encoder and a video / audio player based on MOQT draft-04. The goal is to provide a minimal live platform implementation that helps with learning about low-latency trade-offs and facilitates experimentation.

It is NOT optimized for performance / production at all; the primary goal is experimentation and learning.

Fig1: Main block diagram

For the server/relay side we have used moxygen.

Note: You need to be careful and check that the protocol versions implemented by this code and by moxygen match

Packager

It uses a variation of LOC as the media packager.

Encoder

The encoder implements the MOQT publisher role. It is based on WebCodecs and AudioContext; see the block diagram in Fig3

Fig3: Encoder block diagram

Note: We have used WebTransport, so the underlying transport is QUIC (QUIC streams, to be more accurate)

Encoder - Config params

Video encoding config:

// Video encoder config
const videoEncoderConfig = {
    encoderConfig: {
        codec: 'avc1.42001e', // Baseline = 66, level 30 (see: https://en.wikipedia.org/wiki/Advanced_Video_Coding)
        width: 320,
        height: 180,
        bitrate: 1_000_000, // 1 Mbps
        framerate: 30,
        latencyMode: 'realtime', // Sends 1 chunk per frame
    },
    encoderMaxQueueSize: 2,
    keyframeEvery: 60,
};
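For reference, this is roughly how such a config maps onto the WebCodecs VideoEncoder API. This is a minimal sketch, not the project's actual code (handleVideoChunk and frameNumber are illustrative names):

// Minimal sketch: wiring videoEncoderConfig into a WebCodecs VideoEncoder
let frameNumber = 0;

const videoEncoder = new VideoEncoder({
    output: (chunk, metadata) => handleVideoChunk(chunk, metadata), // 1 chunk per frame in realtime mode
    error: (e) => console.error('VideoEncoder error: ', e),
});
videoEncoder.configure(videoEncoderConfig.encoderConfig);

function encodeVideoFrame(vFrame) {
    // Drop the frame if the encoder is falling behind (encoderMaxQueueSize)
    if (videoEncoder.encodeQueueSize > videoEncoderConfig.encoderMaxQueueSize) {
        vFrame.close();
        return;
    }
    // Force a keyframe every keyframeEvery frames
    const insertKeyframe = (frameNumber % videoEncoderConfig.keyframeEvery) === 0;
    videoEncoder.encode(vFrame, { keyFrame: insertKeyframe });
    vFrame.close();
    frameNumber++;
}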

Audio encoder config:

// Audio encoder config
const audioEncoderConfig = {
    encoderConfig: {
        codec: 'opus', // AAC NOT implemented YET (it is in their roadmap)
        sampleRate: 48000, // To fill later
        numberOfChannels: 1, // To fill later
        bitrate: 32000,
        opus: { // See https://www.w3.org/TR/webcodecs-opus-codec-registration/
            frameDuration: 10000 // In us. Lower latency than default = 20000
        }
    },
    encoderMaxQueueSize: 10,
};

Muxer config:

const muxerSenderConfig = {
    urlHostPort: '',
    urlPath: '',

    moqTracks: {
        "audio": {
            id: 0,
            namespace: "vc",
            name: "aaa/audio",
            maxInFlightRequests: 100,
            isHipri: true,
            authInfo: "secret"
        },
        "video": {
            id: 1,
            namespace: "vc",
            name: "aaa/video",
            maxInFlightRequests: 50,
            isHipri: false,
            authInfo: "secret"
        }
    },
};
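The urlHostPort and urlPath fields are combined into the WebTransport endpoint URL. As a minimal sketch of how the sender opens the session (assuming an https endpoint; MOQT control messages such as SETUP / ANNOUNCE / SUBSCRIBE travel on a bidirectional stream):

// Minimal sketch: opening the WebTransport session used by the muxer / sender
const url = `https://${muxerSenderConfig.urlHostPort}${muxerSenderConfig.urlPath}`;
const wt = new WebTransport(url);
await wt.ready; // QUIC connection + WebTransport session established

// MOQT uses a bidirectional control stream (SETUP, ANNOUNCE, SUBSCRIBE, ...)
const controlStream = await wt.createBidirectionalStream();
const controlWriter = controlStream.writable.getWriter();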

src_encoder/index.html

Main encoder webpage; it also glues all the encoder pieces together

utils/TimeBufferChecker

Stores the frame timestamps and the wall-clock generation time of the raw generated frames. That allows us to keep track of each frame's / chunk's creation time (wall clock)
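A minimal sketch of the idea (class shape and method names are illustrative, not the actual implementation):

// Minimal sketch of the TimeBufferChecker idea
class TimeBufferChecker {
    constructor() {
        this.elements = []; // { ts: media timestamp, clkms: wall clock in ms }
    }
    addItem(ts) {
        this.elements.push({ ts, clkms: Date.now() });
    }
    // Returns (and discards everything up to) the wall clock stored for a timestamp
    getItemByTs(ts) {
        const i = this.elements.findIndex((e) => e.ts >= ts);
        if (i < 0) return undefined;
        const item = this.elements[i];
        this.elements = this.elements.slice(i + 1);
        return item;
    }
}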

capture/v_capture.js

WebWorker that waits for the next RGB or YUV video frame from the capture device, augments it by adding the wall clock, and sends it via postMessage to the video encoder
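A minimal sketch of that capture loop, assuming the camera MediaStreamTrack has been transferred to the worker and MediaStreamTrackProcessor is used to read frames from it:

// Minimal sketch of the video capture loop inside the worker
async function videoCaptureLoop(videoTrack) {
    const processor = new MediaStreamTrackProcessor({ track: videoTrack });
    const reader = processor.readable.getReader();
    while (true) {
        const { value: vFrame, done } = await reader.read();
        if (done) break;
        // Attach the wall-clock capture time and transfer the frame
        // (VideoFrame is transferable, so no pixel copy is needed)
        self.postMessage({ type: 'vframe', clkms: Date.now(), vFrame }, [vFrame]);
    }
}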

capture/a_capture.js

WebWorker that receives the audio PCM frame (a few ms, ~10ms to 25ms of audio samples) from the capture device, augments it by adding the wall clock, and finally sends a copy of it via postMessage to the audio encoder

encode/v_encoder.js

WebWorker that encodes RGB or YUV video frames into encoded video chunks

Note: We configure VideoEncoder in realtime latency mode, so it delivers one chunk per video frame

encode/a_encoder.js

WebWorker that encodes PCM audio frames (samples) into encoded audio chunks

Note: The opus.frameDuration setting helps keep the encoding latency low
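A minimal sketch of wiring the audio config above into the WebCodecs AudioEncoder (handler names are illustrative):

// Minimal sketch: wiring audioEncoderConfig into a WebCodecs AudioEncoder
const audioEncoder = new AudioEncoder({
    output: (chunk, metadata) => handleAudioChunk(chunk, metadata),
    error: (e) => console.error('AudioEncoder error: ', e),
});
audioEncoder.configure(audioEncoderConfig.encoderConfig);

function encodeAudioFrame(aFrame) { // aFrame is a WebCodecs AudioData
    // Drop samples rather than grow the latency if the encoder is overloaded
    if (audioEncoder.encodeQueueSize > audioEncoderConfig.encoderMaxQueueSize) {
        aFrame.close();
        return;
    }
    audioEncoder.encode(aFrame);
    aFrame.close();
}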

packager/loc_packager.js

Implements the LOC packager format

Fig4: LOC header structure

sender/moq_sender.js

WebWorker that implements MOQT and sends video and audio packets (see loc_packager.js) to the server / relay, following MOQT and a variation of LOC
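A minimal sketch of the send path, assuming (per MOQT draft-04) that each media object travels on its own unidirectional QUIC stream; packagedChunk stands for a serialized LOC object:

// Minimal sketch: one MOQT object per unidirectional WebTransport stream.
// `wt` is the open WebTransport session, `packagedChunk` a Uint8Array
// produced by the LOC packager (object header + payload)
async function sendObject(wt, packagedChunk) {
    const uniStream = await wt.createUnidirectionalStream();
    const writer = uniStream.getWriter();
    await writer.write(packagedChunk);
    await writer.close(); // FIN: the relay sees a complete object
}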

Player

The player implements the MOQT subscriber role. It uses WebCodecs, AudioContext / AudioWorklet, SharedArrayBuffer, and Atomics

Fig5: Player block diagram

Audio video sync strategy

To keep the audio and video in sync, the player applies a strategy built on the modules described in the following sections.

receiver/moq_demuxer_downloader.js

WebWorker that implements MOQT and extracts video and audio packets (see loc_packager.js) from the server / relay, following MOQT and a variation of LOC
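A minimal sketch of the receive path (LOC parsing details are in loc_packager.js):

// Minimal sketch: accepting incoming unidirectional streams, one per object
async function receiveLoop(wt) {
    const streamsReader = wt.incomingUnidirectionalStreams.getReader();
    while (true) {
        const { value: uniStream, done } = await streamsReader.read();
        if (done) break;
        readObject(uniStream); // do not await: streams can arrive concurrently
    }
}

async function readObject(uniStream) {
    const chunks = [];
    const reader = uniStream.getReader();
    while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        chunks.push(value);
    }
    // ...concatenate `chunks`, parse the LOC header, and hand the
    // packet to the jitter buffer
}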

utils/jitter_buffer.js

Since we do not have any guarantee that QUIC streams are delivered in order, we need to reorder the packets before sending them to the decoder. This is the function of the de-jitter buffer. We create one instance per track, in this case one for audio and one for video
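A minimal sketch of the de-jitter idea, assuming every packet carries a sequence id (the buffer size and names here are illustrative):

// Minimal de-jitter sketch: hold packets until the target size is reached,
// then release them in sequence-id order. One instance per track.
class JitterBuffer {
    constructor(targetSize) {
        this.targetSize = targetSize; // packets held before we start releasing
        this.elements = []; // { seqId, packet }
    }
    addItem(seqId, packet) {
        this.elements.push({ seqId, packet });
        this.elements.sort((a, b) => a.seqId - b.seqId);
        if (this.elements.length <= this.targetSize) return null;
        return this.elements.shift(); // lowest seqId: ready for the decoder
    }
}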

decode/audio_decoder.js

WebWorker that decodes audio chunks and sends the resulting audio PCM samples to the audio renderer. AudioDecoder does NOT track timestamps on decoded data: it just uses the first one sent and, for every decoded audio sample, adds 1/fs (the sample time). That means that if we drop an audio packet, those timestamps are collapsed, creating A/V out of sync. To work around that problem we accumulate the duration of all the audio gaps in timestampOffset, and we publish that value so other elements in the pipeline have an accurate idea of the live head position
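A minimal sketch of that gap accounting (names are illustrative; WebCodecs timestamps are in microseconds):

// Minimal sketch of the audio gap accounting
let lastChunkEndTs = undefined; // timestamp + duration of the last chunk seen
let timestampOffset = 0;        // accumulated duration of all lost audio

function beforeDecode(chunk) { // chunk is an EncodedAudioChunk
    if (lastChunkEndTs !== undefined && chunk.timestamp > lastChunkEndTs) {
        // A chunk was lost: AudioDecoder output timestamps will collapse the
        // gap, so remember how much audio time went missing
        timestampOffset += chunk.timestamp - lastChunkEndTs;
    }
    lastChunkEndTs = chunk.timestamp + chunk.duration;
    audioDecoder.decode(chunk); // the WebCodecs AudioDecoder instance
}
// Published live head position = last decoded timestamp + timestampOffset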

render/audio_circular_buffer.js

Leverages SharedArrayBuffer and Atomics to implement the mechanisms needed to share audio data and state across threads (the decoder worker and the audio renderer)
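A minimal sketch of the underlying pattern, a ring buffer of samples plus atomically updated read / write indexes (sizes and layout are illustrative):

// Minimal sketch of a shared audio ring buffer
const CAPACITY = 48000; // 1s of mono audio at 48kHz
const sampleSab = new SharedArrayBuffer(CAPACITY * Float32Array.BYTES_PER_ELEMENT);
const stateSab = new SharedArrayBuffer(2 * Int32Array.BYTES_PER_ELEMENT);
const samples = new Float32Array(sampleSab);
const state = new Int32Array(stateSab); // [0] = write index, [1] = read index

// Producer side (decoder thread)
function pushSamples(pcm) {
    const w = Atomics.load(state, 0);
    for (let i = 0; i < pcm.length; i++) {
        samples[(w + i) % CAPACITY] = pcm[i];
    }
    Atomics.store(state, 0, (w + pcm.length) % CAPACITY);
}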

render/source_buffer_worklet.js

AudioWorkletProcessor that implements an audio source worklet, sending audio samples to the renderer.
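A minimal skeleton of such a source worklet, consuming the shared ring buffer sketched in the previous section (details are illustrative):

// Minimal skeleton of an audio source AudioWorkletProcessor
class SourceBufferWorklet extends AudioWorkletProcessor {
    constructor(options) {
        super();
        // SharedArrayBuffers handed over at construction time
        this.samples = new Float32Array(options.processorOptions.sampleSab);
        this.state = new Int32Array(options.processorOptions.stateSab);
    }
    process(inputs, outputs) {
        const out = outputs[0][0]; // mono: first channel of the first output
        const r = Atomics.load(this.state, 1);
        for (let i = 0; i < out.length; i++) {
            out[i] = this.samples[(r + i) % this.samples.length];
        }
        Atomics.store(this.state, 1, (r + out.length) % this.samples.length);
        return true; // keep the node alive
    }
}
registerProcessor('source-buffer-worklet', SourceBufferWorklet);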

decode/video_decoder.js

WebWorker that decodes video chunks and sends the decoded data (YUV or RGB) to the next stage (video_render_buffer.js)

render/video_render_buffer.js

Buffer that stores decoded video frames

Latency measurement based on video data

We can activate the option “Activate latency tracker (overlays data on video)” in the encoder (CPU consuming). This option stamps the encoder's epoch-ms clock onto each video frame as soon as the frame is received from the camera, replacing the first video lines with that clock information. The clock is encoded in a way that is resilient to video processing / encoding / decoding operations (see ./overlay_processor/overlay_encoder.js and ./overlay_processor/overlay_decoder.js in the code)

The player decodes that info from every frame, and when it is about to display the frame it calculates the latency as: latency_ms = now_in_ms - frame_capture_in_ms.

Note: This assumes the clocks of the encoder and the decoder are in sync. That is always true if you use the same computer to encode and decode

Legacy latency measurement

Note: The encoder and player clocks have to be in sync for this metric to be accurate. If you use the same computer as encoder & player, the metric should be pretty accurate

Local testing (localhost)

git clone git@github.com:facebookexperimental/moq-encoder-player.git
cd moq-encoder-player
./start-http-server-cross-origin-isolated.py

Note: You need to use this script to serve the player because it adds the cross-origin isolation headers (COOP / COEP) that SharedArrayBuffer requires

ENJOY YOUR POCing!!! :-)

Fig6: Encoder UI

Fig7: Player UI

Note: This is experimental code that we plan to evolve quickly, so these screenshots could be a bit outdated

TODO

License

moq-encoder-player is released under the MIT License.