Design a Live Video Streaming Platform

Today, we’re going to design a Live Video Streaming Platform and cover the following components:

  1. Real-time video ingestion

    1. Routing

    2. Transcoding

  2. Video Delivery

    1. Distribution

    2. Playback

Below is a high-level diagram of the whole system:

Video Ingestion

The first stage in the live streaming process involves the capture and ingestion of live video content. At the production site, live video feeds are captured using cameras and then encoded into a digital format suitable for streaming over the internet or over a direct connection to the data center.

To support real-time streaming of 1-second video fragments, the encoder must be configured to segment the video into 1-second chunks.

The encoded video fragments are then pushed to the core datacenter responsible for transcoding the video. This push can be facilitated using RTMP (Real-Time Messaging Protocol).
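
As a concrete sketch of this step, the following assumes ffmpeg is available and uses a hypothetical ingest URL; the key idea is to force a keyframe exactly once per second so the downstream packager can cut clean 1-second fragments:

```python
import subprocess

# Hypothetical ingest endpoint; a real URL and stream key come from the platform.
INGEST_URL = "rtmp://ingest.example.com/live/stream-key"

FPS = 30  # capture frame rate

cmd = [
    "ffmpeg",
    "-i", "input.mp4",        # stand-in for the live camera feed
    "-c:v", "libx264",        # AVC encode
    "-preset", "veryfast",
    "-g", str(FPS),           # GOP length = one second of frames
    "-keyint_min", str(FPS),  # never emit keyframes more often than that
    "-sc_threshold", "0",     # disable scene-cut keyframe insertion
    "-c:a", "aac",
    "-f", "flv",              # RTMP carries the FLV container
    INGEST_URL,
]
subprocess.run(cmd, check=True)
```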

Routing

The Media Proxy acts as the data gateway. It operates across all Points of Presence (PoPs). It processes live video streams from broadcasters by extracting key stream properties and directing them to appropriate core regions.

The routing service is a configurable, stateful service designed for rule-based routing. For instance, it can ensure streams from specific channels are always sent to a designated origin, catering to unique processing needs, or direct multiple related streams to a single origin for scenarios like premium broadcasts with primary and backup feeds.
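
A minimal sketch of such a rule-based router, with hypothetical rule and stream-property names, might look like this; the stateful part is remembering prior decisions so that related feeds of one broadcast stick to the same origin:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """Route streams whose channel matches (or any channel if None) to `origin`."""
    origin: str
    channel: str | None = None

@dataclass
class Router:
    rules: list[Rule] = field(default_factory=list)
    default_origin: str = "core-us-east"
    # Stateful: primary and backup feeds of one broadcast must land on the
    # same origin, so remember the decision per broadcast.
    assignments: dict[str, str] = field(default_factory=dict)

    def route(self, broadcast_id: str, channel: str) -> str:
        if broadcast_id in self.assignments:  # keep related feeds together
            return self.assignments[broadcast_id]
        origin = next(
            (r.origin for r in self.rules if r.channel in (None, channel)),
            self.default_origin,
        )
        self.assignments[broadcast_id] = origin
        return origin

router = Router(rules=[Rule(origin="core-eu-west", channel="premium-sports")])
print(router.route("bcast-42", "premium-sports"))         # core-eu-west
print(router.route("bcast-42", "premium-sports-backup"))  # same origin, core-eu-west
```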

Redundant Network

Popular events use private, guaranteed-bandwidth paths with geographic redundancy, employing dual fiber-optic routes or a combination of fiber-optic and satellite backup.

To support high-quality broadcasts, we have to use dedicated broadcast hardware with managed encoders, connected to our data centers via dedicated links.

We should also have a system to seamlessly integrate primary and secondary streams into our infrastructure.
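
One simple way to sketch that integration (names are hypothetical; a real system would also re-align timestamps across feeds) is a monitor that serves the primary feed and fails over when it stalls:

```python
import time

STALL_THRESHOLD_S = 2.0  # assume the feed is down after ~2 missed 1-second fragments

class FeedSelector:
    """Prefer the primary feed; fall back to the backup if the primary stalls."""

    def __init__(self):
        now = time.monotonic()
        self.last_seen = {"primary": now, "backup": now}

    def on_fragment(self, feed: str) -> None:
        # Called whenever a fragment arrives on either feed.
        self.last_seen[feed] = time.monotonic()

    def active_feed(self) -> str:
        if time.monotonic() - self.last_seen["primary"] < STALL_THRESHOLD_S:
            return "primary"
        return "backup"
```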

Processing and Transcoding

Once we receive 1-second video fragments, the core datacenter's primary responsibility is to transcode these fragments into multiple resolutions and codecs.

This step allows users to enjoy the best possible viewing experience regardless of their device or bandwidth limitations. The transcoding process involves converting the original video fragments into various resolutions, such as 1080p, 720p, 480p, and 360p, using codecs such as AVC (Advanced Video Coding); packaging formats like HLS and DASH are applied afterward, at the distribution stage.

This transcoding process is resource-intensive and requires a dedicated transcoding cluster within the datacenter, equipped with powerful GPUs. The cluster should support auto-scaling to handle varying loads dynamically, ensuring that 1-second video fragments are transcoded swiftly and made available for distribution without delay.
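
As a sketch of the fan-out (the rendition ladder and transcode_fragment are placeholders for the real GPU-backed jobs), each incoming 1-second fragment is transcoded into every rendition in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical rendition ladder: (name, width, height, video bitrate).
RENDITIONS = [
    ("1080p", 1920, 1080, "5000k"),
    ("720p",  1280,  720, "2800k"),
    ("480p",   854,  480, "1400k"),
    ("360p",   640,  360,  "800k"),
]

def transcode_fragment(fragment: bytes, name, width, height, bitrate) -> bytes:
    """Placeholder for the real GPU transcode (e.g. an ffmpeg/NVENC invocation)."""
    ...

def process_fragment(fragment: bytes) -> dict[str, bytes]:
    # Fan the 1-second fragment out to all renditions in parallel; the fragment
    # is only publishable once every rendition has finished.
    with ThreadPoolExecutor(max_workers=len(RENDITIONS)) as pool:
        futures = {
            name: pool.submit(transcode_fragment, fragment, name, w, h, br)
            for name, w, h, br in RENDITIONS
        }
        return {name: f.result() for name, f in futures.items()}
```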

Video Distribution

Once transcoded, the video fragments are then prepared for distribution to the end-users. This involves packaging the fragments into adaptive bitrate streaming formats like HLS or MPEG-DASH, which allow the video player on the user's device to select the appropriate stream based on current network conditions.
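
For HLS, for example, the packager maintains a live media playlist listing the most recent 1-second fragments. A minimal sketch of generating one (segment URLs are hypothetical):

```python
def live_media_playlist(seq: int, segments: list[str], target_duration: int = 1) -> str:
    """Render a sliding-window HLS media playlist for 1-second segments."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{seq}",
    ]
    for url in segments:
        lines.append("#EXTINF:1.0,")  # every segment is one second long
        lines.append(url)
    return "\n".join(lines) + "\n"

# Sliding window of the three most recent fragments (hypothetical URLs);
# omitting #EXT-X-ENDLIST is what marks the stream as live.
print(live_media_playlist(1042, ["seg1042.ts", "seg1043.ts", "seg1044.ts"]))
```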

The packaged video streams are distributed through a Content Delivery Network (CDN), which caches the content at edge locations closer to the users, reducing latency and improving the streaming experience.

The CDN ensures the scalability of the video streaming platform: it offloads traffic from the core datacenter and provides a geographically distributed network that serves video content from locations nearest to the end-users. This setup minimizes the distance the data travels, reducing latency and buffering times.
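
At the edge, live content is cacheable with very short lifetimes: fragments are immutable once published, while playlists change every second. A toy sketch of that policy (the TTL values are assumptions, not CDN defaults):

```python
import time

# Assumed TTLs: fragments never change once published; playlists refresh each second.
TTL_S = {"fragment": 60.0, "playlist": 1.0}

class EdgeCache:
    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # callable(url) -> bytes
        self.store: dict[str, tuple[float, bytes]] = {}

    def get(self, url: str, kind: str) -> bytes:
        now = time.monotonic()
        hit = self.store.get(url)
        if hit and now - hit[0] < TTL_S[kind]:
            return hit[1]                    # served from the edge, no origin trip
        body = self.fetch_from_origin(url)   # cache miss: one request to the origin
        self.store[url] = (now, body)
        return body
```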

Playback

On the user's side, a video player capable of handling adaptive bitrate streaming formats (HLS or MPEG-DASH) is required for playback. This player adapts to changing network conditions, seamlessly switches between different resolutions, and maintains smooth playback without interruptions.
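
The core of that adaptation is a bandwidth-driven rendition choice. A common heuristic (the ladder and safety factor below are illustrative) is to pick the highest bitrate that fits under a fraction of the measured throughput:

```python
# Illustrative bitrate ladder in bits per second, highest first.
LADDER = [("1080p", 5_000_000), ("720p", 2_800_000),
          ("480p", 1_400_000), ("360p", 800_000)]

SAFETY = 0.8  # only use ~80% of measured throughput to leave headroom

def choose_rendition(measured_bps: float) -> str:
    """Pick the highest rendition whose bitrate fits the bandwidth budget."""
    budget = measured_bps * SAFETY
    for name, bps in LADDER:
        if bps <= budget:
            return name
    return LADDER[-1][0]  # always return the lowest rung rather than stalling

print(choose_rendition(3_500_000))  # 720p: 2.8 Mbps fits the 2.8 Mbps budget
```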