SparkRTC is built on WebRTC. The major difference is the goal: SparkRTC is designed for ultra-low latency, achieved by coordinating a series of modules across WebRTC. SparkRTC follows the WebRTC license and is open source, free to use, and intended for research purposes only.
WebRTC is a free, open software project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. The WebRTC components have been optimized to best serve this purpose.
Our mission: To enable rich, high-quality RTC applications to be developed for the browser, mobile platforms, and IoT devices, and allow them all to communicate via a common set of protocols.
The WebRTC initiative is a project supported by Google, Mozilla and Opera, amongst others.
First, be sure to install the depot_tools.
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=/path/to/depot_tools:$PATH
Then, install x264.
git clone https://code.videolan.org/videolan/x264
./configure --enable-shared --enable-static
make
make install
For desktop development:
Clone the current repo:
git clone https://github.com/hkust-spark/sparkrtc.git
Enter the root directory of the repo:
cd ./sparkrtc
Sync with the other WebRTC-related repos using the gclient tool installed earlier. This may take 10 to 20 minutes, depending on network speed.
gclient sync
NOTICE: During your first sync, you’ll have to accept the license agreement of the Google Play Services SDK.
The checkout size is large due to the use of the Chromium build toolchain and many dependencies.
Ninja is the default build system for all platforms.
Ninja project files are generated using GN. They are written to a directory of your choice, such as out/Debug; using separate directories lets you keep multiple configurations side by side.
To generate project files using the defaults (Debug build), run (in the root directory of the repo):
gn gen out/Default
See the GN documentation for all available options.
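As an example of keeping multiple configurations side by side, a release build can be generated into its own output directory. This is a sketch: is_debug is a standard GN build argument, but verify the arguments available in your checkout with gn args out/Release --list.

```shell
# Generate a second, release-mode configuration alongside out/Default
gn gen out/Release --args='is_debug=false'
# Build it the same way as the default configuration
ninja -C out/Release
```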
When you have Ninja project files generated (see previous section), compile using:
For Ninja project files generated in out/Default:
ninja -C out/Default
WebRTC contains several example applications, which can be found under src/webrtc/examples and src/talk/examples. Higher-level applications are listed first.
Peerconnection consists of two applications using the WebRTC Native APIs:

- A server application, with target name peerconnection_server
- A client application, with target name peerconnection_client (not currently supported on Mac/Android)
The client application has simple voice and video capabilities. The server enables client applications to initiate a call between clients by managing signaling messages generated by the clients.
Setting up P2P calls between peerconnection_clients: Start peerconnection_server. You should see the following message indicating that it is running:
Server listening on port 8888
Start any number of peerconnection_clients and connect them to the server. The client UI consists of a few parts:
Connecting to a server: When the application starts, you must specify the machine (by IP address) on which the server application is running. Once that is done, press Connect or hit return.
Select a peer: Once successfully connected to a server, you can connect to a peer by double-clicking its name, or by selecting it and pressing return.
Video chat: Once a connection to a peer is established, a video chat is displayed in full window.
Ending chat session: Press Esc. You will now be back to selecting a peer.
Ending connection: Press Esc and you will now be able to select which server to connect to.
For more guidelines, see here.
We implemented a peerconnection_localvideo example, modified from peerconnection_client, on MacOS and Linux for testing purposes. It streams a local video sequence (YUV420) instead of capturing from a camera. The GUI can optionally be disabled for command-line testing.
- Start peerconnection_server
./peerconnection_server
- Start the receiver with the filename for the received (reconstructed) yuv file.
./peerconnection_localvideo --recon "recon.yuv"
By default, the GUI is turned off. Add --gui on the receiver to open the rendered view:
./peerconnection_localvideo --gui --recon "recon.yuv"
On Windows, parameters must be joined with '=', e.g.
./peerconnection_localvideo --gui --recon="recon.yuv"
- Start the sender (it must be started after the receiver) with the properties of the YUV file to be streamed.
./peerconnection_localvideo --file "input.yuv" --height 1080 --width 1920 --fps 24
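Because a raw YUV420 file has no header, the sender must be told the resolution and frame rate explicitly, as above. A quick shell sanity check (the file size below is illustrative) confirms that an input contains a whole number of 1080p YUV420 frames:

```shell
# Each YUV420 frame occupies width * height * 3/2 bytes:
# a full-resolution Y plane plus quarter-resolution U and V planes.
width=1920; height=1080
frame_bytes=$(( width * height * 3 / 2 ))
echo "bytes per frame: $frame_bytes"             # -> bytes per frame: 3110400

# Frame count for a file size obtained with e.g. `stat -c %s input.yuv`
file_bytes=31104000
echo "frames: $(( file_bytes / frame_bytes ))"   # -> frames: 10
```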
Download and install ffmpeg and cv2.
sudo apt-get install -y ffmpeg
Install cv2:
pip install opencv-python
pip install opencv-contrib-python
The experiment code is located in the sparkrtc/my_experiment directory.
The video to be sent should be placed in the data directory, and it should be in .yuv format.
For example:
sparkrtc/my_experiment/data/video_0a86_qrcode.yuv
We prepared a demo test video in data/Lecture.mp4. Before using it, first run the following ffmpeg command to extract it to .yuv format:
ffmpeg -i Lecture.mp4 Lecture.yuv
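To make the pixel format explicit (peerconnection_localvideo expects YUV420 input, as noted above), ffmpeg's standard -pix_fmt flag can be added. A sketch; adjust to your source file:

```shell
# Explicitly request 8-bit 4:2:0 planar output when extracting the raw video
ffmpeg -i Lecture.mp4 -pix_fmt yuv420p Lecture.yuv
```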
The script sparkrtc/my_experiment/code/run.sh automatically builds the connection and sends the video, which simplifies the whole process.
Important: Configure IP Address
Before running experiments, you need to update the server IP address in the code to match your machine's public IP address:
- Open my_experiment/code/run.sh
- Find the line ip="YOUR PUBLIC IP ADDRESS" (around line 7)
- Replace 'YOUR PUBLIC IP ADDRESS' with your machine's public IP address
Usage
First, you need to overlay QR codes on video frames to help the identification process after transmission.
./run.sh [-i <video_name>] -p gen_send_video
Then run send_and_recv, which automatically sends the video, generates statistics including delay and quality, and draws basic plots of the data.
./run.sh [-i <video_name>] -p send_and_recv
Input
The script accepts the following inputs:
• video_name: prefix of the input yuv file
• size: (optional) customize the input width and height using -s; default is 1920x1080
Parameters
The script uses the following parameters:
• enable_mae: Defaults to true. Makes the encoder more adaptive to network changes.
• disable_frame_drop: Defaults to true. Frame drops would make results harder to compare fairly, so frame dropping is disabled by default.
General Output Structure
The output results from the experiments are stored in the my_experiment/ directory with the following structure:
my_experiment/
├── data/
│ ├── <video_name>.yuv # Original video file
│ ├── <video_name>_qrcode.yuv # Video with QR codes overlaid
│ └── <video_name>.mp4 # Original video (MP4 format)
├── qrcode/
│ └── <video_name>/
│ ├── qrcode_<number>.png # Individual QR code images
│ └── qrcode_output.yuv # QR code video sequence
├── send/
│ └── <video_name>/
│ └── frame<frame_number>.png # Individual frames to be sent
├── result/
│ └── <video_name>/
│ ├── rec/ # Recording and reception logs
│ │ ├── recon.yuv # Reconstructed video
│ │ ├── recv.log # Receiver log
│ │ ├── send.log # Sender log
│ │ ├── start_stamp.log # Start timestamps
│ │ ├── end_stamp.log # End timestamps
│ │ ├── rate_timestamp.log # Rate with timestamps
│ │ ├── frame_size_original_timestamp.log # Frame sizes with timestamps
│ │ └── updated_recon_vmaf.json # VMAF analysis results (JSON)
│ ├── res/ # Result metrics and logs
│ │ ├── delay.log # Frame delay
│ │ ├── overall_delay.log # Overall delay per frame
│ │ ├── vmaf_score.log # VMAF scores
│ │ ├── rate.log # Transmission rate
│ │ ├── rate_with_frame_index.log # Rate with frame index
│ │ ├── frame_size.log # Frame sizes
│ │ ├── send2receive_index.log # Mapping from send to receive frame indices
│ │ └── receive_correspoding_index.log # Corresponding receive indices
│ └── fig/ # Generated figures
│ ├── vmaf.png # VMAF score plot
│ ├── overall_delay.png # Overall delay plot
│ └── rate_frame_size.png # Rate and frame size plot
└── file/
└── trace_logs/
        └── <trace_name>.log # Network trace logs

Logs and Metrics
- Delay Log (delay.log):
  • Directory: result/<video_name>/res/
  • Format: frame_index,frame_index_in_program,time_stamp_end,delay_ms
  • Description: Contains delay information for each frame, including frame indices and timestamps.
- Overall Delay Log (overall_delay.log):
  • Directory: result/<video_name>/res/
  • Format: frame_index,delay_ms
  • Description: Contains overall delay values for each frame.
- VMAF Score Log (vmaf_score.log):
  • Directory: result/<video_name>/res/
  • Format: frame_index,vmaf_score
  • Description: Contains VMAF (Video Multi-Method Assessment Fusion) quality scores for each frame.
- Rate Log (rate.log):
  • Directory: result/<video_name>/res/
  • Format: timestamp,rate_kbps
  • Description: Contains transmission rate measurements at different timestamps.
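Given the comma-separated formats above, the logs can be summarized directly from the shell. A minimal sketch using awk, with a synthetic delay.log for illustration (the real file lives under result/<video_name>/res/):

```shell
# Synthetic delay.log: frame_index,frame_index_in_program,time_stamp_end,delay_ms
printf '0,0,1000,12\n1,1,1033,18\n2,2,1066,15\n' > delay.log

# Average per-frame delay: delay_ms is the 4th comma-separated field
awk -F, '{ sum += $4; n++ } END { printf "avg delay: %.1f ms\n", sum / n }' delay.log
# -> avg delay: 15.0 ms
```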
Plots
Directory: result/<video_name>/fig/
- VMAF Plot (vmaf.png):
  • Description: Plot showing VMAF score vs frame index.
- Overall Delay Plot (overall_delay.png):
  • Description: Plot showing overall delay (ms) vs frame index.
- Rate and Frame Size Plot (rate_frame_size.png):
  • Description: Combined plot showing transmission rate and frame size vs frame index.