A Go microservice that encodes MP4 videos into MPEG-DASH format for adaptive bitrate streaming. It consumes encoding jobs from RabbitMQ, processes them using Bento4, and supports both local filesystem and Google Cloud Storage for input/output — so you can run the full pipeline locally without any cloud dependencies.
```
                         ┌──────────────┐
                         │   RabbitMQ   │
                         │  (job queue) │
                         └─────┬────────┘
                               │
                               ▼
┌─────────────┐       ┌─────────────────┐       ┌─────────────┐
│ Local / GCS │──get──│  Video Encoder  │──put──│ Local / GCS │
│   (input)   │       │     Workers     │       │  (output)   │
└─────────────┘       └────────┬────────┘       └─────────────┘
                               │
                    ┌──────────┼──────────┐
                    │          │          │
                    ▼          ▼          ▼
               mp4fragment  mp4dash   PostgreSQL
                (Bento4)    (Bento4)  (job state)
```
Each video goes through the following stages:
DOWNLOADING → FRAGMENTING → ENCODING → UPLOADING → FINISHING → COMPLETED
- Download — Fetch the source MP4 from local filesystem or GCS
- Fragment — Run `mp4fragment` to prepare the file for DASH encoding
- Encode — Run `mp4dash` to generate the MPEG-DASH manifest and segments
- Upload — Copy encoded files to the output directory or upload to GCS
- Finish — Clean up local temporary files
- Notify — Publish the result (success or error) to the RabbitMQ notification exchange
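The Fragment and Encode steps shell out to the Bento4 command-line tools. A minimal sketch of how those invocations might be built with `os/exec` (the exact flags here are assumptions; check `mp4fragment --help` and `mp4dash --help` for your Bento4 version):

```go
package main

import (
	"fmt"
	"os/exec"
)

// fragmentCmd prepares the mp4fragment invocation, which rewrites the
// input MP4 into a fragmented MP4 suitable for DASH packaging.
func fragmentCmd(input, output string) *exec.Cmd {
	return exec.Command("mp4fragment", input, output)
}

// dashCmd prepares the mp4dash invocation, which generates the MPEG-DASH
// manifest and media segments into outputDir.
func dashCmd(fragmented, outputDir string) *exec.Cmd {
	return exec.Command("mp4dash", "-o", outputDir, fragmented)
}

func main() {
	frag := fragmentCmd("video.mp4", "video.frag.mp4")
	dash := dashCmd("video.frag.mp4", "encoded/")
	fmt.Println(frag.Args)
	fmt.Println(dash.Args)
	// Actually running them requires Bento4 on PATH:
	//   if err := frag.Run(); err != nil { /* mark job FAILED */ }
}
```

Building the `*exec.Cmd` separately from running it keeps the command construction unit-testable without Bento4 installed.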
If any step fails, the job transitions to FAILED, the error is recorded, and the original message is sent to the Dead Letter Exchange.
- Go 1.14 — Application runtime
- Bento4 (`mp4fragment`, `mp4dash`) — MPEG-DASH video encoding
- FFmpeg — Media processing utilities
- RabbitMQ — Async job queue and result notifications
- PostgreSQL — Job and video state persistence
- Local filesystem (default) or Google Cloud Storage — Video input/output storage
- Docker and Docker Compose
For GCS mode only:
- A GCP service account with read/write access to Google Cloud Storage
```
cp .env.example .env
```

Edit `.env` with your actual values. See Configuration for details.
By default, the project uses local filesystem storage — no cloud credentials needed.
```
docker-compose up -d
```

This starts the app container, PostgreSQL, and RabbitMQ.
Open the RabbitMQ management UI at http://localhost:15672 (user: rabbitmq, pass: rabbitmq) and:
- Create a fanout exchange to serve as the Dead Letter Exchange (e.g. `dlx`)
- Create a queue and bind it to that exchange (no routing key needed)
- Make sure the `RABBITMQ_DLX` value in `.env` matches the exchange name
```
docker exec <container_name> make server
```

Find the container name with `docker ps`. By default it will be something like `video-encoder_app_1`.
Set `STORAGE_TYPE=gcs` in `.env`, configure `INPUT_BUCKET_NAME` and `OUTPUT_BUCKET_NAME`, and place your GCP credentials file in the project root:
```
cp /path/to/your-credentials.json ./bucket-credential.json
```

| Variable | Description | Default |
|---|---|---|
| `DB_TYPE` | Database driver | `postgres` |
| `DSN` | Database connection string | — |
| `DB_TYPE_TEST` | Test database driver | `sqlite3` |
| `DSN_TEST` | Test database connection string | `:memory:` |
| `ENV` | Environment name | `dev` |
| `DEBUG` | Enable debug logging | `true` |
| `AUTO_MIGRATE_DB` | Auto-migrate database schema on startup | `true` |
| `LOCAL_STORAGE_PATH` | Temp directory for video processing | `/tmp` |
| `STORAGE_TYPE` | Storage backend: `local` or `gcs` | `local` |
| `INPUT_BUCKET_NAME` | GCS bucket for source videos (GCS mode) | — |
| `INPUT_LOCAL_PATH` | Directory with source videos (local mode) | — |
| `OUTPUT_BUCKET_NAME` | GCS bucket or local output directory | — |
| `CONCURRENCY_WORKERS` | Number of parallel job workers | `1` |
| `CONCURRENCY_UPLOAD` | Number of parallel upload goroutines | `50` |
| `RABBITMQ_DEFAULT_USER` | RabbitMQ username | `rabbitmq` |
| `RABBITMQ_DEFAULT_PASS` | RabbitMQ password | `rabbitmq` |
| `RABBITMQ_DEFAULT_HOST` | RabbitMQ host | `rabbit` |
| `RABBITMQ_DEFAULT_PORT` | RabbitMQ port | `5672` |
| `RABBITMQ_DEFAULT_VHOST` | RabbitMQ virtual host | `/` |
| `RABBITMQ_CONSUMER_NAME` | Consumer identifier | `app-name` |
| `RABBITMQ_CONSUMER_QUEUE_NAME` | Queue to consume jobs from | `videos` |
| `RABBITMQ_NOTIFICATION_EX` | Exchange for result notifications | `amq.direct` |
| `RABBITMQ_NOTIFICATION_ROUTING_KEY` | Routing key for notifications | `jobs` |
| `RABBITMQ_DLX` | Dead Letter Exchange name | `dlx` |
| `GOOGLE_APPLICATION_CREDENTIALS` | Path to GCS service account JSON (GCS mode) | — |
```json
{
  "resource_id": "my-resource-id-can-be-a-uuid-type",
  "file_path": "video.mp4"
}
```

- `resource_id` — Your identifier for the video (string)
- `file_path` — Path to the MP4 file inside the input bucket
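A sketch of parsing and validating this message on the consumer side (the struct and function names are illustrative; the service's actual types may differ):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// JobMessage mirrors the JSON body consumed from the jobs queue.
type JobMessage struct {
	ResourceID string `json:"resource_id"`
	FilePath   string `json:"file_path"`
}

// parseJob unmarshals an incoming message body and rejects it when
// required fields are missing, so bad messages can be dead-lettered.
func parseJob(body []byte) (JobMessage, error) {
	var m JobMessage
	if err := json.Unmarshal(body, &m); err != nil {
		return m, err
	}
	if m.ResourceID == "" || m.FilePath == "" {
		return m, errors.New("resource_id and file_path are required")
	}
	return m, nil
}

func main() {
	m, err := parseJob([]byte(`{"resource_id":"abc","file_path":"video.mp4"}`))
	fmt.Println(m, err)
}
```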
Published to the notification exchange on successful encoding:
```json
{
  "id": "bbbdd123-ad05-4dc8-a74c-d63a0a2423d5",
  "output_bucket_path": "my-output-bucket",
  "status": "COMPLETED",
  "video": {
    "encoded_video_folder": "b3f2d41e-2c0a-4830-bd65-68227e97764f",
    "resource_id": "aadc5ff9-0b0d-13ab-4a40-a11b2eaa148c",
    "file_path": "video.mp4"
  },
  "Error": "",
  "created_at": "2020-05-27T19:43:34.850479-04:00",
  "updated_at": "2020-05-27T19:43:38.081754-04:00"
}
```

The `encoded_video_folder` contains the MPEG-DASH manifest and segments in the output bucket.
Published to the notification exchange when encoding fails:
```json
{
  "message": {
    "resource_id": "aadc5ff9-010d-a3ab-4a40-a11b2eaa148c",
    "file_path": "video.mp4"
  },
  "error": "reason for the error"
}
```

The original message is also routed to the Dead Letter Exchange.
```
make test
```

```
├── domain/               # Domain entities (Video, Job)
├── application/
│   ├── repositories/     # Database access layer
│   └── services/         # Business logic (encoding pipeline, workers, storage)
├── framework/
│   ├── cmd/server/       # Application entrypoint
│   ├── database/         # Database connection and migrations
│   ├── queue/            # RabbitMQ integration
│   └── utils/            # JSON validation helpers
├── Dockerfile            # Container image with Bento4 and FFmpeg
├── docker-compose.yaml   # Local dev environment
├── Makefile              # Build and run targets
└── .env.example          # Environment variable template
```