A high-throughput, fault-tolerant distributed task processing system implementing the Reliable Queue Pattern.
Designed to handle concurrent workloads with zero data loss, achieving robustness through atomic state transitions and graceful lifecycle management.
The system decouples task ingestion from processing using a persistent Redis layer, managed by a scalable pool of concurrent workers.
```mermaid
flowchart LR
    Client[Client / External Service] -->|POST /task| API[REST API Producer]
    API -->|LPUSH JSON| Redis[(Redis Queue)]
    subgraph "Worker Cluster"
        W1[Worker 1]
        W2[Worker 2]
        W3[Worker N]
    end
    Redis <-->|Atomic BRPopLPush| W1
    Redis <-->|Atomic BRPopLPush| W2
    Redis <-->|Atomic BRPopLPush| W3
```
Implements the Reliable Queue Pattern to guarantee at-least-once processing.
- Tasks are atomically moved from `tasks:pending` to `tasks:processing` using `BRPopLPush`.
- This ensures that if a worker crashes mid-execution, the task remains in the `tasks:processing` list and is not lost.
- Automatic Retries: Failed tasks are retried up to 3 times with exponential backoff.
- Dead Letter Queue (DLQ): Tasks that exceed the retry limit are moved to a `dead_letter` queue for manual inspection, preventing poison pills from clogging the system.
- Live Dashboard: A web-based dashboard provides real-time visibility into queue depths (Pending, Processing, Dead Letter).
- JSON API: Exposes metrics via a simple JSON endpoint for external tools.
Leverages Go's runtime scheduler and lightweight goroutines to maximize throughput.
- Worker Pools: Spawns multiple concurrent processors managed via `sync.WaitGroup`.
- Non-blocking I/O: Efficiently handles idle waiting on Redis connections.
Implements robust signal handling (`SIGINT`, `SIGTERM`) using `os/signal` and context cancellation to ensure all in-flight tasks complete before termination.
- Docker & Docker Compose
- Go 1.23+
Initialize the Redis instance.
```bash
docker-compose up -d redis
```

Run each service in a separate terminal:
Producer API (Port 8085)

```bash
go run cmd/producer/main.go
# Starts HTTP server on :8085
```

Monitor Dashboard (Port 8082)

```bash
go run cmd/monitor/main.go
# Dashboard available at http://localhost:8082
```

Worker Pool

```bash
go run cmd/worker/main.go
# Starts the worker node
```

You can send tasks manually or run the stress test script to simulate load.
Using the Stress Test Script: This script sends a burst of concurrent requests to the producer.
```bash
go run scripts/stress_load/main.go
```

Using cURL:
```bash
curl -X POST http://localhost:8085/task \
  -H "Content-Type: application/json" \
  -d '{"type": "email-notification", "payload": "user@example.com"}'
```

Response:
```json
{
  "status": "queued",
  "task_id": "1734301548123456789"
}
```

POST /task: Enqueues a new task.
- Body: `{"type": "string", "payload": "any"}`
- Returns: `202 Accepted` with Task ID.

GET /: HTML Dashboard.

GET /stats: JSON metrics (`pending`, `processing`, `dead_letter` counts).
- Metrics Export: Prometheus integration for queue depth and latency monitoring.
- Dynamic Scaling: Horizontal Pod Autoscaling (HPA) based on queue lag.