GoQueue

Author: Timan Zheng
Date: 8/9/2025
Description: A Concurrent Producer-Consumer Job Queue in Go (Golang)

Overview

GoQueue is a lightweight concurrent job queue implemented in Go, built primarily to explore and learn Go's concurrency primitives (goroutines, channels, and mutexes) through a concrete system design.

It features retry logic, customizable job types, RESTful API endpoints, and a simple React/TypeScript dashboard for monitoring and control. While this is a learning project, it is loosely modeled after real-world job/task queue systems like Celery and Sidekiq, capturing their core functionality.

At its core, this project is an implementation of the Producer–Consumer model:

  • A Producer accepts incoming jobs via an HTTP API and enqueues them.

  • Multiple Worker Threads (Consumers) concurrently process jobs from a shared queue.

  • A central Queue coordinates task delivery and safely tracks job state.


Project Legacy

GoQueue demonstrates production-style concurrency patterns — leveraging goroutines, channels, and mutex locking — while providing a clear, extensible architecture for asynchronous task processing.

While not extended directly, GoQueue served as the basis for my SpringQueue project, a one-to-one translation of GoQueue into Java/Spring Boot, with changes made to keep the successor idiomatically Java.

  • It followed the same concurrency model, only using ExecutorService instead of Go-native concurrency primitives.

SpringQueue itself served as the basis for SpringQueuePro, an advanced evolution that transformed a minimalist, tightly coupled, in-memory concurrency prototype into a production-grade, distributed task-processing system with PostgreSQL persistence, Redis-based distributed locking and caching, JWT security, metrics, and many other advanced features.

  • SpringQueuePro is the personal project that, as of writing this documentation, I am most proud of.

Core Concepts

Producer–Consumer Model

The system follows the classic Producer–Consumer pattern:

HTTP Client → Producer → Buffered Channel → Workers
  • The Producer receives tasks via REST endpoints and pushes them into a buffered channel.

  • Workers run as goroutines that continuously consume tasks from the channel.

  • The channel naturally provides backpressure and synchronization.

  • A shared job registry is protected using a mutex to avoid race conditions.


Architecture

Backend Components

1. Queue

The Queue (see queue.go) is the central coordination structure.

type Queue struct {
    Tasks chan *task.Task
    Jobs  map[string]*task.Task
    mu    sync.RWMutex
}

Responsibilities

  • Maintains a buffered chan Task for worker consumption
  • Tracks all jobs in an in-memory map (Jobs)
  • Uses sync.RWMutex to prevent concurrent map writes
  • Supports enqueue, lookup, deletion, and clearing of jobs
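
As a rough illustration, an enqueue operation might register the job under the write lock and then hand it to the channel. The method name Enqueue and the ID field are assumptions here; the actual code lives in queue.go and may differ.

// Hypothetical sketch of an enqueue method; names may differ from queue.go.
func (q *Queue) Enqueue(t *task.Task) {
    // Record the job in the registry under the write lock.
    q.mu.Lock()
    q.Jobs[t.ID] = t
    q.mu.Unlock()

    // Hand the task to a worker; blocks if the buffer is full (backpressure).
    q.Tasks <- t
}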

2. Producer (HTTP API)

The Producer (see producer.go) exposes an HTTP interface that allows external clients (e.g., Postman or the frontend) to enqueue jobs in real time.

Key responsibilities

  • Accept job creation requests
  • Validate and enqueue tasks
  • Provide job inspection and management endpoints
  • Act as the entry point into the queue

Example endpoints

POST   /api/enqueue
GET    /api/jobs
DELETE /api/jobs/{id}
POST   /api/clear
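
As a hedged sketch, the enqueue endpoint could be implemented with only the standard library along the lines below. The handler name, the task.New constructor, and the queue field are illustrative assumptions, not necessarily what producer.go does.

// Illustrative enqueue handler; the real producer.go may differ.
func (p *Producer) handleEnqueue(w http.ResponseWriter, r *http.Request) {
    var req struct {
        Type string `json:"type"` // e.g., "email", "takes-long", "fails"
    }
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, "invalid request body", http.StatusBadRequest)
        return
    }

    t := task.New(req.Type) // assumed constructor: assigns ID, "queued" status, retry metadata
    p.queue.Enqueue(t)      // blocks if the buffered channel is full

    w.WriteHeader(http.StatusAccepted)
    json.NewEncoder(w).Encode(t)
}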

3. Workers (Consumers)

Workers are long-running goroutines that continuously pull tasks from the queue’s channel (see worker.go).

for t := range tasks {
    // process task
}

Worker behavior

  • Each worker runs concurrently as its own goroutine
  • Tasks are processed based on their type
  • Execution time is simulated using time.Sleep
  • Retry logic and permanent failure states are supported

Example task types

  • email – simulated work
  • takes-long – long-running task
  • fails – fails and retries until max retries reached

There are several other arbitrary task types with different fixed processing times.
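
For illustration, starting a fixed pool of workers might look roughly like the sketch below; the function names and logging are assumptions, not a copy of worker.go.

// Illustrative worker-pool startup; the real worker.go may differ.
func StartWorkers(q *Queue, n int) {
    for i := 1; i <= n; i++ {
        go func(id int) {
            // Each worker blocks here until a task arrives on the channel.
            for t := range q.Tasks {
                log.Printf("worker %d picked up task %s", id, t.ID)
                process(t) // switch on t.Type, simulate work, handle retries
            }
        }(i)
    }
}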


Concurrency Model

This project intentionally uses idiomatic Go concurrency:

  • Goroutines – Each worker runs concurrently
  • Channels – Task queue + synchronization
  • Mutexes – Protect shared job state
  • Buffered Channels – Allow the producer to enqueue without immediately blocking

This combination allows:

  • Safe concurrent execution

  • Natural load distribution across workers

  • Clean separation of responsibilities


Program Flow & Backpressure

High-Level Runtime Flow

At runtime, GoQueue operates as a continuously running Producer–Consumer system:

  1. The application boots and initializes:
    • A shared Queue (buffered channel + job registry)
    • A fixed pool of worker goroutines
  2. Workers immediately block, waiting for tasks to arrive.
  3. Producers enqueue tasks via HTTP endpoints.
  4. Tasks flow through the channel to workers.
  5. Workers process tasks and update job state.
  6. Failed tasks may be retried or permanently marked as failed.

This model ensures safe concurrency, natural flow control, and deterministic task ownership.
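
Put together, the boot sequence is conceptually something like the sketch below. The buffer size, worker count, and names (StartWorkers, Producer, handleEnqueue) are assumptions reused from the earlier sketches, not the literal contents of main.go.

// Conceptual boot sequence; actual main.go may differ in names and configuration.
func main() {
    q := &Queue{
        Tasks: make(chan *task.Task, 100), // buffered channel caps in-flight work
        Jobs:  make(map[string]*task.Task),
    }

    StartWorkers(q, 5) // fixed pool of worker goroutines, all blocked initially

    p := &Producer{queue: q}
    http.HandleFunc("/api/enqueue", p.handleEnqueue)
    log.Fatal(http.ListenAndServe(":8080", nil)) // producers enqueue via HTTP from here on
}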


How Backpressure Works in GoQueue

GoQueue does not implement explicit rate limiting or queue throttling. Instead, it relies on Go’s native channel blocking semantics, which provides implicit backpressure.

Key Mechanics

  • The task queue is implemented as a buffered channel:

    Tasks chan *task.Task
  • When a producer attempts to enqueue a task:

    • If the channel has available capacity → enqueue succeeds immediately
    • If the channel buffer is full → the producer blocks
    • The producer remains blocked until a worker consumes a task

This means:

  • The system cannot be overwhelmed beyond its configured capacity
  • Producers naturally slow down under load
  • No tasks are dropped or overwritten
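
This behavior is plain Go channel semantics, which a tiny standalone example (not project code) demonstrates:

// Standalone demonstration of buffered-channel backpressure (not project code).
package main

import (
    "fmt"
    "time"
)

func main() {
    tasks := make(chan int, 2) // capacity 2
    done := make(chan struct{})

    go func() {
        for t := range tasks {
            time.Sleep(time.Second) // slow consumer
            fmt.Println("processed", t)
        }
        close(done)
    }()

    for i := 1; i <= 5; i++ {
        tasks <- i // once the buffer is full, each send blocks until the worker frees space
        fmt.Println("enqueued", i)
    }
    close(tasks)
    <-done // wait for the worker to drain the queue
}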

Worker Blocking Behavior

Workers execute a continuous loop:

for t := range queue.Tasks {
    process(t)
}

When no tasks are available:

  • Workers block automatically
  • No CPU spinning or polling occurs
  • Goroutines remain idle until a task arrives

This results in:

  • Efficient CPU usage
  • Clean synchronization between producers and consumers
  • Deterministic task handoff

Mutex Role (State Safety, Not Backpressure)

The sync.RWMutex does not participate in backpressure.

Its role is strictly to:

  • Protect the shared Jobs map
  • Prevent concurrent writes and race conditions
  • Ensure consistent job state transitions

Backpressure is handled entirely by channel semantics, not by locking.
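
For example, a job lookup and a status update on the shared map might be guarded as below; the method names and the Status field are illustrative assumptions.

// Illustrative mutex-guarded accessors; actual names in queue.go may differ.
func (q *Queue) GetJob(id string) (*task.Task, bool) {
    q.mu.RLock() // readers can proceed concurrently
    defer q.mu.RUnlock()
    t, ok := q.Jobs[id]
    return t, ok
}

func (q *Queue) SetStatus(id, status string) {
    q.mu.Lock() // writers get exclusive access
    defer q.mu.Unlock()
    if t, ok := q.Jobs[id]; ok {
        t.Status = status
    }
}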


Task Lifecycle

Each task in GoQueue follows a clearly defined lifecycle from creation to termination.

1. Task Creation (Producer)

  • A client sends an HTTP request to enqueue a task
  • A Task object is created with:
    • Unique ID
    • Type
    • Initial status (queued)
    • Retry metadata
  • The task is:
    • Stored in the Jobs map (mutex-protected)
    • Pushed into the Tasks channel
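
A plausible shape for the Task type, inferred from the fields listed above (the names are assumptions, not a copy of task.go):

// Inferred Task shape; the actual task.go may use different names or extra fields.
type Task struct {
    ID         string // unique identifier assigned at creation
    Type       string // e.g., "email", "takes-long", "fails"
    Status     string // "queued", "in-progress", "completed", "failed"
    Retries    int    // attempts made so far
    MaxRetries int    // permanent failure once exhausted
}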

2. Task Queued (Waiting State)

  • The task resides in the buffered channel
  • It waits until a worker becomes available
  • No CPU resources are consumed while waiting

3. Task Claimed (Worker Pickup)

  • A worker goroutine receives the task from the channel
  • The worker:
    • Marks the task as in-progress
    • Begins simulated processing based on task type

4. Task Execution

  • Execution is simulated using time.Sleep
  • Behavior varies by task type:
    • Fast tasks
    • Long-running tasks
    • Failing tasks

5. Task Completion or Failure

On success

  • Task is marked completed
  • Worker loop continues, waiting for next task

On failure

  • Retry count is incremented
  • If retries remain:
    • A new task instance is enqueued (retry-as-new)
  • If retries are exhausted:
    • Task is marked failed
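
A hedged sketch of the retry-as-new behavior described above, reusing the assumed names from the earlier sketches (newID is a hypothetical ID generator); worker.go may structure this differently.

// Illustrative retry-as-new handling; details in worker.go may differ.
func handleFailure(q *Queue, t *task.Task) {
    if t.Retries < t.MaxRetries {
        // Enqueue a fresh task instance carrying the incremented retry count.
        retry := &task.Task{
            ID:         newID(), // hypothetical ID generator
            Type:       t.Type,
            Status:     "queued",
            Retries:    t.Retries + 1,
            MaxRetries: t.MaxRetries,
        }
        q.Enqueue(retry)
        return
    }
    q.SetStatus(t.ID, "failed") // retries exhausted: terminal state
}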

6. Task Terminal State

Once a task reaches:

  • completed or
  • failed

…it becomes immutable and is no longer processed.


Worker Lifecycle (Mermaid Diagram)

The following diagram illustrates the lifecycle of a single Worker goroutine in GoQueue:

stateDiagram-v2
    [*] --> WorkerStarted

    WorkerStarted --> WaitingForTask : goroutine launched
    WaitingForTask --> ProcessingTask : task received from channel

    ProcessingTask --> TaskCompleted : success
    ProcessingTask --> TaskFailed : error

    TaskFailed --> RetryingTask : retries remaining
    RetryingTask --> WaitingForTask : re-enqueued as new task

    TaskFailed --> PermanentFailure : retries exhausted
    PermanentFailure --> WaitingForTask

    TaskCompleted --> WaitingForTask

Key Observations

  • Workers never terminate during normal operation
  • Workers are either:
    • Blocked waiting for tasks
    • Actively processing a task
  • Failure does not kill the worker
  • Retry logic is task-scoped, not worker-scoped

This makes workers:

  • Stateless
  • Reusable
  • Resilient to task-level failure

Features

  • Concurrent job processing using Go workers and goroutines
  • Configurable retry logic (with exponential backoff possible)
  • Multiple simulated job types (email, image processing, etc.) with varied execution times
  • RESTful API for enqueueing, viewing, retrying, and deleting jobs
  • Thread-safe in-memory queue with mutex protection
  • Retry as new job (Celery-like behavior)

Skills & Concepts Demonstrated

  • Goroutines & Channels – Worker pool design with buffered/unbuffered channels
  • Mutexes & Concurrency Safety – Protecting shared state from race conditions
  • RESTful API Design – Clean, resource-based endpoints in Go
  • Task Lifecycle Management – Status tracking (queued, in-progress, failed, completed)
  • Retry Strategies – Requeueing as a cloned job, similar to Celery/RQ
  • Frontend Integration – Connecting a Go backend to a React/TypeScript UI

Frontend

A minimal React + TypeScript frontend is included to visualize job state.

Features

  • View queued, completed, and failed jobs
  • Trigger job creation via API
  • Poll backend for updates

The frontend is intentionally minimal — the focus of this project is the backend concurrency model.


Running the Project (Locally)

Backend

cd backend
go run main.go

By default (unless configured otherwise on your machine), the server starts on:

http://localhost:8080

Frontend (optional — you can play around with this project entirely through Postman, etc.)

cd frontend
npm install
npm run dev

Design Goals

This project was built with the following goals in mind:

  • Learn Go concurrency primitives hands-on
  • Implement a clean Producer–Consumer system
  • Avoid frameworks to better understand fundamentals
  • Build a system that could evolve into a production queue

Limitations

This is intentionally a learning project, not a production system:

  • No persistence (in-memory only)
  • No distributed workers
  • No authentication or rate limiting
  • No fault-tolerant recovery

These limitations were later addressed in SpringQueuePro using:

  • Spring Boot
  • PostgreSQL / Redis
  • Distributed locking
  • Metrics and load testing

In an ideal world, I would have evolved this project into the production-grade job queue that became SpringQueuePro (that is, I would have built GoQueuePro instead). Alas, I sorely overestimated the job market for Go developers compared to legacy languages like Java. That said, I wanted to pick up Spring Boot, and I rather enjoy it.

  • Maybe someday I'll get around to GoQueuePro. For now, though, this barebones job queue has become my quick template project for learning the fundamentals of programming languages I'm new to (e.g., Go) or rusty on (e.g., Java), as well as frameworks (e.g., Spring Boot).

  • Both PyQueue (Python) and NetQueue (C#/.NET) are bound to happen at some point down the line, when I have an evening to myself and nothing better to do.


Why This Project Matters

From an engineering perspective, this project demonstrates:

  • Understanding of concurrency fundamentals
  • Practical application of Producer–Consumer patterns
  • Awareness of race conditions and synchronization
  • Ability to evolve simple systems into scalable architectures

For me personally, though, GoQueue represents a conceptual prototype that affirmed a concrete design, one I was able to translate (SpringQueue) and then scale into a production-grade system (SpringQueuePro). For that, I am very proud of this project.
