A distributed task queue built on S3. No databases, no brokers—just a bucket.
Works with any S3-compatible storage: AWS S3, Cloudflare R2, MinIO, Tigris, Backblaze B2, DigitalOcean Spaces, and more.
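Switching providers only means pointing the same environment variables at a different endpoint. As an illustrative sketch, here is what that could look like for a local MinIO server (the endpoint, bucket name, and `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` values below are assumptions — MinIO's out-of-the-box defaults — not buquet-specific requirements):

```sh
# Point at a local MinIO server instead of AWS S3 (assumed example values)
export S3_ENDPOINT=http://localhost:9000
export S3_BUCKET=my-queue-bucket
export S3_REGION=us-east-1

# Standard AWS SDK credential variables; these are MinIO's defaults
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
```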
```sh
cargo install buquet
```

```sh
# Configure your bucket
export S3_BUCKET=my-queue-bucket
export S3_REGION=us-east-1

# Run a worker
buquet worker --shards 0,1,2,3

# Submit a task
buquet submit -t my_task -i '{"foo": "bar"}'
```

```sh
# Start LocalStack
docker compose up -d

# Point to LocalStack
export S3_ENDPOINT=http://localhost:4566
export S3_BUCKET=buquet-dev

buquet worker --shards 0,1,2,3
```

- Getting Started — Setup and configuration
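The `--shards 0,1,2,3` flag assigns a set of shards to the worker. One common scheme for mapping tasks to shards — shown here purely as an assumed illustration, not buquet's documented behavior — is a stable hash of the task id modulo the shard count:

```python
import hashlib

NUM_SHARDS = 4  # matches the worker above owning shards 0-3 (assumed total)

def shard_for(task_id: str) -> int:
    # A stable hash, so every submitter and worker agrees on the shard
    digest = hashlib.sha256(task_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

assert 0 <= shard_for("my_task") < NUM_SHARDS
```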
- Python API — Python bindings reference
- Examples — Working Python and Rust examples
- Feature Roadmap — Completed and planned features
- Workflows — Durable workflow orchestration
- S3 is the control plane. No databases, no brokers—just a bucket.
- Durable. Tasks survive crashes. Workers recover automatically via lease expiry.
- Observable. Every state transition is persisted: full audit trail by design.
- Latency is seconds, not milliseconds. Designed for background work, not real-time.
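The lease-expiry recovery described above can be sketched in a few lines of Python. This is a toy in-memory simulation of the pattern — the `claim` function, `LEASE_SECONDS` value, and the `leases` dict are all invented for illustration; in buquet the state lives in S3, not in a local dict:

```python
import time

LEASE_SECONDS = 2  # illustrative lease length

# task_id -> (owner, lease_deadline); stands in for per-task state in S3
leases = {}

def claim(task_id, worker, now):
    """Claim a task if it is unclaimed or its previous lease has expired."""
    owner, deadline = leases.get(task_id, (None, 0.0))
    if owner is None or now >= deadline:
        # With S3 this would be a conditional write so two workers can't both win
        leases[task_id] = (worker, now + LEASE_SECONDS)
        return True
    return False

now = time.time()
assert claim("task-1", "worker-a", now)      # worker A takes the lease
assert not claim("task-1", "worker-b", now)  # B is blocked while the lease is live
# Worker A crashes; once the lease deadline passes, B recovers the task
assert claim("task-1", "worker-b", now + LEASE_SECONDS + 1)
```

No heartbeat database is needed: a crashed worker simply stops renewing its lease, and any other worker that observes an expired lease can take the task over.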
MIT