An autonomous shader studio — a local LLM writes WGSL/GLSL art shaders,
naga plus a 6-axis LLM review gatekeep them, and a human curator (you) approves the final gallery.
Research prototype, currently dormant. Phase 3 (WGSL quality lift) paused mid-flight while the author focuses on another project. Code, gallery (55 shaders), and design docs are preserved as-is. Issues welcome for reproduction questions; PRs deprioritized until the project resumes.
Standalone WGSL art shaders barely exist in the wild — most WGSL corpora are engine internals (Bevy PBR, wgpu examples, Web-RTRT pipelines). shader-forge was an experiment to see whether a local 80B code model plus a curated RAG could produce something closer to Shadertoy-quality output, but in the WebGPU generation.
It's also a concrete reference for fully local, autonomous creative loops: no OpenAI call, no hosted inference, no external review API. Everything runs on one box.
SCOUT → Tavily API pulls trends → LLM proposes 3–5 shader themes
FORGE → RAG selects references → Qwen3-Coder-Next 80B writes WGSL/GLSL
→ naga-cli validates → LLM reviews on 6 axes (30 pts total)
→ on failure, feedback-driven retry (max 5×)
RENDER → headless WebGPU (Chrome + Vulkan) → thumbnail.png + preview.mp4
STAGE → FastAPI WebUI on :3333, dual WebGL/WebGPU live preview
PUBLISH → approved shaders drop into inbox/ for downstream pickup
| Layer | Tech |
|---|---|
| Inference | llama.cpp / llama-server on localhost:8080 |
| Model | Qwen3-Coder-Next 80B-A3B (Q4_K_XL, ~47GB) |
| Shader compiler | naga-cli (WGSL → SPIR-V), glslang (GLSL ES) |
| RAG | ChromaDB + all-MiniLM-L6-v2 |
| Headless render | wgpu-py + Vulkan, Puppeteer + Chrome headless |
| Review WebUI | FastAPI + Three.js (WebGL) / raw WebGPU |
Designed for NVIDIA Project DIGITS (GB10 / ARM64), but any CUDA + Vulkan host with enough VRAM for the model should work.
- 514-shader curated WGSL corpus from wgpu/Bevy/compute.toys/approved art + 23 shabon-fx GLSL references → 96.1% passed `naga` validation
- `let`→`var` auto-fix for the most frequent WGSL-vs-GLSL translation error the LLM makes
- 6-axis review rubric (composition / motion / color / novelty / code quality / concept fit) — JSON-parsed, feedback fed back into the retry loop
- Best score shipped: 24/25 (`vibrant-plasma-energy-vortex`)
- Temperature split: the creative-director prompt runs at 0.7, the coder stays at 0.0 — separation of design intent and faithful implementation
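The `let`→`var` auto-fix might look roughly like this — a heuristic sketch, not the project's implementation; real scope analysis is left to naga. The point is that GLSL-minded models emit `let` for values they later mutate, which naga rejects because WGSL `let` bindings are immutable.

```python
import re

_ASSIGN = r"(?:=(?!=)|\+=|-=|\*=|/=)"   # assignment, not `==` comparison

def promote_mutated_lets(wgsl: str) -> str:
    """Rewrite `let x = ...` to `var x = ...` when x is reassigned later.

    Crude single-pass heuristic: any later `x =` / `x +=` etc. after the
    declaration promotes the binding to `var`.
    """
    out = wgsl
    for m in re.finditer(r"\blet\s+(\w+)", wgsl):
        name = m.group(1)
        rest = wgsl[m.end():]  # everything after the declared name
        if re.search(rf"\b{name}\s*{_ASSIGN}", rest):
            out = re.sub(rf"\blet(\s+{name}\b)", r"var\1", out)
    return out
```

A negative lookahead keeps `t == 2.0` from being misread as an assignment, so pure comparisons don't trigger a promotion.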
55 generated shaders live in gallery/, each folder carrying:
- `shader.frag` (GLSL) or `shader.wgsl`
- `metadata.json` — theme, style hints, references used, review status
- `thumbnail.png` — 1080×1080 render
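Downstream tooling can walk that layout with a few lines of Python. This is a sketch: the JSON key names (`theme`, `review_status`) are assumptions about how `metadata.json` spells its fields.

```python
import json
from pathlib import Path

def scan_gallery(root: str = "gallery") -> list[dict]:
    """Collect every shader folder that carries a metadata.json."""
    entries = []
    for meta_path in sorted(Path(root).glob("*/metadata.json")):
        folder = meta_path.parent
        meta = json.loads(meta_path.read_text())
        entries.append({
            "folder": folder.name,
            "theme": meta.get("theme"),              # assumed key name
            "review_status": meta.get("review_status"),  # assumed key name
            # whichever shader source the folder carries
            "source": next((p.name for p in sorted(folder.iterdir())
                            if p.suffix in (".wgsl", ".frag")), None),
        })
    return entries
```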
Browse locally via the Stage WebUI:
cd stage && python -m server   # http://localhost:3333

# 1. Environment
cp .env.example .env
# edit .env: set TAVILY_API_KEY and point LLM_BASE_URL at your llama-server
# 2. Python deps
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# 3. naga-cli
cargo install naga-cli
# 4. Start your local llama-server (example)
llama-server -m /path/to/Qwen3-Coder-Next-UD-Q4_K_XL.gguf \
--port 8080 --host 0.0.0.0 -ngl 99 --ctx-size 16384 --flash-attn on
# 5. Generate a WGSL shader
python forge/cli.py --theme "aurora borealis" --target wgsl
# Output lands in gallery/YYYY-MM-DD_aurora-borealis/
# 6. Review WebUI
python stage/server.py   # open http://localhost:3333

Not included in this repo:

- `context/` — the RAG corpus (third-party Shadertoy/Bevy/wgpu code). Regenerate via `forge/fetch_shadertoy.py` and `forge/extract_repos.py`.
- `data/chromadb/`, `data/qlora_output/`, `data/training/` — reproducible build artifacts.
- `Web-RTRT/`, `WebGPU-Fluid-Simulation/`, `WebGPU-Lab/`, `webgpu-demo/`, `webgpu-native-examples/` — upstream reference projects; clone them separately if you want those samples available to the ingest pipeline.
- Model weights — download from Hugging Face.
- `DESIGN.md` — v1 (WebGL1 / GLSL ES 1.00) architecture
- `DESIGN_v2.md` — v2 migration to WebGPU / WGSL
- `CHANGELOG.md` — running log of experiments and key findings
- `WEBGPU_SHADER_REPOS.md` — survey of WGSL corpora
- @wrennly_dev — updates
- shabon-fx — 24 curated GLSL shaders that seeded the RAG corpus
- react-shadertoy — the renderer used for live preview
An experimental project that makes a shader-production pipeline autonomous using only a local LLM: trend scouting → RAG reference selection → WGSL/GLSL generation → naga validation → 6-axis LLM review → headless rendering → human curation, completed without external APIs (Tavily excepted).
The project is dormant partway through the WebGPU migration (Phase 3) while the author focuses on another project, but the design docs, code, and 55-piece gallery are preserved as a record of stepping into the blank space of standalone WGSL art shaders.
