Summary
Add an SVG output renderer to simular alongside the existing Canvas 2D (WASM) and TUI renderers. SVGs follow the Grid Protocol specification (16×9 grid, 1920×1080, element IDs on every <g>) for direct consumption by rmedia's native SVG producer in the resolve-pipeline video rendering system.
This enables course videos where students watch actual simulations execute — not static diagrams of algorithms, but the algorithms themselves running in real-time with physics-accurate motion.
Motivation
simular currently renders to:
- Canvas 2D (WASM) — interactive web demos at interactive.paiml.com
- TUI (ratatui) — terminal visualization
- Export (JSON/CSV/Parquet/Video frames) — data analysis
None of these produce SVG. The resolve-pipeline video system now has a native SVG rendering path (rmedia type="svg" producer via resvg) that eliminates ImageMagick and ffmpeg entirely. SVGs with element IDs can be animated per-frame via keyframe interpolation (paiml/rmedia#7).
The missing link: simular computes physically accurate trajectories but can't output them as renderable SVGs. Adding SVG output means:
- Course videos show real simulations — orbit demos, Monte Carlo convergence, gradient descent paths
- Zero manual animation — simular computes positions, SVG renderer emits frames, rmedia encodes
- Deterministic — same seed produces bit-identical SVG sequences across platforms
- Vector quality — resvg renders at any resolution without rasterization artifacts
Design
SVG Renderer trait
Extend the existing RenderCommand pattern (src/orbit/render.rs) with an SVG backend:
pub struct SvgRenderer {
    width: u32,         // 1920
    height: u32,        // 1080
    grid: GridProtocol, // 16×9, 120px cells
    elements: Vec<SvgElement>,
    frame: usize,
}

impl Renderer for SvgRenderer {
    fn clear(&mut self, color: Rgba) { ... }
    fn draw_circle(&mut self, id: &str, cx: f64, cy: f64, r: f64, color: Rgba) { ... }
    fn draw_line(&mut self, id: &str, x1: f64, y1: f64, x2: f64, y2: f64, color: Rgba) { ... }
    fn draw_rect(&mut self, id: &str, x: f64, y: f64, w: f64, h: f64, color: Rgba) { ... }
    fn draw_text(&mut self, id: &str, x: f64, y: f64, text: &str, size: f64, color: Rgba) { ... }
    fn draw_path(&mut self, id: &str, points: &[(f64, f64)], color: Rgba) { ... }
    fn finish_frame(&mut self) -> String { /* emit SVG string */ }
}
Every draw call takes an id parameter that becomes the SVG element's id attribute — enabling rmedia to target individual elements for animation overrides.
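A minimal sketch of how the renderer could accumulate draw calls and emit a frame. The trait shape above is from this proposal; the internals here (string fragments, the `new` constructor, the fill-color parameter as a hex string) are illustrative assumptions, not the final design:

```rust
// Hypothetical sketch: collect draw calls as SVG fragments, then wrap
// them in a complete SVG document on finish_frame. The real renderer
// would also emit the Grid Protocol manifest comment and <g> regions.
struct SvgRenderer {
    width: u32,
    height: u32,
    elements: Vec<String>,
}

impl SvgRenderer {
    fn new(width: u32, height: u32) -> Self {
        Self { width, height, elements: Vec::new() }
    }

    fn draw_circle(&mut self, id: &str, cx: f64, cy: f64, r: f64, fill: &str) {
        // Every element carries its id so rmedia can target it per-frame.
        self.elements.push(format!(
            r#"<circle id="{id}" cx="{cx}" cy="{cy}" r="{r}" fill="{fill}"/>"#
        ));
    }

    fn finish_frame(&mut self) -> String {
        let body = self.elements.join("\n  ");
        self.elements.clear();
        format!(
            "<svg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 {w} {h}\" width=\"{w}\" height=\"{h}\">\n  {body}\n</svg>",
            w = self.width,
            h = self.height,
        )
    }
}

fn main() {
    let mut r = SvgRenderer::new(1920, 1080);
    r.draw_circle("ball", 960.0, 540.0, 40.0, "#3b82f6");
    let svg = r.finish_frame();
    assert!(svg.contains(r#"id="ball""#));
    println!("{svg}");
}
```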
Output modes
Mode 1: Frame sequence (one SVG per simulation step)
output/frame_0001.svg
output/frame_0002.svg
...
output/frame_0180.svg
Each SVG is a complete Grid Protocol document with manifest comment. rmedia consumes them as sequential SVG producers in a playlist:
<producer id="frame_1" type="svg"><property name="resource">frame_0001.svg</property></producer>
<producer id="frame_2" type="svg"><property name="resource">frame_0002.svg</property></producer>
<playlist id="simulation">
  <entry producer="frame_1" in="0" out="0"/>
  <entry producer="frame_2" in="0" out="0"/>
  ...
</playlist>
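The driver loop for Mode 1 is straightforward. A sketch, where `render_frame` stands in for a real domain renderer (its body here is a placeholder, not simular's actual output):

```rust
use std::fs;
use std::path::Path;

// Placeholder for a domain renderer: stamps the frame number into a
// minimal SVG. The real renderer emits a full Grid Protocol document.
fn render_frame(frame: usize) -> String {
    format!(
        "<svg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 1920 1080\">\
         <text id=\"frame-counter\" x=\"60\" y=\"60\">{frame}</text></svg>"
    )
}

// Write one SVG per simulation step. Zero-padded names keep the
// playlist ordering stable: frame_0001.svg, frame_0002.svg, ...
fn write_frames(dir: &Path, frames: usize) -> std::io::Result<Vec<String>> {
    fs::create_dir_all(dir)?;
    let mut names = Vec::new();
    for frame in 1..=frames {
        let name = format!("frame_{frame:04}.svg");
        fs::write(dir.join(&name), render_frame(frame))?;
        names.push(name);
    }
    Ok(names)
}

fn main() -> std::io::Result<()> {
    let names = write_frames(Path::new("/tmp/orbit-frames"), 3)?;
    assert_eq!(names, ["frame_0001.svg", "frame_0002.svg", "frame_0003.svg"]);
    Ok(())
}
```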
Mode 2: Template + keyframes (one SVG + animation data)
Emit a single SVG with all elements at their initial positions, plus a keyframes JSON file:
output/scene.svg # Grid Protocol SVG with element IDs
output/keyframes.json # Per-frame property values
{
  "fps": 60,
  "duration_frames": 180,
  "elements": {
    "ball": {
      "cy": [100.0, 104.2, 116.8, 137.8, ...],
      "r": [40.0, 40.0, 40.0, ..., 32.0, ...]
    },
    "shadow": {
      "opacity": [0.1, 0.12, 0.18, ...]
    },
    "energy-text": {
      "text": ["-3.54e-01", "-3.54e-01", ...]
    }
  }
}
resolve-pipeline reads the keyframes and emits animate.* properties on the SVG producer (paiml/rmedia#7). This is more efficient than frame-per-SVG for simple animations where only a few attributes change per frame.
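A sketch of the recorder that could accumulate these tracks during the simulation run. The `Keyframes` type and its hand-rolled serializer are assumptions for illustration (avoiding a serde dependency); the target format is the keyframes.json above:

```rust
use std::collections::BTreeMap;

// Hypothetical keyframe recorder for Mode 2: per element id, per
// property, one numeric value per frame. BTreeMap keeps output
// ordering deterministic across runs.
#[derive(Default)]
struct Keyframes {
    fps: u32,
    tracks: BTreeMap<String, BTreeMap<String, Vec<f64>>>,
}

impl Keyframes {
    fn record(&mut self, element: &str, property: &str, value: f64) {
        self.tracks
            .entry(element.to_string())
            .or_default()
            .entry(property.to_string())
            .or_default()
            .push(value);
    }

    // Minimal hand-rolled JSON emitter (numeric tracks only).
    fn to_json(&self, duration_frames: usize) -> String {
        let elements = self
            .tracks
            .iter()
            .map(|(id, props)| {
                let props = props
                    .iter()
                    .map(|(p, vals)| {
                        let vals: Vec<String> =
                            vals.iter().map(|v| v.to_string()).collect();
                        format!("\"{p}\": [{}]", vals.join(", "))
                    })
                    .collect::<Vec<_>>()
                    .join(", ");
                format!("\"{id}\": {{{props}}}")
            })
            .collect::<Vec<_>>()
            .join(", ");
        format!(
            "{{\"fps\": {}, \"duration_frames\": {duration_frames}, \"elements\": {{{elements}}}}}",
            self.fps
        )
    }
}

fn main() {
    let mut kf = Keyframes { fps: 60, ..Default::default() };
    for frame in 0..3 {
        kf.record("ball", "cy", 100.0 + 4.0 * frame as f64);
    }
    println!("{}", kf.to_json(3));
}
```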
Mode 3: Hybrid (static background SVG + animated overlay keyframes)
For complex scenes with both static and dynamic elements:
output/background.svg # Static grid layout (panels, labels, axes)
output/overlay.svg # Dynamic elements only (balls, particles, cursors)
output/keyframes.json # Animation data for overlay elements
rmedia composites the static background with the animated overlay via multitrack.
Grid Protocol compliance
The SVG renderer outputs valid Grid Protocol SVGs:
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1920 1080" width="1920" height="1080">
  <!-- GRID PROTOCOL MANIFEST
       Canvas: 1920×1080 | Grid: 16×9 | Cell: 120px
       Step 1: "title-bar"  (0,0)→(15,0)  0,0 → 1920,120
       Step 2: "sim-canvas" (1,1)→(11,7)  120,120 → 1440,960
       Step 3: "metrics"    (12,1)→(15,7) 1440,120 → 1920,960
       Step 4: "status-bar" (0,8)→(15,8)  0,960 → 1920,1080
       TOTAL: 137/144 cells | 0 overlaps ✓
  -->
  <g id="title-bar">...</g>
  <g id="sim-canvas">
    <g id="ball">...</g>
    <g id="trail">...</g>
    <g id="grid-lines">...</g>
  </g>
  <g id="metrics">
    <g id="energy-display">...</g>
    <g id="time-display">...</g>
    <g id="jidoka-status">...</g>
  </g>
  <g id="status-bar">...</g>
</svg>
- Dark palette from Grid Protocol (canvas #0f172a, panels #1e293b)
- 18px minimum font size
- 4.5:1 WCAG AA contrast
- Manifest comment with cell allocation proof
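The cell arithmetic behind the manifest is simple to automate. A sketch, with a hypothetical `Region` type (inclusive cell rectangles, 120px cells):

```rust
// Grid Protocol cell arithmetic: a 16×9 grid of 120px cells on a
// 1920×1080 canvas. Each region is an inclusive cell rectangle; the
// manifest line records both cell extents and pixel extents.
const CELL: u32 = 120;

#[derive(Clone, Copy)]
struct Region {
    col0: u32, row0: u32, // inclusive top-left cell
    col1: u32, row1: u32, // inclusive bottom-right cell
}

impl Region {
    // Pixel rect: top-left corner, and one past the bottom-right cell.
    fn pixel_rect(&self) -> (u32, u32, u32, u32) {
        (self.col0 * CELL, self.row0 * CELL,
         (self.col1 + 1) * CELL, (self.row1 + 1) * CELL)
    }

    fn cells(&self) -> u32 {
        (self.col1 - self.col0 + 1) * (self.row1 - self.row0 + 1)
    }

    fn manifest_line(&self, step: usize, id: &str) -> String {
        let (x0, y0, x1, y1) = self.pixel_rect();
        format!(
            "Step {step}: \"{id}\" ({},{})→({},{}) {x0},{y0} → {x1},{y1}",
            self.col0, self.row0, self.col1, self.row1
        )
    }
}

fn main() {
    let sim_canvas = Region { col0: 1, row0: 1, col1: 11, row1: 7 };
    assert_eq!(sim_canvas.pixel_rect(), (120, 120, 1440, 960));
    assert_eq!(sim_canvas.cells(), 77);
    println!("{}", sim_canvas.manifest_line(2, "sim-canvas"));
}
```

Summing `cells()` over all regions, and checking that no two pixel rects intersect, yields the `TOTAL: n/144 cells | 0 overlaps` proof line.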
Coordinate mapping
simular simulations use domain-specific units (AU, meters, normalized [0,1]). The SVG renderer maps these to pixel coordinates within the allocated grid cells:
impl SvgRenderer {
    /// Map simulation coordinates to SVG pixel coordinates
    fn map_coords(&self, sim_x: f64, sim_y: f64) -> (f64, f64) {
        let canvas = &self.grid.regions["sim-canvas"]; // e.g., 120,120 → 1440,960
        let px = canvas.x + (sim_x - self.bounds.x_min) / self.bounds.x_range * canvas.width;
        let py = canvas.y + (sim_y - self.bounds.y_min) / self.bounds.y_range * canvas.height;
        (px, py)
    }
}
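A standalone, runnable version with illustrative bounds. One caveat worth capturing in the implementation: SVG's y axis grows downward, so domains where "up" is positive (orbits, trajectories) likely need a y flip, which this sketch includes as an assumption:

```rust
// Runnable variant of map_coords with illustrative bounds. SVG y grows
// downward, so this sketch flips y: sim y_min lands at the bottom of
// the region rather than the top.
struct Bounds { x_min: f64, x_range: f64, y_min: f64, y_range: f64 }
struct Canvas { x: f64, y: f64, width: f64, height: f64 }

fn map_coords(b: &Bounds, c: &Canvas, sim_x: f64, sim_y: f64) -> (f64, f64) {
    let px = c.x + (sim_x - b.x_min) / b.x_range * c.width;
    let py = c.y + (1.0 - (sim_y - b.y_min) / b.y_range) * c.height;
    (px, py)
}

fn main() {
    // A ±2 AU view mapped into the "sim-canvas" region (120,120 → 1440,960).
    let bounds = Bounds { x_min: -2.0, x_range: 4.0, y_min: -2.0, y_range: 4.0 };
    let canvas = Canvas { x: 120.0, y: 120.0, width: 1320.0, height: 840.0 };
    // The origin (e.g. the Sun) lands at the region's center.
    assert_eq!(map_coords(&bounds, &canvas, 0.0, 0.0), (780.0, 540.0));
}
```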
Per-Domain SVG Visualizations
Orbit simulation
- Sun (yellow circle, center), Earth (blue circle, animated position)
- Orbit trail (polyline/path, growing over time)
- Grid circles (1 AU, 2 AU reference orbits)
- Metrics panel: energy drift, angular momentum, simulated time
- Jidoka status indicator (green/yellow/red)
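For a sense of the per-frame shape, a toy position function for a circular 1 AU orbit. This is illustrative only: the real engine integrates gravity rather than parameterizing the orbit, but the frame loop (compute position, emit `<circle id="earth">`, extend the trail path) is the same:

```rust
use std::f64::consts::TAU;

// Illustrative stand-in for the integrator: Earth's position on a
// circular 1 AU orbit, given the rendered frame, fps, and the orbit's
// on-screen period in seconds.
fn earth_position(frame: usize, fps: f64, period_s: f64) -> (f64, f64) {
    let theta = TAU * (frame as f64 / fps) / period_s;
    (theta.cos(), theta.sin()) // simulation units: AU
}

fn main() {
    // Frame 0: Earth starts at (1 AU, 0).
    let (x0, y0) = earth_position(0, 60.0, 10.0);
    assert!((x0 - 1.0).abs() < 1e-12 && y0.abs() < 1e-12);
    // Quarter period: 150 frames at 60 fps of a 10 s orbit → (0, 1 AU).
    let (x, y) = earth_position(150, 60.0, 10.0);
    assert!(x.abs() < 1e-9 && (y - 1.0).abs() < 1e-9);
}
```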
Monte Carlo pi estimation
- Unit square with inscribed circle (grid-aligned)
- Points appearing one-by-one (or in batches): red outside, blue inside
- Running pi estimate as growing text counter
- Convergence chart: estimated pi vs samples (line growing rightward)
- Confidence interval band narrowing over time
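The determinism requirement (same seed → bit-identical SVG sequences) suggests a simple seeded generator rather than platform RNGs. A sketch using an LCG with Knuth's MMIX constants (the choice of generator is an assumption, not simular's actual RNG):

```rust
// Deterministic point stream for the pi demo: a seeded LCG so the same
// seed yields the same point sequence — and thus the same SVGs — on
// every platform. Constants are from Knuth's MMIX LCG.
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Take the top 53 bits for a uniform value in [0, 1).
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

fn estimate_pi(seed: u64, samples: usize) -> f64 {
    let mut rng = Lcg(seed);
    let mut inside = 0usize;
    for _ in 0..samples {
        let (x, y) = (rng.next_f64(), rng.next_f64());
        // Inside the inscribed quarter circle → blue point; outside → red.
        if x * x + y * y <= 1.0 {
            inside += 1;
        }
    }
    4.0 * inside as f64 / samples as f64
}

fn main() {
    let a = estimate_pi(42, 100_000);
    let b = estimate_pi(42, 100_000);
    assert_eq!(a, b); // same seed, bit-identical estimate
    assert!((a - std::f64::consts::PI).abs() < 0.1);
    println!("pi ≈ {a}");
}
```

The running estimate after each batch is exactly what the growing text counter and convergence chart would display.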
Optimization (Bayesian/GRASP)
- 2D contour plot of objective function (static background)
- Evaluated points appearing as circles (color = objective value)
- Acquisition function overlay (updating per iteration)
- Next evaluation point highlighted (pulsing animation)
- Current best marked with star
Gradient descent
- Loss landscape as contour/heatmap (static background SVG)
- Parameter point moving along computed trajectory
- Trail showing optimization path
- Loss value text updating per step
- Learning rate indicator
- Convergence region highlighted
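The trail data is just the sequence of iterates. A sketch on a quadratic bowl (the objective and learning rate are illustrative; the real trajectory comes from simular's optimizer):

```rust
// Sketch of the gradient-descent trail: run plain gradient descent on
// f(x, y) = x² + 3y² and collect the per-step points that the renderer
// would emit as a growing <path> plus a moving parameter marker.
fn grad(x: f64, y: f64) -> (f64, f64) {
    (2.0 * x, 6.0 * y) // ∇f for the illustrative loss landscape
}

fn descend(mut x: f64, mut y: f64, lr: f64, steps: usize) -> Vec<(f64, f64)> {
    let mut trail = vec![(x, y)];
    for _ in 0..steps {
        let (gx, gy) = grad(x, y);
        x -= lr * gx;
        y -= lr * gy;
        trail.push((x, y)); // one trail point per rendered frame
    }
    trail
}

fn main() {
    let trail = descend(4.0, 4.0, 0.1, 50);
    assert_eq!(trail.len(), 51);
    let (x, y) = *trail.last().unwrap();
    assert!(x.abs() < 1e-3 && y.abs() < 1e-3); // converged near the minimum
}
```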
ML training loop
- Loss curve: y-axis = loss, x-axis = epoch, line drawing itself rightward
- Accuracy bars growing upward per epoch
- Early stopping indicator (vertical line when triggered)
- Gradient norm panel
- Jidoka anomaly markers (NaN detection, gradient explosion)
Integration with resolve-pipeline
Pipeline flow
simular --svg-output scene.svg --keyframes keyframes.json --domain orbit --seed 42
↓
resolve-pipeline reads keyframes, generates MLT XML with animate.* properties
↓
rmedia renders SVG frames with interpolated positions → H.264 video
Course config integration
-- config/db_mlops_c3.lua
lesson = {
  title = "Monte Carlo Methods",
  svg = "explode-projects/db-mlops-c3/monte-carlo.svg",
  simulation = {
    engine = "simular",
    domain = "monte_carlo",
    seed = 42,
    params = { samples = 10000, batch_size = 100 },
    output = "keyframes", -- mode 2: template + keyframes
  },
}
CLI interface
# Generate SVG frame sequence for orbit simulation
simular render --domain orbit \
  --format svg-frames \
  --output /tmp/orbit-frames/ \
  --fps 60 --duration 10.0 \
  --width 1920 --height 1080 \
  --seed 42

# Generate template SVG + keyframes JSON
simular render --domain monte-carlo \
  --format svg-keyframes \
  --output /tmp/monte-carlo/ \
  --fps 60 --duration 15.0 \
  --params samples=10000,batch_size=100 \
  --seed 42
Priority
- SVG renderer trait — platform-agnostic draw commands → SVG string
- Orbit domain SVG — port existing Canvas renderer to SVG output
- Keyframes JSON export — template + animation data format
- Monte Carlo SVG — points appearing, convergence chart
- Optimization SVG — contour plot + trajectory
- Grid Protocol manifest generation — automated cell allocation proof
References
- SVG Grid Protocol: resolve-pipeline docs/specifications/svg-grid-protocol.md
- rmedia SVG producer: paiml/rmedia#6 (element control)
- rmedia keyframe interpolation: paiml/rmedia#7
- Existing render commands: src/orbit/render.rs (RenderCommand enum)
- Existing Canvas renderer: src/demos/orbit_wasm_app.rs
- resolve-pipeline native SVG path: scripts/rmedia_text_lib.lua