feat: MarkdownNode, frame render strategy, export optimization, embed API, AI model updates (#626)
softmarshmallow merged 15 commits into main
Conversation
…oved performance

- Added a new `FrameRenderStrategy` struct to centralize per-frame rendering policy decisions, reducing code duplication and improving maintainability.
- Implemented logic to compute rendering strategies based on zoom level, GPU availability, and other conditions, enhancing rendering efficiency.
- Updated the `Renderer` to utilize the new strategy, streamlining decision-making during frame rendering.
- Documented the new strategy and its usage in the optimization guide, highlighting the benefits of eager rendering at high zoom levels.
Caution: Review failed — the pull request is closed.
ℹ️ Recent review info — Configuration used: Organization UI; Review profile: CHILL; Plan: Pro.
📒 Files selected for processing (8)
Walkthrough

This PR adds a MarkdownEmbed node (schema, IO, layout, painter, editor, DOM), centralizes per-frame rendering policy via FrameRenderStrategy, implements per-node image export and export benchmarking, and migrates AI rate-limiting from credits to a budget (mills) model; it also extends the embed protocol with export/node-path APIs and updates docs/tools for AI models.
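The credits→budget migration above bills in "mills". This excerpt does not define the unit, so the sketch below assumes the common billing convention of 1 mill = 1/1000 USD; the helper names are illustrative, not the editor's actual API.

```typescript
// Assumed convention: 1 mill = USD 0.001 (not defined in this excerpt).
const USD_PER_MILL = 0.001;

// Convert a provider's USD price into integer mills for budget deduction.
function usdToMills(usd: number): number {
  // Round to the nearest mill so float noise (e.g. 0.06 / 0.001) doesn't leak in.
  return Math.round(usd / USD_PER_MILL);
}

// Check-then-deduct against an integer balance; returns the new balance,
// or null when the budget cannot cover the cost.
function deduct(balanceMills: number, costMills: number): number | null {
  return balanceMills >= costMills ? balanceMills - costMills : null;
}
```

Keeping balances in integer mills sidesteps the floating-point drift that raw USD arithmetic would accumulate.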
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Editor as Editor UI
    participant EmbedSDK as Embed SDK
    participant EmbedBridge as Embed Bridge
    participant WasmScene as WASM Scene
    User->>Editor: Click "Export Node"
    Editor->>EmbedSDK: exportNode(nodeId, format)
    EmbedSDK->>EmbedSDK: generateRequestId()
    EmbedSDK->>EmbedBridge: postMessage({type: "grida:export", requestId, nodeId, format})
    EmbedBridge->>WasmScene: exportNodeAs(nodeId, format)
    WasmScene->>WasmScene: export_node_image(subtree)
    WasmScene-->>EmbedBridge: ArrayBuffer (transferable)
    EmbedBridge->>EmbedSDK: postMessage({type: "grida:export-result", data, requestId}, [data])
    EmbedSDK->>Editor: resolve Promise with data
    Editor->>Editor: download(Blob)
```
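The `generateRequestId()` / resolve-by-`requestId` handshake in the diagram can be sketched as a small pending-request registry. The class below is an illustrative stand-in for the SDK's internal bookkeeping, not its real API:

```typescript
// Correlates outgoing "grida:export" commands with incoming
// "grida:export-result" events by requestId.
class PendingRequests<T> {
  private seq = 0;
  private pending = new Map<string, (value: T) => void>();

  // Register a resolver and get the requestId to attach to the outgoing message.
  create(resolve: (value: T) => void): string {
    const requestId = `req-${++this.seq}`;
    this.pending.set(requestId, resolve);
    return requestId;
  }

  // Called by the message listener; returns false for unknown/duplicate ids,
  // so stray or replayed results are ignored instead of resolving twice.
  settle(requestId: string, value: T): boolean {
    const resolve = this.pending.get(requestId);
    if (!resolve) return false;
    this.pending.delete(requestId);
    resolve(value);
    return true;
  }

  get size(): number {
    return this.pending.size;
  }
}

// exportNode(nodeId, format) would wrap this in a Promise, roughly:
//   new Promise((resolve) =>
//     post({ type: "grida:export", requestId: registry.create(resolve), nodeId, format }))
```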
```mermaid
sequenceDiagram
    participant Renderer
    participant FrameStrategy
    participant SceneCache
    participant ImageCache
    participant Compositor
    Renderer->>FrameStrategy: compute(zoom, is_gpu, stable, camera_change, config, has_pan_cache, has_zoom_cache)
    FrameStrategy->>FrameStrategy: derive policy flags
    FrameStrategy-->>Renderer: FrameRenderStrategy
    alt use_image_caches = true
        Renderer->>ImageCache: query pan/zoom fast paths
        ImageCache-->>Renderer: cached image / miss
    else
        Renderer->>SceneCache: full scene rasterization
        SceneCache-->>Renderer: raster result
    end
    alt use_compositor = true
        Renderer->>Compositor: update layer composition
        Compositor-->>Renderer: composite result
    else
        Renderer->>SceneCache: raster fallback
    end
    Renderer->>Renderer: decide can_defer_plan based on strategy
```
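The "derive policy flags" step can be sketched as a pure function over the diagram's inputs. The thresholds and rules below are illustrative assumptions (the actual policy lives in `FrameRenderStrategy` in the Rust crate), but they show the shape of the decision:

```typescript
interface FrameInputs {
  zoom: number;
  isGpu: boolean;
  cameraChanged: boolean; // camera moved since the last frame
  hasPanCache: boolean;
  hasZoomCache: boolean;
}

interface FrameRenderStrategySketch {
  useImageCaches: boolean;
  useCompositor: boolean;
  canDeferPlan: boolean;
  eagerRender: boolean;
}

// Hypothetical threshold: the PR notes eager rendering at high zoom levels,
// but does not state the cutoff in this excerpt.
const HIGH_ZOOM = 4;

function compute(i: FrameInputs): FrameRenderStrategySketch {
  // Cached-image fast paths only pay off mid-gesture with a warm cache.
  const useImageCaches = i.cameraChanged && (i.hasPanCache || i.hasZoomCache);
  // Assume layer compositing is only worthwhile on a GPU backend.
  const useCompositor = i.isGpu;
  // A frame served from an image cache does not need a fresh draw plan.
  const canDeferPlan = useImageCaches;
  const eagerRender = i.zoom >= HIGH_ZOOM;
  return { useImageCaches, useCompositor, canDeferPlan, eagerRender };
}
```

Centralizing the flags in one pure function is what makes the policy testable in isolation, which is the stated motivation for the struct.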
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Suggested labels: canvas, markdown, export, ai-models, budget-system, embed, rendering, documentation
🚥 Pre-merge checks: ✅ 3 passed.
(#627)

* feat(canvas): unify MarkdownEmbed to use htmlcss render pipeline

  Replace the standalone 877-line markdown→Skia renderer (painter/markdown.rs) with the shared htmlcss pipeline: markdown is converted to HTML via pulldown-cmark, wrapped with grida-markdown.css, and rendered through Stylo CSS → Taffy layout → Skia paint — the same path as HTMLEmbedNode.

  Key changes:
  - Add `markdown_to_styled_html()` that converts GFM markdown to a self-contained HTML document with embedded CSS
  - Add `grida-markdown.css` fixture (element-selector-only stylesheet targeting pulldown-cmark's bare HTML output)
  - Add faux-table layout module: emulates CSS table via Taffy flex (table→flex-column, tr→flex-row, td→equal-width flex children)
  - Auto-height on resize: MarkdownEmbed and HTMLEmbed nodes re-measure content height when width changes in grida-dev
  - Measure content height at load time instead of using 100,000px
  - Rename MarkdownNodeRec → MarkdownEmbedNodeRec to align with HTMLEmbedNodeRec naming pattern
  - Delete painter/markdown.rs (877 lines removed)

* fmt

* feat(canvas): add MarkdownEmbedNode with FBS schema, editor commands, and .md file drops

  Introduces a first-class MarkdownEmbed node type that renders GFM markdown directly on the canvas via the existing Skia-backed markdown painter. Prior to this, the AI tool make_from_markdown fell back to creating a plain TextSpanNode, so styled markdown never actually made it to the renderer.

  - format/grida.fbs: add MarkdownEmbedNode + MarkdownEmbedNodeProperties tables, append to Node union and NodeType enum
  - packages/grida-canvas-schema: add MarkdownEmbedNode interface and wire into LayerNode / ComputedNode / NodePrototype unions
  - crates/grida-canvas: JSON bridge + FBS decoder for MarkdownEmbedNode
  - packages/grida-canvas-io: TS FBS encoder for markdown_embed
  - editor: createMarkdownEmbedNode command, initial-node factory case, DOM renderer background fill mapping
  - ai/tools/canvas-use: make_from_markdown now creates MarkdownEmbedNode
  - data-transfer: insertMarkdown handler + .md file drop/paste support (text/markdown MIME type added to ValidFileType)

* fix(schema): add markdown_embed case to prototype-to-node factory

  create_packed_scene_document_from_prototype threw "Unsupported node prototype type: markdown_embed" when the AI tool or .md file drop tried to insert a markdown embed node. Adds the case alongside the other simple scalar prototypes (iframe, richtext, tspan, etc.).

* fix(cg): add MarkdownEmbed arm to tool_io_grida example classifier

  CI cargo build with --examples failed with non-exhaustive match in examples/tool_io_grida.rs classify_json_node after MarkdownEmbed variant was added to JSONNode.

* fix(markdown-embed): address code review — close encode/decode gaps and edge cases

  Addresses CodeRabbit review findings on PR #627.

  Critical (prevents data loss on save/reopen):
  - Add Rust FBS encoder (encode_markdown_embed_node) so Node::MarkdownEmbed no longer falls through to the UnknownNode fallback in encode_node.
  - Add TS FBS decoder (nodeTypes.markdownEmbed) + fromFlatbuffer match arm so MarkdownEmbedNode round-trips through @grida/io without being reconstructed as a group.

  DOM renderer safety:
  - Add MarkdownEmbedWidget as a simple DOM fallback (monospace pre-wrap block) and wire markdown_embed into NodeElement's renderer switch. The canonical render path is still canvas/Skia — this just prevents "Unknown node type" in DOM-rendered subtrees (previews, tests).

  .md file detection:
  - filetype() now lowercases MIME + extension and checks extension FIRST, so .md files still resolve as text/markdown when browsers serve them as text/plain, application/octet-stream, or with empty MIME. Also accepts .MD, .markdown, .mdown, .mkd extensions.
  - File name trimming on drop uses a case-insensitive regex to strip trailing .md/.markdown/.mdown/.mkd instead of a brittle split(".md").

* fmt
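The conversion step described above — GFM in, self-contained styled HTML out — can be sketched as a thin wrapping function. `mdToHtml` here is a stand-in for the pulldown-cmark conversion; this sketch only shows the wrapping, not a real GFM parser:

```typescript
// Wraps converted markdown into a self-contained HTML document with the
// stylesheet inlined, so the htmlcss pipeline (Stylo → Taffy → Skia) needs
// no external fetches.
function markdownToStyledHtml(
  markdown: string,
  css: string,
  mdToHtml: (md: string) => string
): string {
  const body = mdToHtml(markdown);
  return `<!DOCTYPE html><html><head><style>${css}</style></head><body>${body}</body></html>`;
}
```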
ViewportCull::from_plan allocated and populated a VisibilitySet bitset of every visible leaf id on every full-draw frame. At fit-zoom on large scenes (135K-node fixture, dashboards, pasteboards) every leaf is visible, so the bitset is O(N) work whose answer is always "true".

Change the cull to carry an Option<VisibilitySet>: None represents the "all visible" case and is_leaf_visible short-circuits to true. Detect the case by comparing live region count + promoted count against the LayerList length (regions and promoted are disjoint — see the region/promoted partitioning in draw_layers_with_scene_cache_skip).

Criterion (CPU raster, bench_viewport_culling):

- 50k/all_visible: 24.678ms -> 24.077ms, -2.44% (p=0.00)
- 5k/all_visible: 3.881ms -> 3.818ms, -1.76% (p=0.00)
- partial/corner/zoomed_in/empty: no regression (p > 0.16)

The fast path only fires when every leaf is visible; culled frames pay one extra Option match branch per leaf, which the benchmarks show is below noise. Savings scale linearly with node count, targeting the view-only fit-zoom path on large scenes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
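Translated to TypeScript for illustration (with `null` standing in for Rust's `Option::None`), the shape of the change above is roughly:

```typescript
class ViewportCullSketch {
  // null means "every leaf is visible" — no set is built at all.
  private constructor(private visibleLeaves: Set<number> | null) {}

  static fromPlan(
    liveCount: number,
    promotedCount: number,
    totalLeaves: number,
    buildVisible: () => Set<number>
  ): ViewportCullSketch {
    // Regions and promoted are disjoint, so their counts sum; when the sum
    // covers every leaf, skip the O(N) set construction entirely.
    if (liveCount + promotedCount >= totalLeaves) {
      return new ViewportCullSketch(null);
    }
    return new ViewportCullSketch(buildVisible());
  }

  isLeafVisible(id: number): boolean {
    // One extra branch on culled frames; a constant-time "true" otherwise.
    return this.visibleLeaves === null || this.visibleLeaves.has(id);
  }
}
```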
- Introduced `exportNode` method to export a single node as an image, PDF, or SVG, returning the raw bytes or null on failure.
- Added `getNodeIdPath` method to retrieve the structural ancestry path of a node, returning an array of node IDs or null if the node does not exist.
- Updated the embed protocol to include new command types for export and node ID path retrieval, along with corresponding event responses.
- Enhanced the Embed Debug Page UI to support export options and display node ID paths, improving user interaction and functionality.
perf(canvas): skip ViewportCull bitset build when all leaves visible
…rties

- Updated MarkdownEmbedNode to use explicit width and height properties instead of a fixed size, allowing for auto-sizing based on content.
- Refactored related functions and structures to accommodate the new properties, ensuring proper measurement and rendering of markdown content.
- Introduced a new measurement context for markdown embeds to facilitate dynamic sizing during layout.
- Adjusted the factory and scene graph to support the new properties, improving the overall handling of markdown nodes in the canvas.
…te related functions

- Renamed `MarkdownEmbedNode` to `MarkdownNode` across the codebase for consistency and clarity.
- Updated the `createMarkdownEmbedNode` function to `createMarkdownNode` and adjusted its implementation to reflect the new naming.
- Modified related rendering and processing logic to accommodate the new node type, ensuring proper handling of markdown content.
- Enhanced layout properties for markdown nodes, allowing for improved sizing and rendering behavior.
…h tooling

Image exports now render only the target subtree using the live renderer's warm SceneCache instead of cloning the entire scene and rebuilding all caches from scratch. On a 136K-node scene, leaf node export drops from ~560ms to <0.1ms (native), eliminating the O(N) load_scene overhead.

- Add Renderer::export_node_image() — renders subtree via LayerList::from_node with the warm cache (O(subtree) cost)
- Wire ApplicationApi::export_node_as to use the new path for all image formats (PNG/JPEG/WEBP/BMP); PDF/SVG retain the legacy path
- Make ExportSize::apply_constraints public
- Document opacity semantics: ancestor opacity is not propagated on export (node renders in isolation, matching Figma/Sketch behavior)
- Add grida-dev export-bench subcommand with per-stage timing, baseline vs optimized A/B comparison, and pixel-exact diffing
- Add Criterion bench_export benchmark (synthetic grid/nested scenes)
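The O(subtree) claim above hinges on walking only the exported node's descendants rather than reloading the whole scene. A toy illustration of that cost model (the `SceneNode` shape is illustrative, not the engine's real type):

```typescript
interface SceneNode {
  id: string;
  children: SceneNode[];
}

// Visits only the target subtree — cost scales with its size, not with the
// 136K nodes of the whole scene.
function collectSubtreeIds(root: SceneNode): string[] {
  const out: string[] = [];
  const stack: SceneNode[] = [root];
  while (stack.length > 0) {
    const node = stack.pop()!;
    out.push(node.id);
    for (const child of node.children) stack.push(child);
  }
  return out;
}
```

Exporting a leaf therefore touches one node regardless of scene size, which is why the measured time collapses from ~560ms to sub-millisecond.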
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e96db2251c
```ts
const result = surface.exportNodeAs(
  cmd.nodeId,
  toWasmExportAs(cmd.format)
```
Map host IDs back before embed node lookups
When __dangerously_transform_node_id is enabled (used by the Figma embed), outgoing events expose transformed IDs, but this command path forwards cmd.nodeId directly into WASM lookup APIs. For cloned/synthetic Figma nodes, the transformed ID often does not exist in the scene graph, so grida:export (and the analogous grida:get-node-id-path branch) returns null even for a selected node. Add a reverse mapping step for incoming node IDs before calling exportNodeAs/getNodeIdPath so host-visible IDs remain round-trippable.
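The reverse-mapping step this review asks for amounts to keeping a bidirectional id map. A minimal sketch, with illustrative names (the real embed keeps its own transform state):

```typescript
class NodeIdMapper {
  private toHost = new Map<string, string>();
  private toInternal = new Map<string, string>();

  register(internalId: string, hostId: string): void {
    this.toHost.set(internalId, hostId);
    this.toInternal.set(hostId, internalId);
  }

  // Outgoing events: expose the transformed, host-visible id.
  resolveOutgoing(internalId: string): string {
    return this.toHost.get(internalId) ?? internalId;
  }

  // Incoming commands (exportNodeAs, getNodeIdPath): translate back so the
  // WASM lookup receives an id that actually exists in the scene graph.
  resolveIncoming(hostId: string): string {
    return this.toInternal.get(hostId) ?? hostId;
  }
}
```

Passing through unmapped ids keeps the non-transformed embed path working unchanged.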
```rust
let (local_transform, width, height) = if let Some(l) = layout {
    let width = l.width.max(MIN_SIZE);
    let height = l.height.max(MIN_SIZE);
```
There was a problem hiding this comment.
Treat zero markdown layout as unresolved in skip-layout mode
This branch trusts any layout result for markdown nodes, but skip_layout mode computes schema-only sizes where auto-sized markdown stores width/height as 0. As a result, auto markdown nodes are clamped to ~1px instead of being measured from content, so they effectively collapse whenever runtime_renderer_set_skip_layout(true) is used. The markdown path should detect this schema-only zero case and fall back to content measurement (or schema width defaults) rather than accepting 0 as a resolved layout.
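The fallback this review describes could look roughly like the following; the function name and the content-measurement hook are hypothetical:

```typescript
// Hypothetical fallback: a 0x0 layout in skip-layout mode means "auto",
// so measure content instead of clamping to the minimum size.
function resolveMarkdownSize(
  layout: { width: number; height: number } | null,
  measureContent: () => { width: number; height: number },
  minSize = 1
): { width: number; height: number } {
  if (!layout || layout.width === 0 || layout.height === 0) {
    return measureContent();
  }
  return {
    width: Math.max(layout.width, minSize),
    height: Math.max(layout.height, minSize),
  };
}
```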
Actionable comments posted: 19
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
crates/grida-canvas/src/export/mod.rs (1)
56-75: ⚠️ Potential issue | 🟡 Minor — Guard zero-size aspect-ratio math in public `apply_constraints`.

Line 68 and line 71 can produce `inf`/`NaN` when `self.width == 0.0` or `self.height == 0.0`. Now that this helper is public, this edge case should be handled defensively.

Proposed fix:

```diff
 impl ExportSize {
     pub fn apply_constraints(&self, constraints: &ExportConstraints) -> Self {
         match constraints {
             ExportConstraints::None => Self {
                 width: self.width,
                 height: self.height,
             },
             ExportConstraints::Scale(scale) => Self {
                 width: self.width * scale,
                 height: self.height * scale,
             },
-            ExportConstraints::ScaleToWidth(width) => Self {
-                width: *width as f32,
-                height: self.height * *width as f32 / self.width,
-            },
-            ExportConstraints::ScaleToHeight(height) => Self {
-                width: self.width * *height as f32 / self.height,
-                height: *height as f32,
-            },
+            ExportConstraints::ScaleToWidth(width) => {
+                let target_w = *width as f32;
+                let target_h = if self.width > 0.0 {
+                    self.height * target_w / self.width
+                } else {
+                    0.0
+                };
+                Self { width: target_w, height: target_h }
+            }
+            ExportConstraints::ScaleToHeight(height) => {
+                let target_h = *height as f32;
+                let target_w = if self.height > 0.0 {
+                    self.width * target_h / self.height
+                } else {
+                    0.0
+                };
+                Self { width: target_w, height: target_h }
+            }
         }
     }
 }
```
editor/grida-canvas/reducers/node-transform.reducer.ts (1)
197-216: ⚠️ Potential issue | 🟠 Major — Keep the `resize` path in sync for markdown nodes.

`scale` now treats "markdown" as content-driven, but the `resize` branch below still special-cases only "tspan". Any flow that dispatches `type: "resize"` will still materialize a fixed height for auto-height markdown nodes and reintroduce the clipping this change is trying to avoid.

editor/app/(api)/private/ai/image/actions.ts (1)
22-43: ⚠️ Potential issue | 🟠 Major — `validateAuthAndRateLimit()` now charges before the work succeeds.

Both image actions call this before the external model invocation. Any provider-side failure burns budget and returns 500, which is hard to unwind for the user. Keep the auth check here, but move budget settlement after success or add reservation/refund semantics.

crates/grida-dev/src/main.rs (1)
401-417: ⚠️ Potential issue | 🟠 Major — Wire markdown import into `load_scenes_from_source()` as well.

`scene_from_markdown_embed_path()` is only reachable from `load_master_scenes_from_path()`. `run_interactive(file)` and every bench command still use `load_scenes_from_source()`, which falls through to `grida_file::decode_all()` for `.md` inputs and will reject raw markdown.

Minimal follow-up outside this hunk:

```diff
 match ext.to_ascii_lowercase().as_str() {
     "svg" => return scene_from_svg_path(path).map(|s| vec![s]),
     "html" | "htm" => return scene_from_html_path(path).map(|s| vec![s]),
+    "md" | "markdown" => {
+        return scene_from_markdown_embed_path(path).map(|s| vec![s])
+    }
     // Raster images should be loaded via load_raster() by the caller
     // so bytes can be registered with the renderer.
     ext if is_raster_ext(ext) => return load_raster(path).map(|r| vec![r.scene]),
```

editor/grida-canvas-react/use-data-transfer.ts (1)
428-447: ⚠️ Potential issue | 🟠 Major — Give `text/markdown` priority over generic text pastes.

The loop returns on the first matching item. On clipboards that expose both `text/plain` and `text/markdown`, the plain-text item can win first, so the new markdown path never runs and the paste still creates a text node.

🔧 Suggested fix:

```diff
+const markdownFile = items.find(
+  (item) => item.type === "text/markdown"
+);
+if (markdownFile?.type === "text/markdown") {
+  insertFromFile(markdownFile.type, markdownFile.file, position);
+  return true;
+}
+
 // 3. Handle SVG text, images, or plain text
 for (const item of items) {
   try {
     switch (item.type) {
```

packages/grida-canvas-io/index.ts (1)
298-323: ⚠️ Potential issue | 🟠 Major — Preserve image extension fallback for MIME-less files.

`type` can still become "svg", "png", "jpg", etc. when `file.type` is empty, but the rest of the function only matches MIME strings. That means extension-only inputs are now accepted for Markdown but still rejected for every supported image format.

♻️ Suggested normalization:

```diff
-const type = file.type || ext || file.name;
+const type =
+  mime ||
+  ({
+    svg: "image/svg+xml",
+    png: "image/png",
+    jpg: "image/jpeg",
+    jpeg: "image/jpeg",
+    gif: "image/gif",
+    webp: "image/webp",
+  } as const)[ext] ||
+  file.name;
```
🟡 Minor comments (9)
.agents/skills/ai-models/SKILL.md (1)

35-35: ⚠️ Potential issue | 🟡 Minor — Documentation describes a symlink that doesn't exist.

The documentation states this is a "symlink to `.tools/model_info.py`", but the actual file `.agents/skills/ai-models/scripts/model_info.py` is a text file containing a path string, not a symbolic link. Either create an actual symlink or update this description.

.agents/skills/ai-models/SKILL.md (1)
82-91: ⚠️ Potential issue | 🟡 Minor — Add language specifier to the fenced code block.

The code block showing pricing types is missing a language specifier, which affects syntax highlighting and accessibility.

📝 Suggested fix:

````diff
-```
+```text
 per_image_tiered — quality x size tiers (e.g. OpenAI)
 { type: "per_image_tiered", tiers: { "medium/1024x1024": 0.034, ... } }
 per_image_flat — single price per image (e.g. BFL Flux)
 { type: "per_image_flat", usd: 0.06 }
 per_token — charged by token (e.g. Google Gemini)
 { type: "per_token", input: 0.5, output: 3.0 }
 ```
````

crates/grida-canvas/src/painter/painter.rs (1)

106-116: ⚠️ Potential issue | 🟡 Minor — The fast-path all-visible guard needs bounds validation.

Line 111 counts raw region indices without checking that they are valid in `layers.layers`. The slow path at line 124 uses `filter_map(|&idx| layers.layers.get(idx)...)`, which silently drops out-of-bounds indices. In compositing-disabled mode (lines 2202–2212 in scene.rs), region indices are never validated before being pushed, so the count can incorrectly exceed the actual visible-leaf count, triggering the all-visible fast path and disabling culling.

Change the guard at line 111 from `>=` to `==` and add bounds validation:

```diff
 let live_count: usize = plan.regions.iter().map(|(_, idx)| idx.len()).sum();
 let visible_count = live_count + plan.promoted.len();
-if visible_count >= layers.layers.len() {
+let all_region_indices_in_bounds = plan
+    .regions
+    .iter()
+    .all(|(_, indices)| indices.iter().all(|&idx| idx < layers.layers.len()));
+if all_region_indices_in_bounds && visible_count == layers.layers.len() {
     return Self {
         visible_leaves: None,
         viewport: plan.viewport,
     };
 }
```

docs/wg/feat-2d/optimization.md (1)

879-883: ⚠️ Potential issue | 🟡 Minor — Add a blank line before the table to satisfy markdownlint MD058.

The table starting at line 880 is not separated from the preceding paragraph by a blank line.

```diff
 Measured on 01-135k.perf.grida (135K nodes):
+
 | Scenario | Before p95 | After p95 | Before MAX | After MAX |
 |---|---|---|---|---|
 | zoom_slow_around_fit | 54,062 µs | 6 µs | 60,282 µs | 119 µs |
```

editor/grida-canvas-react-renderer-dom/nodes/markdown-embed.tsx (1)

24-32: ⚠️ Potential issue | 🟡 Minor — Padding needs `boxSizing: "border-box"`.

If the computed renderer `style` already carries width/height, the extra 12px padding grows the fallback beyond the node bounds and throws off clipping in DOM previews/tests.

```diff
 style={{
   ...style,
   whiteSpace: "pre-wrap",
   fontFamily: "ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace",
   fontSize: 13,
   padding: 12,
+  boxSizing: "border-box",
   overflow: "hidden",
 }}
```

editor/app/(www)/(ai)/ai/models/page.tsx (2)

55-60: ⚠️ Potential issue | 🟡 Minor — Preserve vendor brand casing in the section headers.

`vendorLabel("openai")` renders `Openai`, which is a public branding typo on this page. Use explicit display labels for known vendors and keep the slug formatter only as a fallback.

```diff
+const VendorLabels: Partial<Record<string, string>> = {
+  "black-forest-labs": "Black Forest Labs",
+  openai: "OpenAI",
+  anthropic: "Anthropic",
+  google: "Google",
+};
+
 function vendorLabel(vendor: string) {
-  return vendor
-    .split("-")
-    .map((w) => w.charAt(0).toUpperCase() + w.slice(1))
-    .join(" ");
+  return (
+    VendorLabels[vendor] ??
+    vendor
+      .split("-")
+      .map((w) => w.charAt(0).toUpperCase() + w.slice(1))
+      .join(" ")
+  );
 }
```

Also applies to: 303-305

62-70: ⚠️ Potential issue | 🟡 Minor — Treat an empty `sizes` array as missing data.

Both branches only test truthiness, so `sizes: []` produces a blank dimensions value in the comparison table and `0 presets` in the card instead of falling back to the max-size/flexible copy.

```diff
 function dimLabel(model: AITypes.image.ImageModelCard) {
-  if (model.sizes) {
+  if (model.sizes?.length) {
     return model.sizes.map(([w, h, r]) => `${w}x${h} (${r})`).join(", ");
   }
   if (model.max_width > 0) {
     return `Up to ${model.max_width}x${model.max_height}`;
   }
   return "Flexible";
 }
 ...
-{model.sizes
-  ? `${model.sizes.length} presets`
+{model.sizes?.length
+  ? `${model.sizes.length} presets`
   : model.max_width > 0
     ? `up to ${model.max_width}x${model.max_height}`
     : "Flexible"}
```

Also applies to: 203-210

docs/canvas/sdk/embed/index.md (1)

459-466: ⚠️ Potential issue | 🟡 Minor — Document `pong.sceneId` as optional.

The embed can start empty, and the SDK/protocol types already model `sceneId` as `string | undefined`. Advertising it as an always-present `string` here will mislead consumers.

editor/app/(embed)/embed/v1/debug/page.tsx (1)

65-77: ⚠️ Potential issue | 🟡 Minor — Reset in-flight export/path state when the iframe context changes.

These flags clear only on matching replies. If the iframe reloads or a new document is opened while a request is in flight, the old request ID survives and the corresponding control stays disabled forever because that response will never arrive.

Also applies to: 118-149

🧹 Nitpick comments (10)

.tools/model_info.py (3)

178-182: Minor: `seen` set appears redundant.

Since `groups` is a dict and `groups.items()` already yields unique keys, the `seen` set check is unnecessary. However, it's harmless defensive coding.
<details> <summary>♻️ Optional simplification</summary> ```diff def print_summary(groups: dict[str, list[dict]], *, image_mode: bool) -> None: """Print a compact summary table.""" print(f"\n{'─' * 60}") if image_mode: print(" Summary (for lib/ai/ai.ts):") else: print(" Summary (for lib/ai/models.ts):") print(f"{'─' * 60}") - seen: set[str] = set() for base_id, entries in groups.items(): - if base_id in seen: - continue - seen.add(base_id) primary = pick_primary(entries) ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In @.tools/model_info.py around lines 178 - 182, The `seen` set is redundant because `groups.items()` yields unique keys; remove the `seen: set[str] = set()` declaration and the `if base_id in seen: continue` / `seen.add(base_id)` lines, simply iterate with `for base_id, entries in groups.items():`; ensure no other code in the same scope references `seen` before deleting it. ``` </details> --- `43-51`: **Consider handling JSON parsing errors gracefully.** If the API returns invalid JSON or an error response, `json.loads` will raise an exception with a potentially confusing message. A try/except with a clearer error message would improve debuggability. <details> <summary>♻️ Suggested improvement</summary> ```diff def fetch_catalog() -> dict: """Fetch the models.dev catalog via curl (avoids Python SSL issues on macOS).""" result = subprocess.run( ["curl", "-s", API_URL], capture_output=True, text=True, check=True, ) - return json.loads(result.stdout) + try: + return json.loads(result.stdout) + except json.JSONDecodeError as e: + print(f"Failed to parse API response: {e}", file=sys.stderr) + sys.exit(1) ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In @.tools/model_info.py around lines 43 - 51, The fetch_catalog function currently calls json.loads(result.stdout) without handling JSON parse errors; wrap the parsing in a try/except around json.loads inside fetch_catalog to catch JSONDecodeError (from json) and raise or log a clearer, contextual error that includes the API_URL, HTTP stdout/stderr (or result.stdout snippet) and the underlying exception message so callers can diagnose invalid or unexpected API responses; ensure subprocess.run still raises on curl failure (check=True) and only handle parsing errors here (refer to fetch_catalog and the result variable). ``` </details> --- `200-218`: **Consider using `argparse` for cleaner argument handling.** The manual argument parsing works, but `argparse` would provide better error messages, help text generation, and handle edge cases (like `--image=value` or unknown flags) more robustly. This is optional given the script's simplicity. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In @.tools/model_info.py around lines 200 - 218, Replace the manual argv parsing in main() (variables image_only, show_all, positional) with argparse: create an ArgumentParser, add a boolean --image flag (store_true), a boolean --all flag (store_true), and a positional optional model_id argument, then parse_args() and use the parsed values instead of image_only/show_all/positional; update the usage/error handling to rely on parser.print_help()/parser.error so edge cases like --image=value or unknown flags are handled automatically and remove the manual sys.exit(1) usage. 
``` </details> </blockquote></details> <details> <summary>editor/grida-canvas/reducers/tools/initial-node.ts (1)</summary><blockquote> `190-204`: **Consider centralizing markdown default seed values.** `initialNode("markdown")` and `createMarkdownNode()` now carry separate defaults; extracting shared defaults reduces drift across creation paths. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@editor/grida-canvas/reducers/tools/initial-node.ts` around lines 190 - 204, Extract the shared default values used by initialNode("markdown") and createMarkdownNode() into a single exported constant (e.g., DEFAULT_MARKDOWN_SEED) and have both functions spread that constant instead of duplicating literals; include properties such as corner_radius, layout_target_width, layout_target_height, fill/fill_paints logic, markdown content, and any seed merges, ensure the merged result still satisfies grida.program.nodes.MarkdownNode in initial-node.ts and that createMarkdownNode() imports/uses the same constant so defaults remain consistent across creation paths. ``` </details> </blockquote></details> <details> <summary>crates/grida-canvas/src/layout/engine.rs (1)</summary><blockquote> `2063-2064`: **Please add a regression test for the auto-size markdown branch.** The updated test covers explicit `Some(width/height)` only; add at least one case with `width=None` or `height=None` to lock in measured-layout behavior. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@crates/grida-canvas/src/layout/engine.rs` around lines 2063 - 2064, Add a regression test that exercises the auto-size markdown branch by creating a markdown layout input where one of the dimensions is left unset (e.g., set md.width = None or md.height = None) instead of both being Some(...), then run the same layout/measure function used by existing tests (look for the measured-layout/markdown test harness or functions around md.width/md.height usage) and assert that the resulting computed size matches the expected measured dimensions (not default/explicit sizes); ensure the test mirrors the existing explicit-size case but replaces one dimension with None to verify measured-layout behavior is preserved. ``` </details> </blockquote></details> <details> <summary>editor/grida-canvas-hosted/ai/tools/canvas-use.ts (1)</summary><blockquote> `127-133`: **Prefer a shared default image-model constant.** `platform_sys_tool_ai_image_model_cards` already reads the supported set from `ai.image.models`, but this fallback is now a separate literal. Pulling the default from the same config avoids silent drift between the advertised cards and the runtime default. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@editor/grida-canvas-hosted/ai/tools/canvas-use.ts` around lines 127 - 133, The execute function currently uses a hardcoded default model_id ("openai/gpt-image-1-mini"); replace that literal with the shared default used by the image-model cards to avoid drift. Import or reference the same constant/source used by platform_sys_tool_ai_image_model_cards (the ai.image.models default/config value) and use that identifier as the default for the execute parameter (function execute) so both the cards and runtime share one canonical default. 
``` </details> </blockquote></details> <details> <summary>packages/grida-canvas-schema/grida.ts (1)</summary><blockquote> `2387-2395`: **Avoid exposing rectangular stroke-width fields without stroke support.** `MarkdownNode` mixes in `i.IRectangularShapeTrait`, but it does not implement `i.IStroke`. That means the public schema advertises `rectangular_stroke_width_*` knobs that this node cannot fully render. Either narrow this to corner-radius traits only, or add full stroke support if borders are intended. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@packages/grida-canvas-schema/grida.ts` around lines 2387 - 2395, The MarkdownNode interface currently includes i.IRectangularShapeTrait but does not implement stroke support, which exposes rectangular_stroke_width_* knobs incorrectly; either remove i.IRectangularShapeTrait from the MarkdownNode extends list and keep only i.ICornerRadius (so only corner-radius knobs are advertised), or add full stroke support by also implementing i.IStroke and wiring up the stroke-related properties/serialization/handling so rectangular_stroke_width_* knobs are valid; update the MarkdownNode declaration accordingly (reference: MarkdownNode, i.IRectangularShapeTrait, i.ICornerRadius, i.IStroke, rectangular_stroke_width_*) and ensure the chosen change is reflected wherever MarkdownNode instances are created or consumed. 
``` </details> </blockquote></details> <details> <summary>crates/grida-canvas/src/cache/geometry.rs (1)</summary><blockquote> `713-762`: **Consider extracting the default width constant.** The implementation correctly follows the `TextSpan` pattern for layout resolution, including: - MIN_SIZE clamping to prevent zero-sized nodes - Parent container context awareness for transform selection - Fallback measurement path when layout result is missing The hardcoded default width `400.0` on line 734 could be extracted to a named constant for clarity and maintainability. <details> <summary>🔧 Suggested refactor</summary> ```diff GeoNodeKind::MarkdownEmbed => { let layout = layout_result.and_then(|r| r.get(id)); const MIN_SIZE: f32 = 1.0; + const DEFAULT_MARKDOWN_WIDTH: f32 = 400.0; let parent_is_layout_container = parent_id .as_ref() .and_then(|pid| is_layout_container.get(pid).copied()) .unwrap_or(false); let (local_transform, width, height) = if let Some(l) = layout { // ... } else { // Fallback: measure markdown content height when layout is missing. if let Ok(Node::MarkdownEmbed(n)) = graph.get_node(id) { - let width = n.width.unwrap_or(400.0).max(MIN_SIZE); + let width = n.width.unwrap_or(DEFAULT_MARKDOWN_WIDTH).max(MIN_SIZE); ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@crates/grida-canvas/src/cache/geometry.rs` around lines 713 - 762, The hardcoded default width 400.0 used in the GeoNodeKind::MarkdownEmbed fallback should be extracted to a named constant (e.g., DEFAULT_MARKDOWN_WIDTH) alongside MIN_SIZE to improve clarity and maintainability; update the code in the fallback branch of the markdown measurement (where n.width.unwrap_or(400.0).max(MIN_SIZE) is used) to use DEFAULT_MARKDOWN_WIDTH and ensure the .max(MIN_SIZE) semantics remain unchanged so markdown width falls back to the constant and is still clamped by MIN_SIZE. 
``` </details> </blockquote></details> <details> <summary>editor/grida-canvas/embed-protocol.ts (1)</summary><blockquote> `128-136`: **Reuse the export union for `export-result.format`.** Typing this as `string` throws away exhaustiveness for every consumer and is already forcing downstream casts/fallback maps. `EmbedExportAs["format"]` keeps the response aligned with the request side. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@editor/grida-canvas/embed-protocol.ts` around lines 128 - 136, The export-result message interface currently types the format field as string which loses exhaustiveness; change its type to EmbedExportAs["format"] so consumers can rely on the same union as the request side. Locate the message shape for type "grida:export-result" (fields requestId, data, format) and replace the format: string declaration with format: EmbedExportAs["format"], ensuring EmbedExportAs is imported/available in this module. ``` </details> </blockquote></details> <details> <summary>editor/lib/ai/ai.ts (1)</summary><blockquote> `758-766`: **Tighten the legacy bare-name lookup.** The comment says this is an “ends with” fallback, but the implementation accepts any unique substring via `includes()`. That makes the resolver less stable than intended and can silently pick the wrong card as more model IDs are added. Matching the segment after the `/` would be safer here. As per coding guidelines, `editor/lib/**/*.{ts,tsx}` modules should be non-opinionated, reusable, and stable to be worthy of promotion to `<root>/packages` directory. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@editor/lib/ai/ai.ts` around lines 758 - 766, The bare-name lookup currently uses includes() and can match any substring; change it to match only the final segment after a '/' to avoid accidental unique-substring matches: when modelId does not include '/', compute the suffix to match (e.g., expectedSuffix = '/' + modelId or compare c.id.split('/').pop()) and filter Object.values(ai.image.models) by c!.id.endsWith(expectedSuffix) or c!.id.split('/').pop() === modelId, then keep the existing single-result assignment to card. ``` </details> </blockquote></details> </blockquote></details> --- <details> <summary>ℹ️ Review info</summary> <details> <summary>⚙️ Run configuration</summary> **Configuration used**: Organization UI **Review profile**: CHILL **Plan**: Pro **Run ID**: `9ad28c76-353c-41de-96a9-510403233ce2` </details> <details> <summary>📥 Commits</summary> Reviewing files that changed from the base of the PR and between 319c1757e2461af4e78acc196d1296055095fc16 and e96db2251c85ec1ee3212006bf0eefbb9cb38ad7. 
</details> <details> <summary>⛔ Files ignored due to path filters (2)</summary> * `crates/grida-canvas-wasm/lib/bin/grida_canvas_wasm.wasm` is excluded by `!**/*.wasm` * `crates/grida-canvas/src/io/generated/grida.rs` is excluded by `!**/generated/**` </details> <details> <summary>📒 Files selected for processing (60)</summary> * `.agents/skills/ai-models/SKILL.md` * `.agents/skills/ai-models/scripts/model_info.py` * `.tools/model_info.py` * `crates/grida-canvas-wasm/lib/bin/grida-canvas-wasm.js` * `crates/grida-canvas-wasm/package.json` * `crates/grida-canvas/Cargo.toml` * `crates/grida-canvas/benches/bench_export.rs` * `crates/grida-canvas/examples/tool_io_grida.rs` * `crates/grida-canvas/src/cache/geometry.rs` * `crates/grida-canvas/src/export/mod.rs` * `crates/grida-canvas/src/io/id_converter.rs` * `crates/grida-canvas/src/io/io_grida.rs` * `crates/grida-canvas/src/io/io_grida_fbs.rs` * `crates/grida-canvas/src/layout/engine.rs` * `crates/grida-canvas/src/layout/into_taffy.rs` * `crates/grida-canvas/src/layout/tree.rs` * `crates/grida-canvas/src/node/factory.rs` * `crates/grida-canvas/src/node/scene_graph.rs` * `crates/grida-canvas/src/node/schema.rs` * `crates/grida-canvas/src/painter/geometry.rs` * `crates/grida-canvas/src/painter/layer.rs` * `crates/grida-canvas/src/painter/painter.rs` * `crates/grida-canvas/src/runtime/frame_strategy.rs` * `crates/grida-canvas/src/runtime/mod.rs` * `crates/grida-canvas/src/runtime/scene.rs` * `crates/grida-canvas/src/window/application.rs` * `crates/grida-dev/src/bench/export_bench.rs` * `crates/grida-dev/src/bench/mod.rs` * `crates/grida-dev/src/editor/mutation.rs` * `crates/grida-dev/src/main.rs` * `docs/canvas/sdk/embed/index.md` * `docs/models/index.md` * `docs/wg/feat-2d/optimization.md` * `editor/app/(api)/private/ai/credits/route.ts` * `editor/app/(api)/private/ai/generate/image/route.ts` * `editor/app/(api)/private/ai/image/actions.ts` * `editor/app/(api)/private/ai/ratelimit.ts` * 
`editor/app/(embed)/embed/v1/debug/page.tsx` * `editor/app/(tools)/(playground)/playground/image/_page.tsx` * `editor/app/(www)/(ai)/ai/models/page.tsx` * `editor/grida-canvas-hosted/ai/tools/canvas-use.ts` * `editor/grida-canvas-react-renderer-dom/nodes/index.ts` * `editor/grida-canvas-react-renderer-dom/nodes/markdown-embed.tsx` * `editor/grida-canvas-react-renderer-dom/nodes/node.tsx` * `editor/grida-canvas-react-starter-kit/starterkit-icons/node-type-icon.tsx` * `editor/grida-canvas-react/use-data-transfer.ts` * `editor/grida-canvas/editor.i.ts` * `editor/grida-canvas/editor.ts` * `editor/grida-canvas/embed-bridge.ts` * `editor/grida-canvas/embed-protocol.ts` * `editor/grida-canvas/reducers/node-transform.reducer.ts` * `editor/grida-canvas/reducers/tools/initial-node.ts` * `editor/lib/ai/ai.ts` * `editor/lib/ai/hooks/use-credits.ts` * `editor/lib/ai/models.ts` * `editor/scaffolds/embed/grida-embed.ts` * `format/grida.fbs` * `packages/grida-canvas-io/format.ts` * `packages/grida-canvas-io/index.ts` * `packages/grida-canvas-schema/grida.ts` </details> </details> <!-- This is an auto-generated comment by CodeRabbit for review status -->
```rust
fn decode_markdown_embed_node(
    lc: &LayerCommon,
    layer: &fbs::LayerTrait<'_>,
    mn: &fbs::MarkdownEmbedNode<'_>,
) -> Node {
    let sl = decode_shape_layout(layer, lc.rotation_cos_sin);
    let props = mn.properties();

    let markdown = props
        .as_ref()
        .and_then(|p| p.markdown())
        .unwrap_or("")
        .to_owned();

    let fills = props
        .as_ref()
        .map(|p| decode_paints_vec(p.fill_paints()))
        .unwrap_or_else(|| Paints::new(Vec::<Paint>::new()));

    let corner_radius = decode_corner_radius(props.as_ref().and_then(|p| p.corner_radius()));

    Node::MarkdownEmbed(MarkdownEmbedNodeRec {
        active: lc.active,
        opacity: lc.opacity,
        blend_mode: lc.blend_mode,
        effects: lc.effects.clone(),
        mask: lc.mask,
        transform: sl.transform,
        width: sl.width,
        height: sl.height,
        corner_radius,
        markdown,
        fills,
        layout_child: lc.layout_child.clone(),
    })
}
```
Decode and encode disagree on markdown-embed transform semantics.
The decoder stores the box-centered decode_shape_layout() transform, but the encoder later serializes r.transform.x()/y() as if the transform were a top-left translation. Rotated markdown embeds will drift on decode→encode round-trips until both sides follow the same convention. If markdown embeds are meant to stay shape-like, this file already has reverse_from_box_center() you can reuse on the encode side; otherwise the decode side should mirror the text-node path instead.
Also applies to: 4499-4557
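The drift is easy to demonstrate with a toy conversion: a box-center translation and a top-left translation only agree when the rotation is zero, so serializing one convention through the other corrupts every rotated node on round-trip. A minimal Rust sketch (a hypothetical helper for illustration, not the crate's actual `reverse_from_box_center()`):

```rust
// Hypothetical helper: derive the top-left translation of a w×h box whose
// transform stores a box-center translation (cx, cy) and rotation theta.
fn top_left_from_box_center(cx: f32, cy: f32, w: f32, h: f32, theta: f32) -> (f32, f32) {
    // Rotate the (-w/2, -h/2) corner offset about the center, then add it back.
    let (hw, hh) = (w / 2.0, h / 2.0);
    let (s, c) = theta.sin_cos();
    (cx + (-hw * c + hh * s), cy + (-hw * s - hh * c))
}

fn main() {
    // Unrotated: top-left is simply center minus the half-extents,
    // so the two conventions happen to agree.
    let (x, y) = top_left_from_box_center(100.0, 50.0, 40.0, 20.0, 0.0);
    assert_eq!((x, y), (80.0, 40.0));
    // Rotated 90°: the conventions diverge — this is the round-trip drift.
    let (rx, ry) =
        top_left_from_box_center(100.0, 50.0, 40.0, 20.0, std::f32::consts::FRAC_PI_2);
    assert!((rx - x).abs() > 1.0 || (ry - y).abs() > 1.0);
}
```

Either side can adopt the other's convention; the point of the finding is that encode and decode must pick the same one.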
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/io/io_grida_fbs.rs` around lines 2054 - 2089, The
decode/encode mismatch for MarkdownEmbed transform comes from
decode_markdown_embed_node using decode_shape_layout (which stores a
box-centered transform) while the encoder treats r.transform.x()/y() as a
top-left translation; fix by making the encoder mirror the decoder's
box-centered convention: when serializing MarkdownEmbedNode transforms, apply
reverse_from_box_center() (already defined in this module) to the Node's
transform before writing r.transform.x()/y(), or alternatively modify
decode_markdown_embed_node to follow the same top-left semantics as text nodes
(i.e., use the same path as the text-node decode); update the transform handling
consistently for MarkdownEmbedNode encode/decode pairs (references:
decode_markdown_embed_node, decode_shape_layout, reverse_from_box_center, and
the encoder logic that reads r.transform.x()/y()).
```rust
Node::MarkdownEmbed(n) => {
    // Only use measurement context when auto-sizing is needed
    if n.width.is_none() || n.height.is_none() {
        let styled_html = crate::htmlcss::markdown_to_styled_html(&n.markdown);
        let ctx = MarkdownMeasureContext {
            styled_html,
            width: n.width,
            height: n.height,
        };
        self.tree.new_markdown_leaf(*node_id, style, ctx).ok()
    } else {
        self.tree.new_leaf(*node_id, style).ok()
    }
}
```
Auto-sized markdown can still collapse under non-flex parents.
This measurement path only runs when the markdown node is in the Taffy tree. In this file, descendants of non-flex containers can bypass Taffy subtree construction, and fallback extraction does not include markdown intrinsic measurement, which can leave None axes effectively as zero-size.
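The failure mode is easy to reproduce in isolation: when an axis is `None` and no intrinsic measure function runs, that axis silently resolves to zero. An illustrative Rust sketch (a hypothetical resolver, not the engine's real fallback path):

```rust
// Hypothetical resolver: `measure_intrinsic` maps a width to a content height.
// When it is absent (the bypassed-Taffy case), a `None` height collapses to 0.
fn resolve_size(
    width: Option<f32>,
    height: Option<f32>,
    measure_intrinsic: Option<&dyn Fn(f32) -> f32>,
) -> (f32, f32) {
    let w = width.unwrap_or(400.0);
    let h = height
        .or_else(|| measure_intrinsic.map(|m| m(w)))
        .unwrap_or(0.0); // no measurement context => zero-size axis
    (w, h)
}

fn main() {
    let measure = |w: f32| (1200.0 / w) * 18.0; // toy content-height model
    assert_eq!(resolve_size(None, None, Some(&measure)), (400.0, 54.0));
    assert_eq!(resolve_size(None, None, None), (400.0, 0.0)); // collapsed
}
```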
⚙️ Suggested direction
- if !lc.is_flex {
- return self.tree.new_leaf(*node_id, style).ok();
- }
+ if !lc.is_flex {
+ // Keep non-flex container as leaf for parent layout,
+ // but still build intrinsic-measure children as independent roots.
+ for child_id in children {
+ if let Ok(child) = graph.get_node(child_id) {
+ let needs_intrinsic_measure = match child {
+ Node::TextSpan(t) => t.width.is_none() || t.height.is_none(),
+ Node::AttributedText(t) => t.width.is_none() || t.height.is_none(),
+ Node::MarkdownEmbed(m) => m.width.is_none() || m.height.is_none(),
+ _ => false,
+ };
+ if needs_intrinsic_measure {
+ if let Some(taffy_id) = self.build_taffy_subtree(
+ child_id,
+ graph,
+ viewport_size,
+ extra_roots,
+ ) {
+ extra_roots.push(taffy_id);
+ }
+ }
+ }
+ }
+ return self.tree.new_leaf(*node_id, style).ok();
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/layout/engine.rs` around lines 428 - 441, The
MarkdownEmbed branch only creates a MarkdownMeasureContext when the node is
placed into the Taffy tree (Node::MarkdownEmbed -> self.tree.new_markdown_leaf),
but descendants that bypass Taffy subtree construction can fall back without
intrinsic markdown measurement and collapse None axes to zero; update the
fallback path so when a MarkdownEmbed has width.is_none() or height.is_none()
you still produce/attach intrinsic markdown measurement data: call
crate::htmlcss::markdown_to_styled_html(&n.markdown) and construct a
MarkdownMeasureContext (same shape as used for self.tree.new_markdown_leaf) and
use it in the non-Taffy/fallback leaf creation or measurement routine so
intrinsic markdown size is used even when new_markdown_leaf isn't invoked (i.e.,
ensure the logic that currently calls self.tree.new_leaf uses the same markdown
measurement context for intrinsic sizing).
```rust
} else if let Some(LayoutNodeContext::Markdown(ctx)) = node_context {
    let width = known_dimensions.width.or(ctx.width).unwrap_or(400.0); // fallback width for measurement

    let measured_height = crate::htmlcss::measure_content_height(
        &ctx.styled_html,
        width,
        provider.fonts,
    )
    .unwrap_or(0.0);

    let height = known_dimensions
        .height
        .or(ctx.height)
        .unwrap_or(measured_height);
```
🧩 Analysis chain
🏁 Scripts executed (repository: gridaco/grida; outputs collapsed):

```shell
cat -n crates/grida-canvas/src/layout/tree.rs | sed -n '250,310p'
cat -n crates/grida-canvas/src/layout/tree.rs | sed -n '200,250p'
cat -n crates/grida-canvas/src/layout/tree.rs | sed -n '1,50p'

# Check the measure_content_height function signature
rg -A 5 "fn measure_content_height" crates/grida-canvas/src/

# Check if available_space is used elsewhere in measure callbacks
rg "available_space" crates/grida-canvas/src/layout/ -B 2 -A 2

# Search for Taffy measure callback patterns in the codebase
rg "compute_layout_with_measure" crates/grida-canvas/src/ -B 3 -A 10

# Check the text_measure_func implementation to see how available_space is used
rg -B 5 -A 15 "fn text_measure_func" crates/grida-canvas/src/htmlcss/

# Check if there are any width constraint considerations in available_space handling
rg "AvailableSpace" crates/grida-canvas/src/htmlcss/layout.rs -B 2 -A 2

# Get the complete text_measure_func implementation
rg -A 40 "fn text_measure_func" crates/grida-canvas/src/htmlcss/layout.rs

# Check if Text/AttributedText measurement in tree.rs should also use available_space
rg -B 5 -A 20 "LayoutNodeContext::Text\(ctx\)" crates/grida-canvas/src/layout/tree.rs | head -40

# Get the complete text_measure_func to see how it uses available_space
rg -A 50 "let layout_width = match available_space" crates/grida-canvas/src/htmlcss/layout.rs

# Check what ctx.width represents for Text/AttributedText/Markdown contexts
rg -B 5 "struct TextMeasureContext\|struct AttributedTextMeasureContext\|struct MarkdownMeasureContext" crates/grida-canvas/src/layout/tree.rs -A 10

# Check how these contexts are populated/created
rg "TextMeasureContext\|AttributedTextMeasureContext\|MarkdownMeasureContext" crates/grida-canvas/src/ --type rust -B 2 -A 2 | grep -E "new\(|width:|height:" | head -30

# Get the full definitions of the measure context structs from the file
sed -n '10,60p' crates/grida-canvas/src/layout/tree.rs

# Search for where MarkdownMeasureContext is created
rg "MarkdownMeasureContext \{" crates/grida-canvas/src/ -B 3 -A 5

# Check if there's a distinction between node's own width vs parent's available width
rg "ctx.width\|available_space" crates/grida-canvas/src/layout/tree.rs -B 2 -A 2

# Check if Text/AttributedText in tree.rs should also incorporate available_space like htmlcss does
rg -B 10 -A 30 "let width_constraint = known_dimensions.width.or\(ctx.width\)" crates/grida-canvas/src/layout/tree.rs | head -60

# Understand the semantics: what does ctx.width represent vs available_space
grep -n "width: n.width" crates/grida-canvas/src/layout/engine.rs -B 5 -A 2
```
Consult available_space.width in the measure callback instead of falling back to hardcoded 400px.
The measure callback receives _available_space but ignores it. When the markdown node has no explicit width and sits inside a constrained parent, the callback should use the parent's available width (via available_space) before defaulting to 400px. This matches the pattern used in htmlcss/layout.rs (line 95: match available_space.width { AvailableSpace::Definite(w) => ... }), which correctly handles parent constraints. Without this, markdown height measurement will be incorrect in flex and container layouts.
Note: This issue also affects Text and AttributedText measurements, which ignore available_space.width and rely solely on known_dimensions.width.or(ctx.width).
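The recommended priority order can be sketched as a small resolver (hypothetical types for illustration, not the real Taffy signatures): known dimensions first, then the node's own width, then a definite parent constraint, and only then the hardcoded fallback.

```rust
// Hypothetical mirror of Taffy's AvailableSpace, for illustration only.
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum AvailableSpace {
    Definite(f32),
    MinContent,
    MaxContent,
}

fn resolve_measure_width(
    known: Option<f32>,        // known_dimensions.width
    explicit: Option<f32>,     // ctx.width (node's own width, if any)
    available: AvailableSpace, // parent-provided constraint
) -> f32 {
    known
        .or(explicit)
        .or(match available {
            AvailableSpace::Definite(w) => Some(w),
            _ => None,
        })
        .unwrap_or(400.0) // last-resort fallback only
}

fn main() {
    // A constrained parent now wins over the 400px default…
    assert_eq!(resolve_measure_width(None, None, AvailableSpace::Definite(320.0)), 320.0);
    // …an explicit node width still takes precedence…
    assert_eq!(resolve_measure_width(None, Some(200.0), AvailableSpace::Definite(320.0)), 200.0);
    // …and the default applies only when nothing is definite.
    assert_eq!(resolve_measure_width(None, None, AvailableSpace::MaxContent), 400.0);
}
```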
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/layout/tree.rs` around lines 273 - 286, The markdown
measurement currently ignores the measure callback's available_space and
hardcodes a 400px fallback; update the measurement in the
LayoutNodeContext::Markdown branch to consult available_space.width first
(matching the pattern in htmlcss/layout.rs) before falling back to ctx.width or
a final default, i.e. derive width from
known_dimensions.width.or(ctx.width).or(available_space.width when Definite) and
then pass that width to crate::htmlcss::measure_content_height (also apply the
same available_space.width-aware logic for Text and AttributedText measurement
paths that use known_dimensions and ctx).
```rust
/// Explicit width in pixels, or `None` for auto (content-driven).
pub width: Option<f32>,
/// Explicit height in pixels, or `None` for auto (content-driven).
pub height: Option<f32>,
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== FlatBuffers schema definitions (Markdown / width / height / size) =="
fd -i '^grida\.fbs$' | while read -r f; do
echo "--- $f ---"
rg -n "Markdown|markdown|width|height|size" "$f" -C2 || true
done
echo
echo "== Rust round-trip tests =="
fd -i '^fbs_roundtrip\.rs$' | while read -r f; do
echo "--- $f ---"
rg -n "Markdown|markdown|MarkdownEmbed|width|height|size" "$f" -C3 || true
done
echo
echo "== TypeScript round-trip tests =="
fd -i '^format-roundtrip\.test\.ts$' | while read -r f; do
echo "--- $f ---"
rg -n "Markdown|markdown|MarkdownEmbed|width|height|size" "$f" -C3 || true
done

Repository: gridaco/grida
Length of output: 50369
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check if MarkdownEmbed tests exist in Rust ==="
fd -i '^fbs_roundtrip\.rs$' -x wc -l {} \;
fd -i '^fbs_roundtrip\.rs$' -x rg -c 'markdown_embed|MarkdownEmbed' {} || echo "No Markdown matches found"
echo
echo "=== Check if MarkdownEmbed tests exist in TypeScript ==="
fd -i '^format-roundtrip\.test\.ts$' -x wc -l {} \;
fd -i '^format-roundtrip\.test\.ts$' -x rg -c 'markdown_embed|MarkdownEmbed' {} || echo "No Markdown matches found"
echo
echo "=== Verify MarkdownEmbedNode in schema.rs ==="
fd -i '^schema\.rs$' crates/grida-canvas/src/node/ | head -1 | xargs -I {} sh -c 'echo "File: {}"; grep -n "MarkdownEmbed\|struct.*Markdown" {} || echo "No struct definition found"'
echo
echo "=== Check if MarkdownEmbedNode width/height is in FlatBuffers schema ==="
fd -i '^grida\.fbs$' | head -1 | xargs -I {} sh -c 'echo "File: {}"; sed -n "/^table MarkdownEmbedNode/,/^}/p" {} || echo "Table not found"'
echo
echo "=== Search for any Markdown node creation in tests ==="
fd -i '\.(rs|ts)$' crates/grida-canvas/tests packages/grida-canvas-io/__tests__ | xargs rg -l 'MarkdownEmbed|markdown.*embed' 2>/dev/null || echo "No Markdown embed references in test files"
Length of output: 1818
Add FlatBuffers schema fields and round-trip tests for MarkdownEmbed width/height.
The runtime model now defines MarkdownEmbedNodeRec with optional width and height fields, but the FlatBuffers schema in format/grida.fbs does not encode these fields, breaking serialization. Additionally, no round-trip tests exist for MarkdownEmbed nodes in either Rust or TypeScript.
Required fixes:

- Update the `MarkdownEmbedNode` table in `format/grida.fbs` to include width and height fields (matching the runtime struct)
- Add round-trip test cases in `crates/grida-canvas/tests/fbs_roundtrip.rs` covering explicit dimensions and auto (`None`) dimensions
- Add corresponding round-trip test cases in `packages/grida-canvas-io/__tests__/format-roundtrip.test.ts`
- Validate schema compilation with `flatc` (see `format/README.md`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/node/schema.rs` around lines 2889 - 2892, Add width
and height fields to the FlatBuffers MarkdownEmbedNode table in format/grida.fbs
to match the runtime struct MarkdownEmbedNodeRec (both should be
optional/nullable to represent None/auto), then add round-trip tests: in Rust
add cases to crates/grida-canvas/tests/fbs_roundtrip.rs that construct
MarkdownEmbedNodeRec with explicit width/height and with None values and assert
successful serialize->deserialize equality, and in TypeScript add corresponding
tests in packages/grida-canvas-io/__tests__/format-roundtrip.test.ts that create
the same two scenarios and assert round-trip parity; finally run flatc to
recompile the schema to validate format compilation per format/README.md.
```rust
pub fn export_node_image(
    &self,
    node_id: &NodeId,
    export_rect: math2::Rectangle,
    export_size: (f32, f32),
) -> Option<Image> {
    let scene = self.scene.as_ref()?;
    let (out_w, out_h) = export_size;
    let pixel_w = out_w as i32;
    let pixel_h = out_h as i32;
    if pixel_w <= 0 || pixel_h <= 0 {
        return None;
    }

    // Create a temporary raster surface at the target resolution.
    let mut surface = surfaces::raster_n32_premul((pixel_w, pixel_h))?;
    let canvas = surface.canvas();

    // Set up camera: same logic as export_as_image.rs
    let mut camera = Camera2D::new_from_bounds(export_rect);
    let scale = if export_rect.width > 0.0 {
        out_w / export_rect.width
    } else {
        1.0
    };
    camera.set_size(Size {
        width: out_w,
        height: out_h,
    });
    camera.set_zoom(scale);

    // Clear and apply camera transform
    canvas.clear(Color::TRANSPARENT);
    canvas.save();
    canvas.concat(&sk::sk_matrix(camera.view_matrix().matrix));

    // Build a LayerList for only the target subtree (O(subtree), not O(N)).
    //
    // opacity = 1.0: ancestor opacity is intentionally NOT propagated.
    // The node is exported in isolation — only its own opacity applies.
    // This matches standard design tool export behavior (Figma, Sketch):
    // a rectangle at opacity 1.0 inside a group at opacity 0.5 exports
    // at full opacity, not 50%.
    let cache = &self.scene_cache;
    let subtree_layers =
        crate::painter::layer::LayerList::from_node(node_id, &scene.graph, cache, 1.0);

    // Draw using the existing warm scene cache (pictures, paragraphs, paths).
    let painter = Painter::new_with_scene_cache(
        canvas,
        &self.fonts,
        &self.images,
        cache,
        RenderPolicy::default(),
    );
    painter.draw_layer_list(&subtree_layers);

    canvas.restore();

    Some(surface.image_snapshot())
}
```
🧩 Analysis chain
🏁 Script executed:
# First, understand the file structure and locate the method in question
cd crates/grida-canvas && head -20 src/runtime/scene.rs
wc -l src/runtime/scene.rs
Length of output: 818
🏁 Script executed:
# Search for cache synchronization methods (apply_changes, mark_changed, etc.)
rg "apply_changes|mark_changed|mark_node_changed" crates/grida-canvas/src/runtime/scene.rs | head -30
Length of output: 1097
🏁 Script executed:
# Look at the export_node_image method and its surrounding context
sed -n '2500,2650p' crates/grida-canvas/src/runtime/scene.rs
Length of output: 5753
🏁 Script executed:
# Search for scene_cache references to understand update patterns
rg "scene_cache" crates/grida-canvas/src/runtime/scene.rs -B 2 -A 2 | head -60
Length of output: 2283
🏁 Script executed:
# Look for calls to export_node_image to understand usage pattern
rg "export_node_image" crates/grida-canvas/src --type rs -B 3 -A 3
Length of output: 83
🏁 Script executed:
# Check if there's any refresh or sync logic in the SceneRenderer or related code
rg "fn apply_changes|fn refresh|fn sync" crates/grida-canvas/src/runtime/ --type rs | head -20
Length of output: 83
🏁 Script executed:
# Understand the Scene and SceneCache relationship
rg "scene_cache.*=" crates/grida-canvas/src/runtime/scene.rs -B 1 -A 1
Length of output: 321
🏁 Script executed:
# Look at apply_changes implementation to understand what it actually does
sed -n '1800,1900p' crates/grida-canvas/src/runtime/scene.rs | grep -A 50 "pub fn apply_changes"
Length of output: 1808
🏁 Script executed:
# Search for when scene_cache is updated/invalidated
rg "scene_cache\s*=" crates/grida-canvas/src/runtime/scene.rs -A 5 -B 2
Length of output: 882
🏁 Script executed:
# Look for the full apply_changes implementation
rg "fn apply_changes" crates/grida-canvas/src/runtime/scene.rs -A 100 | head -150
Length of output: 4853
🏁 Script executed:
# Check snapshot and render_to_canvas to see if they sync before export
sed -n '2540,2580p' crates/grida-canvas/src/runtime/scene.rs
Length of output: 1849
🏁 Script executed:
# Look for callers of export_node_image to understand who uses it
rg "export_node_image" crates/grida-canvas --type-add 'rust:*.rs' -B 5 -A 5 | head -80
Length of output: 1827
🏁 Script executed:
# Check the cache module to understand SceneCache structure
find crates/grida-canvas -name "*cache*" -type f
Length of output: 482
🏁 Script executed:
# Look at cache/scene.rs to understand how cache is updated
cat crates/grida-canvas/src/cache/scene.rs | head -100
Length of output: 3424
🏁 Script executed:
# Look at PictureCache to see if it auto-rebuilds when accessed
rg "impl.*PictureCache|fn get.*picture|fn invalidate" crates/grida-canvas/src/cache/picture.rs -A 5 | head -80
Length of output: 1288
🏁 Script executed:
# Check GeometryCache invalidation and access patterns
rg "fn get|invalidate|access" crates/grida-canvas/src/cache/geometry.rs | head -40
Length of output: 770
🏁 Script executed:
# Look at LayerList::from_node to see if it validates cache state before reading
rg "impl LayerList|from_node" crates/grida-canvas/src/painter/layer.rs -A 30 | head -100
Length of output: 2410
🏁 Script executed:
# Find the main frame/render loop to understand sync timing
rg "fn render|fn frame|apply_changes" crates/grida-canvas/src/runtime/scene.rs | grep -E "(fn (render|frame|apply_changes)|apply_changes)" | head -20
Length of output: 975
🏁 Script executed:
# Check the full context of export_node_image call in application.rs
sed -n '1,50p' crates/grida-canvas/src/window/application.rs && rg "export_node_image" crates/grida-canvas/src/window/application.rs -B 30 | tail -50
Length of output: 3159
🏁 Script executed:
# See if there's a sync/prepare step before export
rg "export|snapshot|render_to_canvas" crates/grida-canvas/src/window/application.rs -B 10 -A 10 | head -100
Length of output: 4091
🏁 Script executed:
# Look for rebuild_scene_caches to understand when it's called
rg "rebuild_scene_caches" crates/grida-canvas/src/runtime/scene.rs -B 3 -A 3
Length of output: 1840
🏁 Script executed:
# Look at load_scene to understand cache atomicity claims
rg "fn load_scene" crates/grida-canvas/src/runtime/scene.rs -A 50 | head -80
Length of output: 2387
🏁 Script executed:
# Check if apply_changes is called in the frame loop before rendering
rg "frame_loop|render_frame|apply_changes" crates/grida-canvas/src/runtime/frame_loop.rs | head -30
Length of output: 39
🏁 Script executed:
# Look at the full export call path - from user command to export_node_image
sed -n '1,200p' crates/grida-canvas/src/window/application.rs | grep -A 20 "export_node_as_image"
Length of output: 39
🏁 Script executed:
# Check if there's any comment or documentation about cache consistency guarantees
rg "warm.*cache|always.*consistent|atomic" crates/grida-canvas/src -B 2 -A 2 | head -60
Length of output: 2973
🏁 Script executed:
# Look at the complete context of the export_node_as_image docstring
sed -n '800,1000p' crates/grida-canvas/src/window/application.rs
Length of output: 7055
🏁 Script executed:
# Check frame_loop to see how apply_changes integrates
cat crates/grida-canvas/src/runtime/frame_loop.rs | head -150
Length of output: 5922
🏁 Script executed:
# Check if there's any sync/apply_changes call in the event dispatch or export path
sed -n '100,600p' crates/grida-canvas/src/window/application.rs | grep -B 20 -A 10 "TryCopyAsPNG\|export_node_as"
Length of output: 3160
🏁 Script executed:
# Look for the main tick/update function to see if apply_changes is called before command dispatch
rg "fn tick|fn update|fn frame\|ApplicationCommand" crates/grida-canvas/src/window/application.rs | head -40
Length of output: 195
🏁 Script executed:
# Check if there's explicit sync before export calls
rg "apply_changes.*export|export.*apply_changes" crates/grida-canvas/src --type-add 'rust:*.rs'
Length of output: 39
🏁 Script executed:
# Look at how frame rendering is called relative to export
rg "render_frame|export_node|surface_dispatch" crates/grida-canvas/src/window/application.rs -B 5 -A 5 | head -80
Length of output: 3525
🏁 Script executed:
# Get the full picture cache implementation to understand if it has lazy rebuild
cat crates/grida-canvas/src/cache/picture.rs
Length of output: 4394
🏁 Script executed:
# Check the full tick implementation to see the exact order of operations
sed -n '1,2000p' crates/grida-canvas/src/window/application.rs | grep -B 30 -A 50 "fn tick"
Length of output: 9463
🏁 Script executed:
# Look for where apply_changes is actually called in the frame loop
rg "fn tick\|apply_changes" crates/grida-canvas/src/window/application.rs -B 5 -A 15 | head -120
Length of output: 39
🏁 Script executed:
# Find all places where apply_changes is called
rg "apply_changes" crates/grida-canvas/src --type-add 'rust:*.rs' -B 3 -A 3
Length of output: 8886
🏁 Script executed:
# Get the complete frame() method to understand the exact execution order
rg "fn frame" crates/grida-canvas/src/window/application.rs -A 150 | head -200
Length of output: 6408
🏁 Script executed:
# Check if there's any synchronization in export_node_as before calling export_node_image
sed -n '600,800p' crates/grida-canvas/src/window/application.rs
Length of output: 7271
🏁 Script executed:
# Verify: does apply_changes happen in the frame() call before drawing?
sed -n '600,1000p' crates/grida-canvas/src/window/application.rs | grep -B 20 -A 30 "apply_changes"
Length of output: 39
🏁 Script executed:
# Look for when commands are dispatched relative to frame rendering
rg "command\(" crates/grida-canvas/src/window/application.rs -B 10 -A 5 | head -80
Length of output: 2923
🏁 Script executed:
# Verify the actual sequence: when export is called vs when apply_changes is called
# Check if there's any explicit sync before export in export_node_as
sed -n '900,1100p' crates/grida-canvas/src/window/application.rs
Length of output: 6991
🏁 Script executed:
# Look for any explicit apply_changes call or sync before export
rg "export_node_as\|TryCopyAsPNG" crates/grida-canvas/src/window/application.rs -B 20 | tail -40
Length of output: 39
🏁 Script executed:
# Check if export_node_image or export_node_as has any cache refresh logic
sed -n '2583,2700p' crates/grida-canvas/src/runtime/scene.rs
Length of output: 4671
🏁 Script executed:
# Verify: is there any chance apply_changes() has already run before export?
# Check if commands are processed in the frame() call or separately
rg "command\(|ApplicationCommand" crates/grida-canvas/src --type-add 'rust:*.rs' | grep -v "test\|example" | head -30
Length of output: 3069
🏁 Script executed:
# Look for any explicit sync or apply_changes in the export path one more time
rg "export_node_as\|export_node_image" crates/grida-canvas/src/window/application.rs -B 50 | grep -E "(apply_changes|mark_changed|sync|refresh)" | head -10
Length of output: 39
Cache may be stale when export is called between mutations and the next frame.
The export_node_image() method reads from scene_cache without synchronization. Mutations call mark_changed() to queue cache invalidation, but caches only rebuild when apply_changes() runs at the start of the next frame. If a command like TryCopyAsPNG is dispatched after a scene mutation and before frame() is called, the export uses stale picture/paragraph/path data.
The docstring's claim that the cache is "always consistent with the scene" is only true after apply_changes() has run. Between mutations and the next frame, caches can lag behind the scene model. Either:
- Require a synchronized renderer state (document the implicit contract that exports must follow `apply_changes()` on the frame loop), or
- Add explicit refresh logic to `export_node_image()` or its caller to ensure cache freshness before reading.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/runtime/scene.rs` around lines 2583 - 2643,
export_node_image reads from scene_cache without ensuring it's been rebuilt
after mutations, so exports can use stale pictures/paragraphs/paths if called
between mark_changed() and the next apply_changes() (e.g. from TryCopyAsPNG).
Fix by ensuring cache freshness before reading: either document that callers
must call apply_changes()/frame() before invoking export_node_image, or add
refresh logic inside export_node_image to detect/trigger a rebuild (call the
same cache-rebuild path used by apply_changes() or run a lightweight sync step)
when scene_cache is stale; reference export_node_image, scene_cache,
mark_changed, apply_changes, TryCopyAsPNG and frame to locate and implement the
change.
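The second option (refreshing inside the export path) reduces to a dirty-flag check before any cache read. A minimal sketch of that contract follows; the names `markChanged`, `applyChanges`, and `exportNodeImage` are illustrative, not the actual Rust API:

```typescript
// Minimal sketch of the dirty-flag contract the review recommends.
class SceneCacheSketch {
  private dirty = false;
  private rebuilds = 0;

  markChanged(): void {
    this.dirty = true; // mutations only queue invalidation
  }

  applyChanges(): void {
    if (!this.dirty) return;
    this.rebuilds += 1; // rebuild pictures/paragraphs/paths here
    this.dirty = false;
  }

  // Refresh-before-read: export never observes a stale cache, and a clean
  // cache costs only a flag check.
  exportNodeImage(nodeId: string): { nodeId: string; rebuilds: number } {
    this.applyChanges();
    return { nodeId, rebuilds: this.rebuilds };
  }
}
```

The same check makes the documented "always consistent" claim true unconditionally, rather than only after the frame loop has run.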
```typescript
/**
 * Average cost per invocation in USD, used by the rate limiter.
 *
 * For flat per-image models this equals the exact price; for
 * tiered models it is the mid-tier (medium quality, default size);
 * for per-token models it is a rough estimate.
 *
 * @internal Not displayed to users.
 */
avg_cost_usd: number;
```
A fixed avg_cost_usd breaks cost-based limiting for token-priced models.
These Gemini cards declare real per_token pricing, but the field marked “used by the rate limiter” is only a rough per-call average. Long prompts or larger outputs can exceed the intended budget, while small requests get overcharged. For token-priced models, the limiter needs actual usage-based deduction whenever the provider returns it.
Also applies to: 600-627
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@editor/lib/ai/ai.ts` around lines 462 - 471, The avg_cost_usd field is a
fixed per-call estimate which breaks cost-based limiting for per-token models;
update the rate-limiting/billing path to prefer actual usage-based costs when
the provider returns token usage (e.g., use the response usage fields to compute
tokens_used * per_token_price) and only fall back to avg_cost_usd when usage
data or per_token pricing is unavailable; specifically modify the logic that
deducts budget (the rate limiter/consumption function) to detect model pricing
type (per_token) and apply provider-returned usage metrics, and ensure the card
metadata (avg_cost_usd) remains as a fallback for flat/tiered models or missing
usage.
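The suggested fallback logic can be sketched as follows; the pricing shape, field names, and per-million-token rates are assumptions for illustration, not the actual `ai.ts` types:

```typescript
// Illustrative pricing model: prefer provider-reported usage for
// token-priced models; fall back to the per-call average otherwise.
type Pricing =
  | { kind: "flat"; avg_cost_usd: number }
  | {
      kind: "per_token";
      input_usd_per_1m: number;
      output_usd_per_1m: number;
      avg_cost_usd: number; // fallback when usage is unavailable
    };

interface Usage {
  input_tokens: number;
  output_tokens: number;
}

function costUsd(pricing: Pricing, usage?: Usage): number {
  if (pricing.kind === "per_token" && usage) {
    return (
      (usage.input_tokens / 1_000_000) * pricing.input_usd_per_1m +
      (usage.output_tokens / 1_000_000) * pricing.output_usd_per_1m
    );
  }
  return pricing.avg_cost_usd;
}
```

With this shape, `avg_cost_usd` survives as metadata for flat/tiered models and as a safety net when the provider omits usage, which is what the review asks for.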
```typescript
exportNode(
  nodeId: string,
  format: EmbedExportAs,
  requestId?: string
): Promise<ArrayBuffer | null> {
  const rid = requestId ?? this.generateRequestId();
  return new Promise<ArrayBuffer | null>((resolve) => {
    const unsub = this.on("export-result", (evt) => {
      if (evt.requestId !== rid) return;
      unsub();
      resolve(evt.data);
    });
    this.send({ type: "grida:export", requestId: rid, nodeId, format });
  });
}

/**
 * Get the structural ancestry path from the scene root to a node.
 *
 * Returns an array `[root, ..., parent, nodeId]`, or `null` if the node
 * does not exist in the current scene.
 *
 * @param nodeId - The target node.
 * @param requestId - Optional caller-chosen ID for correlating the response.
 *   Auto-generated when omitted.
 */
getNodeIdPath(
  nodeId: string,
  requestId?: string
): Promise<string[] | null> {
  const rid = requestId ?? this.generateRequestId();
  return new Promise<string[] | null>((resolve) => {
    const unsub = this.on("node-id-path-result", (evt) => {
      if (evt.requestId !== rid) return;
      unsub();
      resolve(evt.path);
    });
    this.send({
      type: "grida:get-node-id-path",
      requestId: rid,
      nodeId,
    });
  });
}
```
Reject unanswered RPC-style calls.
exportNode() and getNodeIdPath() only resolve on a matching reply. If the iframe reloads, the bridge is older than the host SDK, or the instance is disposed before the response arrives, the Promise hangs forever. These calls need a timeout/abort path and pending-request cleanup on teardown.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@editor/scaffolds/embed/grida-embed.ts` around lines 201 - 244, exportNode and
getNodeIdPath create Promises that can hang if no reply arrives; add a
timeout/abort rejection and ensure pending-request cleanup on teardown. Modify
exportNode and getNodeIdPath to: (1) accept/derive an AbortSignal or timeout
value, (2) install a timer that unsubscribes the event listener and rejects the
Promise with a TimeoutError when elapsed, (3) on normal reply clear the timer,
unsubscribe and resolve, and (4) register the pending request in a central map
(by requestId) so the bridge's dispose/teardown can iterate and
reject/unsubscribe all outstanding Promises (cleaning timers) to avoid leaks;
reference the existing generateRequestId(), this.on/unsub pattern, the
"export-result" and "node-id-path-result" event handlers, and ensure
resolve/reject are only called once.
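One way to implement this, sketched generically; the class, method names, and the 10-second default are illustrative, not the actual SDK API:

```typescript
// Race each pending reply against a timeout, and track pending requests
// so dispose() can reject them all instead of leaving Promises hanging.
type Pending = {
  reject: (err: Error) => void;
  unsub: () => void;
  timer: ReturnType<typeof setTimeout>;
};

class RpcBridgeSketch {
  private pending = new Map<string, Pending>();

  request<T>(
    rid: string,
    send: () => void,
    subscribe: (onReply: (value: T) => void) => () => void,
    timeoutMs = 10_000
  ): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const unsub = subscribe((value) => {
        this.settle(rid); // clears the timer and forgets the request
        resolve(value);
      });
      const timer = setTimeout(() => {
        this.settle(rid);
        reject(new Error(`request ${rid} timed out after ${timeoutMs}ms`));
      }, timeoutMs);
      this.pending.set(rid, { reject, unsub, timer });
      send();
    });
  }

  /** Clear the timer, unsubscribe, and drop the pending entry (idempotent). */
  private settle(rid: string): void {
    const p = this.pending.get(rid);
    if (!p) return;
    clearTimeout(p.timer);
    p.unsub();
    this.pending.delete(rid);
  }

  /** Reject every outstanding request on teardown so no Promise hangs. */
  dispose(): void {
    for (const [rid, p] of [...this.pending]) {
      this.settle(rid);
      p.reject(new Error(`request ${rid} aborted: bridge disposed`));
    }
  }
}
```

Because `settle()` is idempotent, a late reply after a timeout or a disposal is simply ignored, and `resolve`/`reject` each fire at most once per request.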
```
table MarkdownEmbedNodeProperties {
  /// Background fill paints for the markdown container.
  fill_paints:[PaintStackItem];
  /// The GitHub Flavored Markdown source text.
  markdown:string;
  /// Per-corner rectangular corner radius.
  corner_radius:RectangularCornerRadiusTrait;
}

/// Node variant: Markdown embed (render-only).
///
/// Stores GFM markdown source and renders it directly to a Skia Picture
/// at paint time. Not structurally editable — treated as a single text block.
table MarkdownEmbedNode {
  node:SystemNodeTrait (required);
  layer:LayerTrait (required);
  properties:MarkdownEmbedNodeProperties;
}
```
Add explicit field IDs to the new FlatBuffers tables.
The new MarkdownEmbedNodeProperties and MarkdownEmbedNode fields rely on implicit numbering right now. That's brittle for schema evolution and doesn't follow the repo's FlatBuffers rule.
Suggested schema fix
```diff
 table MarkdownEmbedNodeProperties {
   /// Background fill paints for the markdown container.
-  fill_paints:[PaintStackItem];
+  fill_paints:[PaintStackItem] (id: 0);
   /// The GitHub Flavored Markdown source text.
-  markdown:string;
+  markdown:string (id: 1);
   /// Per-corner rectangular corner radius.
-  corner_radius:RectangularCornerRadiusTrait;
+  corner_radius:RectangularCornerRadiusTrait (id: 2);
 }

 /// Node variant: Markdown embed (render-only).
 ///
 /// Stores GFM markdown source and renders it directly to a Skia Picture
 /// at paint time. Not structurally editable — treated as a single text block.
 table MarkdownEmbedNode {
-  node:SystemNodeTrait (required);
-  layer:LayerTrait (required);
-  properties:MarkdownEmbedNodeProperties;
+  node:SystemNodeTrait (id: 0, required);
+  layer:LayerTrait (id: 1, required);
+  properties:MarkdownEmbedNodeProperties (id: 2);
 }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
table MarkdownEmbedNodeProperties {
  /// Background fill paints for the markdown container.
  fill_paints:[PaintStackItem] (id: 0);
  /// The GitHub Flavored Markdown source text.
  markdown:string (id: 1);
  /// Per-corner rectangular corner radius.
  corner_radius:RectangularCornerRadiusTrait (id: 2);
}

/// Node variant: Markdown embed (render-only).
///
/// Stores GFM markdown source and renders it directly to a Skia Picture
/// at paint time. Not structurally editable — treated as a single text block.
table MarkdownEmbedNode {
  node:SystemNodeTrait (id: 0, required);
  layer:LayerTrait (id: 1, required);
  properties:MarkdownEmbedNodeProperties (id: 2);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@format/grida.fbs` around lines 1491 - 1508, The new FlatBuffers tables
MarkdownEmbedNodeProperties and MarkdownEmbedNode are using implicit field
numbering; update both table definitions to use explicit field IDs for each
field (e.g., add id: N to fill_paints, markdown, corner_radius in
MarkdownEmbedNodeProperties and to node, layer, properties in
MarkdownEmbedNode), keep the required attribute on node and layer, choose stable
incremental IDs consistent with the project convention and mark them immutable
for future schema evolution.
```
PathNode,          // SVG path data (render-only, no vector network)
AttributedTextNode,
TrayNode,
MarkdownEmbedNode, // GFM markdown rendered directly to a Skia Picture (render-only)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
for f in \
crates/grida-canvas/tests/fbs_roundtrip.rs \
packages/grida-canvas-io/__tests__/format-roundtrip.test.ts
do
if [ -f "$f" ]; then
printf '\n== %s ==\n' "$f"
rg -n 'MarkdownEmbed|markdown' "$f" || true
else
printf '\nMISSING: %s\n' "$f"
fi
done
Length of output: 173
Add MarkdownEmbedNode to both round-trip test suites.
The new union member is not covered by the Rust (crates/grida-canvas/tests/fbs_roundtrip.rs) or TypeScript (packages/grida-canvas-io/__tests__/format-roundtrip.test.ts) round-trip tests. Without explicit test coverage, the codec paths for this variant may silently drop or mis-tag the value when encoding/decoding.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@format/grida.fbs` at line 1604, The new union member MarkdownEmbedNode must
be added to both round-trip test suites so its codec path is exercised; update
the Rust test crates/grida-canvas/tests/fbs_roundtrip.rs to include a sample
MarkdownEmbedNode value in the roundtrip cases (the test that constructs union
variants and asserts encode->decode equality) and update the TypeScript test
packages/grida-canvas-io/__tests__/format-roundtrip.test.ts to include a
corresponding MarkdownEmbedNode fixture in the test vector used by the
roundtrip/assertion logic so encoding/decoding for MarkdownEmbedNode is
verified.
```typescript
case "markdown": {
  const mdNode = node as grida.program.nodes.MarkdownNode;

  const fillPaintsFiltered = paints(mdNode, "fill");
  const fillPaintsOffset = format.paint.encode.fillPaints(
    builder,
    fillPaintsFiltered,
    fbs.MarkdownEmbedNodeProperties.createFillPaintsVector
  );
  const markdownOffset = builder.createString(mdNode.markdown ?? "");
  const mdCornerUniform = mdNode.corner_radius;
  const cornerRadiusOffset =
    format.shape.encode.rectangularCornerRadiusTrait(builder, {
      rectangular_corner_radius_top_left:
        mdNode.rectangular_corner_radius_top_left ?? mdCornerUniform,
      rectangular_corner_radius_top_right:
        mdNode.rectangular_corner_radius_top_right ?? mdCornerUniform,
      rectangular_corner_radius_bottom_left:
        mdNode.rectangular_corner_radius_bottom_left ?? mdCornerUniform,
      rectangular_corner_radius_bottom_right:
        mdNode.rectangular_corner_radius_bottom_right ?? mdCornerUniform,
      corner_smoothing: mdNode.corner_smoothing,
    });
```
Preserve corner_radius on Markdown round-trips.
Line 1928 still treats mdNode.corner_radius as a valid input, but the decoder only reconstructs the four rectangular_corner_radius_* fields. A Markdown node authored with just the uniform radius will not deserialize back to the same TS shape, so any caller still keyed off corner_radius silently loses that value after save/load.
💡 Suggested fix
```diff
 export function markdownEmbed(
   n: fbs.MarkdownEmbedNode,
   id: string,
   systemNode: fbs.SystemNodeTrait,
   layer: fbs.LayerTrait | null,
   opacity: number,
   layoutFields: ReturnType<typeof format.layout.decode.nodeLayout>,
   effects?: grida.program.nodes.i.IEffects
 ): grida.program.nodes.MarkdownNode {
   const props = n.properties();
   const fillPaints = props
     ? format.paint.decode.fillPaints(props)
     : undefined;
+  const cornerRadiusTrait = props?.cornerRadius() ?? null;
   const cornerRadiusProps =
-    format.shape.decode.rectangularCornerRadiusTrait(
-      props?.cornerRadius() ?? null
-    );
+    format.shape.decode.rectangularCornerRadiusTrait(cornerRadiusTrait);
+  const uniformCornerRadius =
+    cornerRadiusTrait &&
+    cornerRadiusProps.rectangular_corner_radius_top_left ===
+      cornerRadiusProps.rectangular_corner_radius_top_right &&
+    cornerRadiusProps.rectangular_corner_radius_top_left ===
+      cornerRadiusProps.rectangular_corner_radius_bottom_left &&
+    cornerRadiusProps.rectangular_corner_radius_top_left ===
+      cornerRadiusProps.rectangular_corner_radius_bottom_right
+      ? cornerRadiusProps.rectangular_corner_radius_top_left
+      : undefined;
   const baseName = systemNode.name() ?? "markdown";
   const baseActive = systemNode.active() ?? true;
   const baseLocked = systemNode.locked() ?? false;
   return {
     type: "markdown",
     id,
     name: baseName,
     active: baseActive,
     locked: baseLocked,
     opacity,
     z_index: 0,
     markdown: props?.markdown() ?? "",
     ...(fillPaints ? { fill_paints: fillPaints } : {}),
+    ...(uniformCornerRadius !== undefined
+      ? { corner_radius: uniformCornerRadius }
+      : {}),
     rectangular_corner_radius_top_left:
       cornerRadiusProps.rectangular_corner_radius_top_left,
```

Also applies to: 5880-5915
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/grida-canvas-io/format.ts` around lines 1918 - 1942, The encoder for
the "markdown" case currently uses mdNode.corner_radius to derive the four
rectangular_corner_radius_* values (via mdCornerUniform and
format.shape.encode.rectangularCornerRadiusTrait) but the decoder only
reconstructs the four rectangular fields so a uniform corner_radius is lost on
round-trip; update the Markdown serialization/deserialization so the original
uniform radius is preserved: either include a dedicated corner_radius field in
the serialized MarkdownEmbedNodeProperties when mdNode.corner_radius is present
(when building in the "markdown" case) or, if schema can't change, modify the
decoder that reconstructs the TS shape to set corner_radius when all
rectangular_corner_radius_* values are equal to the same uniform value (update
the code that reads fbs.MarkdownEmbedNodeProperties back into a
grida.program.nodes.MarkdownNode to restore mdNode.corner_radius). Ensure you
reference the same symbols: mdNode, mdCornerUniform,
format.shape.encode.rectangularCornerRadiusTrait, and the Markdown decoder that
maps fbs.MarkdownEmbedNodeProperties -> MarkdownNode.
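The decoder-side half of the suggestion (restoring a uniform `corner_radius` when all four decoded corners agree) reduces to a small predicate. A sketch, with field names mirroring the review's `rectangular_corner_radius_*` props but with the surrounding decoder omitted:

```typescript
// Restore a uniform corner radius only when all four corners agree;
// otherwise the node stays on the per-corner representation.
interface CornerRadiusProps {
  rectangular_corner_radius_top_left: number;
  rectangular_corner_radius_top_right: number;
  rectangular_corner_radius_bottom_left: number;
  rectangular_corner_radius_bottom_right: number;
}

function uniformCornerRadius(p: CornerRadiusProps): number | undefined {
  const tl = p.rectangular_corner_radius_top_left;
  const uniform =
    tl === p.rectangular_corner_radius_top_right &&
    tl === p.rectangular_corner_radius_bottom_left &&
    tl === p.rectangular_corner_radius_bottom_right;
  return uniform ? tl : undefined;
}
```

On the next encode, the encoder's `?? mdCornerUniform` fallback then reproduces the original per-corner values, closing the round-trip.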
Summary
Canary batch covering canvas rendering, MarkdownNode, export performance, embed SDK, and AI model updates.
Canvas Rendering & Performance
- `FrameRenderStrategy` struct centralizes per-frame rendering policy (zoom level, GPU availability), reducing duplication in the renderer
- `VisibilitySet` bitset allocation is skipped when all leaves are visible (fit-zoom on large scenes); ~2.4% improvement on 50k-node benchmarks
- `export_node_image()` reuses the live renderer's cache instead of cloning the full scene; leaf export on a 136K-node scene drops from ~560ms to <0.1ms
- `grida-dev export-bench` subcommand + Criterion `bench_export` for A/B comparisons with pixel-exact diffing

MarkdownNode

- `MarkdownNode` (née `MarkdownEmbedNode`) with FlatBuffers schema, Rust FBS encode/decode, TS FBS encode/decode, and DOM fallback renderer
- Rendering pipeline (`grida-markdown.css` → Stylo CSS → Taffy layout → Skia paint), removing the standalone 877-line markdown painter
- `.md` file drop/paste support, AI tool `make_from_markdown` integration, auto-height on resize

Embed SDK

- `exportNode()` — export a single node as PNG/JPEG/WEBP/BMP/PDF/SVG
- `getNodeIdPath()` — retrieve the structural ancestry path of a node

AI Models & Budget

- `ai-models` agent skill with `model_info.py` image model lookup (`--image`, `--all` flags)

WASM

- `@grida/canvas-wasm` 0.91.0-canary.18

Summary by CodeRabbit
New Features
Documentation