perf(canvas): rendering engine performance overhaul — FxHash, adaptive frame loop, layout fast-path, DenseNodeMap (#601)
Add `run_frameloop_pan_pass` — a real-time GPU benchmark that goes through the actual `FrameLoop` decision path instead of calling queue_unstable/queue_stable directly. This is the only benchmark that captures the real user-facing bottleneck: stable frames interrupting pan interactions. The pass uses real `thread::sleep()` at 60fps cadence, real wall-clock timestamps for FrameLoop, and the exact same apply_changes → build_plan → flush_with_plan code path as `Application::frame()`. Seven scroll-interval scenarios sweep from 16ms (fast flick) to 500ms (discrete clicks), revealing how FrameLoop's stable-frame decisions affect the frame-time distribution at each interaction speed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
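The real-time pacing described above can be sketched as a tiny harness: drive ticks at a fixed cadence with wall-clock sleeps and record per-tick durations. This is a minimal illustration only — `run_pass` and its parameters are hypothetical names, and the real pass runs the full FrameLoop/apply_changes path inside the closure.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of a fixed-cadence benchmark loop. The `frame`
// closure stands in for the real FrameLoop decision + draw path.
fn run_pass(interval: Duration, ticks: u32, mut frame: impl FnMut(Instant)) -> Vec<Duration> {
    let mut samples = Vec::with_capacity(ticks as usize);
    for _ in 0..ticks {
        let start = Instant::now();
        frame(start); // real wall-clock timestamp handed to the frame path
        samples.push(start.elapsed());
        // Sleep off the remainder of the cadence slot (e.g. 16ms at 60fps).
        if let Some(rest) = interval.checked_sub(start.elapsed()) {
            std::thread::sleep(rest);
        }
    }
    samples
}

fn main() {
    // A 16ms interval models the "fast flick" end of the sweep.
    let samples = run_pass(Duration::from_millis(1), 5, |_now| {});
    assert_eq!(samples.len(), 5);
    println!("ok");
}
```

Per-tick durations can then be aggregated into the p50/p99 frame-time distribution per scroll-interval scenario.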
Added information about the new `frameloop` scenario type, which captures stable-frame jank during panning. Updated descriptions to clarify how `frameloop` scenarios differ from other benchmarks and their importance in analyzing panning smoothness and adaptive timing. Enhanced documentation on the implications of using `frameloop` for accurate performance measurements.
Replace SipHash (std HashMap) with an FxHash-style multiplicative hasher for all internal rendering caches keyed by NodeId (u64). These caches have trusted-input keys only, so DoS-resistant hashing is unnecessary. The fast hasher reduces per-lookup cost from ~25ns to ~3ns, yielding a 5-15% improvement on pan/zoom operations (Criterion-verified). Affected caches: geometry, picture, vector_path, atlas, atlas_set, compositor, painter draw_order, and scene node_map.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
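A minimal sketch of the hasher family being described: a rustc-FxHash-style multiplicative hasher plugged into `std::collections::HashMap` for trusted `u64` keys. The type names (`NodeIdHasher`, `FastMap`) are illustrative, not the PR's; the multiplier is the 64-bit constant used by rustc's FxHasher.

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

#[derive(Default)]
struct NodeIdHasher(u64);

impl Hasher for NodeIdHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Generic fallback: fold 8-byte chunks the same way as write_u64.
        for chunk in bytes.chunks(8) {
            let mut buf = [0u8; 8];
            buf[..chunk.len()].copy_from_slice(chunk);
            self.write_u64(u64::from_le_bytes(buf));
        }
    }
    fn write_u64(&mut self, n: u64) {
        // One rotate, one xor, one multiply — a few cycles per key,
        // versus SipHash's full permutation rounds.
        self.0 = (self.0.rotate_left(5) ^ n).wrapping_mul(0x517c_c1b7_2722_0a95);
    }
    fn finish(&self) -> u64 {
        self.0
    }
}

type NodeId = u64;
type FastMap<V> = HashMap<NodeId, V, BuildHasherDefault<NodeIdHasher>>;

fn main() {
    let mut cache: FastMap<&str> = FastMap::default();
    cache.insert(42, "geometry entry");
    assert_eq!(cache.get(&42), Some(&"geometry entry"));
    println!("ok");
}
```

The trade-off is deliberate: multiplicative hashing has no flooding resistance, which is acceptable only because NodeIds are generated internally, never attacker-controlled.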
Caution: review failed — pull request was closed or merged during review.

Walkthrough: Introduces dense NodeId-oriented fast-hash and Vec-backed maps, refactors many caches and scene/graph/layout pipelines to use them, adds an adaptive FrameLoop + frameloop benchmark flow, expands WASM perf tooling/bindings and scene-loading instrumentation, and adjusts renderer pan/zoom cache and overlay-only fast paths. Documentation and tooling updated (WASM, .deck support).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as Bench Runner
    participant App as Application
    participant FrameLoop as FrameLoop
    participant Renderer as Renderer
    participant SceneCache as Scene/Cache
    participant Backend as GPU/Backend
    Runner->>App: drive scroll ticks (sleep cadence)
    App->>FrameLoop: invalidate(now)
    FrameLoop->>FrameLoop: update input_cadence_ms (EMA)
    Runner->>FrameLoop: poll(now)
    FrameLoop-->>App: FrameQuality (Stable/Unstable)
    App->>Renderer: apply_changes(camera_change, stable)
    Renderer->>SceneCache: warm/build plan / use pan cache?
    alt pan-cache blit path (GPU, overlay-only)
        Renderer->>Backend: blit_content_cache(dx,dy)
        Backend-->>Renderer: success/fail
    else full draw
        Renderer->>Backend: build_frame_plan & flush_with_plan
        Backend-->>Renderer: drawn
    end
    Renderer-->>App: content_changed
    App->>FrameLoop: complete(FrameQuality)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 3 passed
perf(canvas): use FxHash for NodeId-keyed caches
Slow panning (80-120ms trackpad intervals) was laggier than fast panning because the fixed 50ms stable delay fired between every scroll event, nuking the pan image cache and forcing expensive full redraws. Two changes fix this:

1. FrameLoop tracks input cadence via EMA and extends the effective stable delay to max(base, cadence × 2.5). During 80ms scrolling the delay becomes ~200ms, preventing stable intrusions mid-interaction. Cadence resets on session breaks (>500ms gap).
2. The pan image cache is no longer invalidated on stable-only frames in both queue() and apply_changes(). The render path recaptures the cache from every full-quality draw, so the next unstable frame always has a fresh cache.

Benchmark results (fl_80ms, 200 nodes):
- Stable frame intrusions: 25 → 1
- p50 frame time: 3,025µs → 163µs (18.6× improvement)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
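Change 1 can be sketched in isolation. Everything here is a hypothetical stand-in for the real FrameLoop fields — `FrameLoopCadence`, `BASE_STABLE_DELAY_MS`, `SESSION_BREAK_MS` are illustrative names, and the EMA weight `EMA_ALPHA` is an assumed value; only the max(base, cadence × 2.5) rule and the 500ms session break come from the commit message.

```rust
// Sketch: EMA-tracked input cadence driving an adaptive stable delay.
struct FrameLoopCadence {
    input_cadence_ms: Option<f64>, // EMA of gaps between input events
    last_input_ms: Option<f64>,
}

const BASE_STABLE_DELAY_MS: f64 = 50.0;
const SESSION_BREAK_MS: f64 = 500.0;
const EMA_ALPHA: f64 = 0.3; // assumed smoothing factor

impl FrameLoopCadence {
    fn new() -> Self {
        Self { input_cadence_ms: None, last_input_ms: None }
    }

    /// Called on every input event (e.g. a scroll tick) with wall-clock time.
    fn invalidate(&mut self, now_ms: f64) {
        if let Some(last) = self.last_input_ms {
            let gap = now_ms - last;
            if gap > SESSION_BREAK_MS {
                // Interaction session ended; start measuring afresh.
                self.input_cadence_ms = None;
            } else {
                self.input_cadence_ms = Some(match self.input_cadence_ms {
                    Some(ema) => ema + EMA_ALPHA * (gap - ema),
                    None => gap,
                });
            }
        }
        self.last_input_ms = Some(now_ms);
    }

    /// Effective delay before a stable (full-quality) frame may run.
    fn effective_stable_delay_ms(&self) -> f64 {
        match self.input_cadence_ms {
            Some(c) => BASE_STABLE_DELAY_MS.max(c * 2.5),
            None => BASE_STABLE_DELAY_MS,
        }
    }
}

fn main() {
    let mut fl = FrameLoopCadence::new();
    // 80ms trackpad scrolling: the delay grows to ~200ms, so stable
    // frames no longer fire between consecutive scroll events.
    for i in 0..10 {
        fl.invalidate(i as f64 * 80.0);
    }
    assert!((fl.effective_stable_delay_ms() - 200.0).abs() < 1e-6);
    println!("delay = {:.0} ms", fl.effective_stable_delay_ms());
}
```

With no recent input (or after a >500ms gap) the delay falls back to the 50ms base, so idle settling is unaffected.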
Introduced a new method `scene_envelope` in `SceneCache` to return the bounding envelope of all scene content in the R-tree. This method provides an O(1) operation by reading the cached root node envelope and returns `None` if the scene is empty. This enhancement improves the efficiency of scene content management.
Updated the `Renderer` to improve handling of cached content for overlay-only frames, allowing for faster rendering when neither scene data nor camera changes occur. Introduced a new method `blit_content_cache` to restore cached content efficiently. Modified `apply_changes` to return a boolean indicating whether content needs re-rendering, optimizing the rendering process during stable frames. Additionally, updated the `UnknownTargetApplication` to utilize these enhancements for better performance during frame rendering.
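The intended control flow can be modeled with a toy example. All types here are illustrative (the real `apply_changes`/`blit_content_cache` operate on GPU state), and note that a Codex review comment below questions whether the real guard is actually reachable as written — this sketch shows only the intended behavior.

```rust
// Toy model of the overlay-only fast path: when neither scene nor camera
// changed, restore cached content instead of rebuilding a frame plan.
struct Renderer {
    content_cache_valid: bool,
    blits: u32,
    full_draws: u32,
}

impl Renderer {
    /// Returns true when content must be re-rendered.
    fn apply_changes(&mut self, scene_changed: bool, camera_changed: bool) -> bool {
        scene_changed || camera_changed
    }

    /// Restore cached content; returns false if the cache is unavailable.
    fn blit_content_cache(&mut self) -> bool {
        if self.content_cache_valid {
            self.blits += 1;
            true
        } else {
            false
        }
    }

    fn full_draw(&mut self) {
        self.full_draws += 1;
        self.content_cache_valid = true; // recaptured from every full draw
    }
}

fn frame(r: &mut Renderer, scene_changed: bool, camera_changed: bool) {
    let content_changed = r.apply_changes(scene_changed, camera_changed);
    // Overlay-only frame: nothing under the overlay moved, so blit.
    if !content_changed && r.blit_content_cache() {
        return; // overlay would be re-drawn here
    }
    r.full_draw();
}

fn main() {
    let mut r = Renderer { content_cache_valid: false, blits: 0, full_draws: 0 };
    frame(&mut r, true, false);  // scene change -> full draw, cache captured
    frame(&mut r, false, false); // overlay-only -> cheap blit
    assert_eq!((r.full_draws, r.blits), (1, 1));
    println!("ok");
}
```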
Introduced the `intersects_topmost` method in `HitTester`, which returns the shallowest nodes whose bounding boxes intersect a given rectangle. This method improves performance by ensuring that only ancestor nodes are returned, avoiding unnecessary descendant checks. Updated the `handle_pointer_move` function in `SurfaceState` to utilize this new method for more efficient selection during pointer interactions.
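The ancestor-only filtering can be illustrated with a standalone sketch. This `intersects_topmost` takes precomputed intersecting candidates (ordered shallowest-first) and parent links rather than querying an R-tree, so it is a simplification of the real `HitTester` method, not its implementation.

```rust
use std::collections::{HashMap, HashSet};

type NodeId = u64;

// Keep a candidate only if none of its ancestors was already kept, so the
// result contains the shallowest intersecting nodes and no descendants.
fn intersects_topmost(
    candidates: &[NodeId],            // nodes whose bbox intersects the rect
    parent: &HashMap<NodeId, NodeId>, // child -> parent links
) -> Vec<NodeId> {
    let mut kept: HashSet<NodeId> = HashSet::new();
    let mut out = Vec::new();
    'cand: for &id in candidates {
        // Walk the parent chain; drop this node if an ancestor is kept.
        let mut cur = id;
        while let Some(&p) = parent.get(&cur) {
            if kept.contains(&p) {
                continue 'cand;
            }
            cur = p;
        }
        kept.insert(id);
        out.push(id);
    }
    out
}

fn main() {
    // 1 is the parent of 2; both intersect: only the ancestor survives.
    let parent: HashMap<NodeId, NodeId> = [(2, 1)].into_iter().collect();
    assert_eq!(intersects_topmost(&[1, 2, 3], &parent), vec![1, 3]);
    println!("ok");
}
```

Each candidate costs at most one parent-chain walk, which is why marquee-style selection over many nested children stays cheap.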
…raction

Refactored the image loading process to improve the handling of raster images. Introduced a new `load_raster` function that reads image bytes and registers them with the renderer, allowing for direct scene creation. Updated the `extract_image_urls` function to collect URLs from various node types more efficiently by utilizing a dedicated helper function for processing paints. This change enhances the overall performance and clarity of the image extraction logic.
…during application closure

Introduced an `exiting` boolean flag in the `NativeApplication` struct to prevent event processing after a `CloseRequested` event. This change ensures that events are not processed while the application is in the process of shutting down, improving stability and preventing potential issues during termination.
…tion reduction

Reduce syncDocument time for 136K-node scenes (yrr-main.grida) by applying the property-split pattern to layers, effects, and layout hot paths — the same approach that cut geometry DFS from 4,000ms to 16ms. Key changes:

1. Faster hashers (rustc-hash): replace std HashMap with FxHashMap in the FBS decode hot path for string_to_internal_id, children_by_parent, and position_map. ~5-10% decode speedup for 136K entries.
2. NodeLayerCore extraction: add a compact ~20-byte Copy struct with active/opacity/blend_mode/mask/clips_content/has_effects/node_type/is_flex, stored in a DenseNodeMap on SceneGraph. The effect tree and layers DFS read this instead of the full 500+ byte Node enum for visibility and dispatch checks.
3. Layout engine optimization: use NodeLayerCore for is_layout_node and is_flex_container checks, and NodeGeoData for schema position/size reads in extract_all_layouts. Eliminates full-Node reads for ~90% of nodes in the layout extraction phase.
4. SmallVec for Paints: change Paints internal storage from Vec<Paint> to SmallVec<[Paint; 1]>. Most nodes have 0-1 fills/strokes, avoiding ~272K heap allocations for a 136K-node scene.
5. Geometry resolve_layout robustness: replace .expect() with a graceful fallback to schema data for Container nodes missing layout results (orphan nodes not reachable from scene roots).

WASM-on-Node benchmark (yrr-main.grida, 136K nodes):
- switch phase: 6,000ms → 2,668ms (-55%)
- total (load+switch): ~10,200ms → ~7,241ms (-29%)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
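Change 2 and the DenseNodeMap from the PR title can be sketched together: a Vec-backed map indexed by dense integer ids plus a small Copy record that hot DFS passes read instead of the full Node enum. The field layout and names below are illustrative, not the actual `NodeLayerCore` definition.

```rust
type NodeId = u64;

// Sketch of a Vec-backed DenseNodeMap: NodeIds are small dense integers,
// so a Vec<Option<V>> gives O(1) access with no hashing at all.
struct DenseNodeMap<V> {
    slots: Vec<Option<V>>,
}

impl<V> DenseNodeMap<V> {
    fn new() -> Self {
        Self { slots: Vec::new() }
    }
    fn insert(&mut self, id: NodeId, value: V) {
        let i = id as usize;
        if i >= self.slots.len() {
            self.slots.resize_with(i + 1, || None);
        }
        self.slots[i] = Some(value);
    }
    fn get(&self, id: NodeId) -> Option<&V> {
        self.slots.get(id as usize).and_then(|s| s.as_ref())
    }
}

// Compact per-node record (a few words, Copy) so visibility/dispatch
// checks avoid touching the full 500+ byte Node enum. Illustrative fields.
#[derive(Clone, Copy)]
struct NodeLayerCore {
    active: bool,
    is_flex: bool,
    has_effects: bool,
    opacity: f32,
}

fn main() {
    let mut layer_core: DenseNodeMap<NodeLayerCore> = DenseNodeMap::new();
    layer_core.insert(
        3,
        NodeLayerCore { active: true, is_flex: false, has_effects: false, opacity: 1.0 },
    );
    assert!(layer_core.get(3).map(|c| c.active).unwrap_or(false));
    assert!(layer_core.get(99).is_none());
    println!("ok");
}
```

The win is cache locality: a DFS over 136K nodes streams through a contiguous array of small records instead of chasing pointers into large heterogeneous enum values.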
…ansform formatting

- Updated NodeIdentifier creation to handle null id by using builder.createString(id) when id is not provided.
- Reformatted the postLayoutTransform parameters for better readability, aligning the arguments vertically.
…s storage

- Eliminated the SmallVec dependency from the Paints structure, replacing it with a standard Vec<Paint> for internal storage.
- Updated related methods to accommodate the new storage type, simplifying the code and reducing heap allocations.
- Introduced a new method for creating Paints from pre-extracted data to enhance performance during scene graph construction.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e955d0c442
```diff
-if let Some((_, scene)) = self.loaded_scenes.iter().find(|(id, _)| id == scene_id) {
-    self.renderer.load_scene(scene.clone());
+if let Some(pos) = self.loaded_scenes.iter().position(|(id, _)| id == scene_id) {
+    let (_, scene) = self.loaded_scenes.swap_remove(pos);
```
Preserve loaded scenes when switching
Using swap_remove here mutates loaded_scenes by deleting the selected scene, so multi-scene documents lose entries after each switch. In practice, after switching to a scene once, subsequent switch_scene calls can no longer find that scene (and loaded_scene_ids() shrinks over time), which breaks normal scene toggling workflows.
```rust
// to the previous frame. Restore it from the pan image cache and
// skip the expensive frame-plan build + full draw. The overlay is
// still re-drawn below so marquee/selection visuals update correctly.
if !stable && !content_changed && self.renderer.blit_content_cache() {
```
Make overlay-only cache path reachable
This guard is logically unsatisfiable with the current stable computation: when content_changed is false, camera_change.any_changed() is also false, and stable is forced true by quality == Stable || !camera_change.any_changed(). That means the overlay-only fast path never executes, so overlay-only invalidations still go through full plan/render work instead of reusing the cached content.
Actionable comments posted: 14
Caution: some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
crates/grida-dev/src/platform/native_application.rs (1)
237-252: ⚠️ Potential issue | 🟡 Minor

`RedrawRequested` can still be processed after `exiting` is set. The `RedrawRequested` handler (lines 237-242) executes before the `exiting` guard (line 250). If a redraw event arrives after `CloseRequested` has set `exiting = true`, the redraw will still execute, potentially calling into partially torn-down state. Consider moving the `exiting` guard before the `RedrawRequested` check:

Proposed fix

```diff
 fn window_event(
     &mut self,
     event_loop: &winit::event_loop::ActiveEventLoop,
     _window_id: winit::window::WindowId,
     event: WindowEvent,
 ) {
+    if self.exiting {
+        return;
+    }
+
     if let WindowEvent::RedrawRequested = &event {
         self.app.redraw_requested();
         if let Err(e) = self.gl_surface.swap_buffers(&self.gl_context) {
             eprintln!("Error swapping buffers: {:?}", e);
         }
     }

     if let WindowEvent::CloseRequested = &event {
         self.exiting = true;
         event_loop.exit();
         return;
     }
-
-    if self.exiting {
-        return;
-    }
```
packages/grida-canvas-io/format.ts (1)
1337-1350: ⚠️ Potential issue | 🟠 Major

Don't drop non-empty dash patterns for basic shapes. The new comment says only empty dash arrays should be omitted, but this path never writes `strokeDashArray` at all. Any dashed rectangle/ellipse/polygon/star will now serialize as a solid stroke.

🩹 Suggested fix

```diff
+const dashArrayOffset =
+  "stroke_dash_array" in shapeNode &&
+  Array.isArray(shapeNode.stroke_dash_array) &&
+  shapeNode.stroke_dash_array.length > 0
+    ? fbs.StrokeStyle.createStrokeDashArrayVector(
+        builder,
+        shapeNode.stroke_dash_array
+      )
+    : 0;
+
 fbs.StrokeStyle.startStrokeStyle(builder);
 fbs.StrokeStyle.addStrokeCap(
   builder,
   styling.encode.strokeCap(shapeNode.stroke_cap)
 );
 fbs.StrokeStyle.addStrokeJoin(
   builder,
   styling.encode.strokeJoin(shapeNode.stroke_join)
 );
 fbs.StrokeStyle.addStrokeAlign(builder, fbs.StrokeAlign.Inside);
 fbs.StrokeStyle.addStrokeMiterLimit(builder, 4.0);
-// Skip empty dash array vector — decoder reads null as no dashes
+if (dashArrayOffset !== 0) {
+  fbs.StrokeStyle.addStrokeDashArray(builder, dashArrayOffset);
+}
 const strokeStyleOffset = fbs.StrokeStyle.endStrokeStyle(builder);
```
crates/grida-dev/src/main.rs (1)
181-197: ⚠️ Potential issue | 🟠 Major

Remote raster URLs still fall through to `.grida` decode. Both branches only treat rasters specially when `!is_url(source)`, so `https://.../image.png`/`.jpg`/`.webp` still gets downloaded and handed to `grida_file::decode_all()`. That turns an advertised supported input into a startup failure; please add a byte-based raster path for URL sources or reject them explicitly before decode.

Also applies to: 239-253
crates/grida-canvas/src/painter/layer.rs (1)
456-477: ⚠️ Potential issue | 🟠 Major

Use the layer's effective opacity for this fast-path gate. This now keys `non_overlapping_fill_path` generation off the local node opacity, but the painter decides whether it can skip `save_layer_alpha` from `PainterPictureLayerBase::opacity` after ancestor/group opacity has already been folded in. An opaque child under a translucent parent will therefore stop getting a difference path and fall back to the slower save-layer path even though its effective opacity is still < 1.0.

Pass the same effective opacity that you store on the layer (`inner_opacity` for render-surface own layers, otherwise the accumulated layer opacity), not just `n.opacity`.

Example direction

```diff
 fn compute_non_overlapping_fill_path(
     shape: &PainterShape,
     stroke_path: Option<&skia_safe::Path>,
     stroke_overlaps_fill: bool,
     fills: &Paints,
     strokes: &Paints,
-    node_opacity: f32,
+    layer_opacity: f32,
 ) -> Option<skia_safe::Path> {
-    if node_opacity >= 1.0
+    if layer_opacity >= 1.0
         || !stroke_overlaps_fill
         || fills.is_empty()
         || strokes.is_empty()
     {
         return None;
     }
```

Call sites should pass the opacity actually assigned to `PainterPictureLayerBase::opacity`.

Also applies to: 575-577, 685-687, 747-749, 803-805, 859-861, 915-917, 971-973, 1163-1165, 1261-1263
crates/grida-dev/src/bench/runner.rs (1)
1384-1388: ⚠️ Potential issue | 🟠 Major

Reset camera translation before each timed scenario. `set_zoom()` does not undo the translations from `warmup(renderer)` or the previous pass. The new `realtime` and `frameloop` sweeps both finish at non-zero offsets, so later scenarios are benchmarking a different viewport region than earlier ones.

Also applies to: 1479-1483
crates/grida-canvas/src/node/scene_graph.rs (1)
711-715: ⚠️ Potential issue | 🟠 Major

Keep `geo_data`/`layer_core`/`has_flex` in sync with node mutations. `get_node_mut()` now exposes raw mutation while the fast-path caches stay unchanged, and `remove_node()` only clears `geo_data`. After editing geometry/effects/layout or deleting the last flex node, geometry/layout/layer code can read stale cached state. Please funnel node edits through a recomputing helper and clear/recompute `layer_core` and `has_flex` on removal.

Also applies to: 728-733
🧹 Nitpick comments (6)
editor/grida-canvas/editor.ts (1)
3249-3259: Compute `switch` timing only when a scene switch actually runs. When `targetScene` is missing, the current `switch` timing can still show a small non-zero value, which makes logs slightly misleading.

Optional refinement

```diff
-  if (targetScene) {
+  let switchMs = 0;
+  if (targetScene) {
+    const tSwitchStart = __DEV__ ? performance.now() : 0;
     __switchSceneGuard++;
     surface.switchScene(targetScene);
+    switchMs = __DEV__ ? performance.now() - tSwitchStart : 0;
   }
   const tSwitch = __DEV__ ? performance.now() : 0;
 @@
       `scene=${targetScene ?? "(none)"} in ${(tRedraw - t0).toFixed(0)}ms` +
         ` (encode=${encodeMs.toFixed(0)}ms load=${loadMs.toFixed(0)}ms` +
-        ` switch=${(tSwitch - tLoad).toFixed(0)}ms redraw=${(tRedraw - tSwitch).toFixed(0)}ms)`
+        ` switch=${switchMs.toFixed(0)}ms redraw=${(tRedraw - tSwitch).toFixed(0)}ms)`
     );
```
docs/wg/research/chromium/node-data-layout.md (1)
37-47: Consider adding language specifiers to fenced code blocks. Four code blocks lack language identifiers. While `format: md` prevents parsing issues, adding specifiers improves syntax highlighting and reader clarity.

📝 Suggested language specifiers

For the C++ structure examples (lines 37-47, 65-77, 157-162), add `cpp`; for the ASCII flow diagram (lines 180-185, "Main Thread → commit → Pending Tree → activation → Active Tree"), add `text`.

Also applies to: 65-77, 157-162, 180-185
crates/grida-canvas/src/hittest/hit_tester.rs (1)
329-331: Tone down the complexity claim in the doc comment. Line 330 says “amortized O(1)”, but this helper still walks parent chains and is O(depth) per call in the worst case. Updating the wording will avoid misleading future perf assumptions.
crates/grida-canvas/src/resources/mod.rs (1)
144-146: Consider moving this TODO into a schema-level paint accessor. This per-variant list will drift the next time a paint-bearing `Node` is added, and then image preloading will silently miss that variant. A `Node::paint_slices()`/visitor would keep the extraction logic centralized.

I can sketch that accessor if you want to spin it into a follow-up.

Also applies to: 155-207
crates/grida-canvas/src/node/repository.rs (1)
86-94: Consider whether `FromIterator` should preserve IDs. The current `FromIterator` implementation discards incoming `NodeId` values and auto-generates new ones via `repo.insert(node)`. This may be intentional for cloning semantics, but it differs from `filter()`, which preserves IDs. Consider documenting this behavior or using `insert_with_id` if ID preservation is expected.

💡 Optional: preserve IDs in FromIterator

```diff
 impl FromIterator<(NodeId, Node)> for NodeRepository {
     fn from_iter<T: IntoIterator<Item = (NodeId, Node)>>(iter: T) -> Self {
         let mut repo = Self::new();
-        for (_, node) in iter {
-            repo.insert(node);
+        for (id, node) in iter {
+            repo.insert_with_id(id, node);
         }
         repo
     }
 }
```
crates/grida-canvas/src/cache/picture.rs (1)
63-63: Nit: unnecessary `.clone()` on a `Copy` type. `NodeId` is `u64`, which implements `Copy`, so the clone is redundant.

✨ Remove unnecessary clone

```diff
-        self.variant_store.get(&(id.clone(), variant_key))
+        self.variant_store.get(&(*id, variant_key))
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/grida-canvas/src/cache/geometry.rs`:
- Around line 406-429: The TextSpan branch currently computes local_bounds but
returns local_bounds, causing parents to receive object-space bounds; change the
return to world-space bounds so parent unions are correct. In the
GeoNodeKind::TextSpan branch (around GeometryEntry creation and
cache.entries.insert(*id, ...)), replace the returned value (currently using
local_bounds / variable bounds) with the world-space rectangle (world_bounds) so
recursive callers receive world-space geometry.
In `@crates/grida-canvas/src/io/io_grida_fbs.rs`:
- Around line 406-423: The loop over node_entries currently only builds
node_pairs/node_names/position_map but still lets both branches call
SceneGraph::new_from_snapshot(), negating the new fast path; while consuming
node_entries also compute and collect geo_data, layer_core and has_flex (by
invoking extract_geo_data/extract_layer_core logic on each e.node) and store
them into the same preextracted structures, then replace later calls to
SceneGraph::new_from_snapshot() with
SceneGraph::new_from_snapshot_preextracted(node_pairs, node_names, position_map,
geo_data, layer_core, has_flex) so the graph is built from the precomputed
records instead of walking nodes again (update the identical logic applied in
the other branch around the same region as well).
In `@crates/grida-canvas/src/layout/engine.rs`:
- Around line 350-360: The current early return when encountering a non-flex
container (lc.is_flex / LayoutMode::Normal) stops traversal and prevents finding
nested flex subtrees; instead, still create the Taffy leaf for the current node
via self.tree.new_leaf(*node_id, style) to preserve parent's flex-size behavior,
but do NOT return immediately—iterate into the node's children and recurse (the
same routine that eventually calls extract_all_layouts) so any deeper
auto-layout/flex containers are discovered and added to Taffy; keep the existing
leaf creation logic but remove the premature return and add a child traversal
step so nested flex trees are handled.
In `@crates/grida-canvas/src/node/scene_graph.rs`:
- Around line 98-106: The current union of stroke and filter outsets uses
RenderBoundsInflation::expand (which applies per-side max), causing
underestimation when filters apply to already-stroked geometry; change the
combination in expand_inflation_with_effects (and the same logic referenced at
the other block) to add the per-side inflations instead of taking the max so
that a blurred outside stroke becomes stroke + blur (i.e., compute
top/right/bottom/left as stroke_side + filter_side), or add a new method like
RenderBoundsInflation::additive_merge and use that where stroke and filter
outsets must compose additively rather than with max.
- Around line 201-210: The TextSpan branch currently passes
super::schema::LayerEffects::default() into compute_inflation_uniform, ignoring
actual text effects; update Node::TextSpan so render_bounds_inflation calls
compute_inflation_uniform with the TextSpan's real effects (e.g., pass
&n.effects or the actual effect field on the TextSpan) instead of
LayerEffects::default(), ensuring compute_inflation_uniform receives the span's
effects to compute correct inflation.
In `@crates/grida-canvas/src/os/emscripten.rs`:
- Around line 32-42: The new FFI blocks declare extern "C" functions without the
required unsafe qualifier for Rust 2024 (e.g., emscripten_get_now,
emscripten_random, emscripten_get_device_pixel_ratio); update each extern block
that declares Emscripten functions to use unsafe extern "C" instead of extern
"C" (and apply the same change to the other extern blocks in the same
file/range) so all external function declarations match the existing unsafe
pattern used elsewhere in this file.
In `@crates/grida-canvas/src/resources/mod.rs`:
- Around line 131-139: collect_image_urls_from_paints currently pushes every
image ref occurrence causing duplicate loads; change its signature to accept a
mutable "seen" set (e.g., &mut HashSet<String> or &mut FxHashSet<String>)
alongside the out Vec and use resource_ref_url(&img.image) to only push and mark
the URL the first time it's seen (preserving first-seen order), and update all
call sites (the other collect_image_urls_from_paints usages in this file) to
pass the same seen set so load_scene_images / ImageLoaded events aren't emitted
for duplicates.
In `@crates/grida-canvas/src/runtime/effect_tree.rs`:
- Around line 286-294: The code currently pushes RenderSurfaceReason::Shadow if
effects.shadows is non-empty even when all shadow records are inactive; update
the classification in the block handling node.effects() (the code that inspects
effects.blur and effects.shadows in effect_tree.rs) to only push
RenderSurfaceReason::Shadow when at least one shadow record is active (e.g.,
check effects.shadows.iter().any(|s| s.active) or equivalent) so inactive shadow
entries do not force surface allocation; keep the existing checks for blur
(effects.blur) unchanged.
In `@crates/grida-canvas/src/window/application.rs`:
- Around line 1159-1176: The fast-path currently checks !stable but stable is
set true whenever camera_change.any_changed() is false, so overlay-only
invalidations never hit the blit cache; change the branch to gate on the frame
quality instead of stable: in the if at the start of the block that currently
reads "if !stable && !content_changed && self.renderer.blit_content_cache()",
replace the "!stable" check with a check against the settle/real-settle quality
(e.g. "quality != FrameQuality::Settle" or the project's equivalent enum
variant), leaving "!content_changed && self.renderer.blit_content_cache()" and
the existing body (self.renderer.camera.consume_change(),
draw_and_flush_devtools_overlay(), self.frame_loop.complete(quality),
self.last_frame_time = __frame_start, return true) intact so overlay-only frames
use the blit fast path except for the actual settle frame.
- Around line 662-666: The switch_scene implementation currently uses
swap_remove on self.loaded_scenes (in fn switch_scene) which removes the scene
entry so it no longer appears in loaded_scene_ids(); instead locate the scene by
position or reference, clone or borrow the Scene value and call
self.renderer.load_scene(scene_clone_or_ref) without removing the tuple from
self.loaded_scenes, and keep calling self.queue(); ensure you stop using
swap_remove and use indexing/iter().find or iter().position plus
cloning/borrowing to preserve the entry.
In `@crates/grida-dev/src/bench/runner.rs`:
- Around line 1484-1489: The call to run_frameloop_pan_pass hard-codes a
2000.0ms session and ignores the shared frames argument, causing mismatch with
bench-report's meta.frames; replace the literal with a duration derived from the
frames argument (e.g. use fl_s.scroll_interval_ms * frames as f64 or equivalent)
so the pan session length is computed from frames and scroll_interval_ms (keep
the other args: renderer and fl_s.dx, and reference run_frameloop_pan_pass,
fl_s.scroll_interval_ms, fl_s.dx, and frames).
- Around line 693-702: The current single 'if' only injects one scroll even if
now_ms has advanced past multiple scroll_interval_ms boundaries; replace the 'if
(now_ms >= next_scroll_ms) { ... }' with a loop that repeatedly processes scroll
events while now_ms >= next_scroll_ms so you "catch up" all overdue ticks:
inside the loop call renderer.camera.translate(dx * pan_direction / zoom, 0.0),
frame_loop.invalidate(now_ms), increment next_scroll_ms by scroll_interval_ms,
increment scroll_events_fired, and flip pan_direction every 25 events
(scroll_events_fired % 25 == 0) so camera travel and cadence tracking stay
accurate under long frames.
In `@crates/grida-dev/src/main.rs`:
- Around line 317-328: In load_raster(), avoid reading the file twice: read the
file once into bytes (the existing bytes variable), derive image dimensions from
that in-memory buffer (e.g., image::image_dimensions or image::io::Reader over a
Cursor<&[u8]>) instead of calling image_dimensions(path), and remove the second
filesystem read; if this function may run on the async executor, ensure the
std::fs::read and any image decoding are performed inside a blocking context
(e.g., tokio::task::spawn_blocking) so you don't block Tokio workers—update
references to bytes, image_dimensions, and Size/RasterScene accordingly.
In `@docs/wg/feat-2d/wasm-benchmarking.md`:
- Around line 324-329: The docs claim the Node harness parses `[load_scene]`
stderr and points to a specific fixture and command, but the actual test harness
(bench-load-scene.test.ts) does not parse `[load_scene]`, auto-discovers
fixtures from lib/__test__/fixtures/local/, and requires different invocation;
update the "Internal Timing" and the other block (lines ~333-336) to reflect the
real behavior by removing the `[load_scene]` parsing claim, documenting that
bench-load-scene.test.ts auto-discovers fixtures under
lib/__test__/fixtures/local/, and replace the example invocation and hardcoded
fixture path (fixtures/local/perf/local/yrr-main.grida) with the correct
discovery behavior and invocation used by the test harness
(bench-load-scene.test.ts).
---
Outside diff comments:
In `@crates/grida-canvas/src/node/scene_graph.rs`:
- Around line 711-715: get_node_mut currently returns a raw mutable Node causing
cached fast-path state (geo_data, layer_core, has_flex) to go out of sync;
instead route mutations through a recompute helper (e.g., add or use a method
like get_node_mut_recompute_or_mutate) that marks/updates geo_data and triggers
recompute_layer_core_and_has_flex after any geometry/effects/layout changes, and
ensure remove_node clears geo_data and calls the same recompute routine to
recalc layer_core and has_flex (apply same fix to the other occurrence around
lines 728-733).
In `@crates/grida-canvas/src/painter/layer.rs`:
- Around line 456-477: The gate in compute_non_overlapping_fill_path (and the
other similar fast-paths) incorrectly uses the node's local opacity (n.opacity)
instead of the layer's effective/accumulated opacity; update call sites so they
pass the layer's effective opacity (the inner_opacity used/stored on
PainterPictureLayerBase::opacity or the accumulated layer opacity) into
compute_non_overlapping_fill_path (and the analogous functions at the other
listed sites) and use that parameter for the node_opacity check so the
Difference path is computed when the effective opacity < 1.0 even if the local
node opacity is 1.0.
In `@crates/grida-dev/src/bench/runner.rs`:
- Around line 1384-1388: Renderer translations from warmup and previous passes
are not cleared, so call the camera reset/translation-clear method immediately
before setting zoom and before each timed scenario to ensure a consistent
viewport; specifically, insert a call to reset the camera translation (e.g.
renderer.camera.set_translation(0.0, 0.0) or
renderer.camera.reset_translation()) just before
renderer.camera.set_zoom(rt.zoom) (and mirror the same insertion in the other
occurrence around lines with warmup(renderer) later in the file) so each timed
sweep starts with zero offset.
In `@crates/grida-dev/src/main.rs`:
- Around line 181-197: load_scenes_from_source currently only treats raster
inputs specially when !is_url(source), so remote raster URLs fall through to
grida_file::decode_all; fix by detecting raster extensions for URL sources (use
is_url and is_raster_ext on the URL path/filename) and route them to a
byte-based raster loader instead of decode_all: after
read_source_bytes(source).await, if the source looks like a raster extension
call the raster-loading path (e.g., implement/use a helper that accepts bytes
and returns a Raster/Scene similar to load_raster -> produce r.scene) otherwise
continue to grida_file::decode_all(&bytes); update or add a byte-based loader
helper and reference load_raster, read_source_bytes, is_url, is_raster_ext,
grida_file::decode_all, and scene_from_svg_path so reviewers can locate the
changes.
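For the URL branch, extension sniffing only needs the path component. A sketch where `is_raster_ext` and its extension list are illustrative assumptions, not the project's actual helper:

```rust
// Hypothetical helper matching common raster extensions; the real
// loader would then decode from the downloaded bytes.
fn is_raster_ext(path: &str) -> bool {
    let ext = path.rsplit('.').next().unwrap_or("").to_ascii_lowercase();
    matches!(ext.as_str(), "png" | "jpg" | "jpeg" | "webp" | "gif" | "bmp")
}

// Strip the query/fragment before checking a URL's path component.
fn url_is_raster(url: &str) -> bool {
    let path = url.split(|c| c == '?' || c == '#').next().unwrap_or(url);
    is_raster_ext(path)
}

fn main() {
    assert!(url_is_raster("https://example.com/assets/photo.JPG?v=2"));
    assert!(!url_is_raster("https://example.com/doc.grida"));
}
```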
In `@crates/grida-dev/src/platform/native_application.rs`:
- Around line 237-252: The RedrawRequested branch can run after CloseRequested
sets self.exiting = true; move the self.exiting guard earlier so events are
ignored once exiting is true: check self.exiting at the top of the event handler
(or immediately after matching the event) and return early before handling
WindowEvent::RedrawRequested (which calls self.app.redraw_requested and
self.gl_surface.swap_buffers), ensuring CloseRequested (which sets self.exiting
= true and calls event_loop.exit) prevents further redraw processing.
In `@packages/grida-canvas-io/format.ts`:
- Around line 1337-1350: The StrokeStyle serialization path never writes
non-empty dash patterns so dashed basic shapes become solid; update the block
around fbs.StrokeStyle.startStrokeStyle / endStrokeStyle to check
shapeNode.stroke_dash_array (or strokeDashArray) and, if it exists and is
non-empty, create the FlatBuffers vector for the dash values and call
fbs.StrokeStyle.addStrokeDashArray(builder, dashArrayOffset) before calling
fbs.StrokeStyle.endStrokeStyle(builder); keep skipping only when the array is
empty/null. Ensure you reference the existing symbols
fbs.StrokeStyle.addStrokeDashArray, shapeNode.stroke_dash_array (or
strokeDashArray) and the strokeStyleOffset creation so the vector is attached to
the same StrokeStyle being ended.
---
Nitpick comments:
In `@crates/grida-canvas/src/cache/picture.rs`:
- Line 63: The code calls self.variant_store.get(&(id.clone(), variant_key)) but
id is a NodeId (u64) and implements Copy, so the .clone() is redundant; update
the call to use the copied value without cloning (e.g., pass &(id, variant_key)
or otherwise rely on id being Copy) in the variant_store.get invocation to
remove the unnecessary .clone().
In `@crates/grida-canvas/src/hittest/hit_tester.rs`:
- Around line 329-331: Update the doc comment for the helper in hit_tester.rs
(the function documented as walking the parent chain, e.g.,
is_ancestor_selected) to remove the claim “amortized O(1)” and instead state
that the operation is O(depth) per call (but typically short when parents are
selected before their children); keep the rest of the description unchanged so
callers understand it walks up ancestors and stops on the first match.
In `@crates/grida-canvas/src/node/repository.rs`:
- Around line 86-94: The FromIterator impl for NodeRepository currently discards
incoming NodeId values by calling repo.insert(node) which generates new IDs;
update it to preserve IDs by using the repository method that inserts with a
specified id (e.g., call repo.insert_with_id(id, node) or
repo.insert_with_id((node_id, node)) matching your API) so incoming (NodeId,
Node) pairs keep their original IDs, or if discarding is intentional, update the
NodeRepository::from_iter impl's doc comment to explicitly state that IDs are
re-generated; locate the impl FromIterator<(NodeId, Node)> for NodeRepository
and either replace repo.insert(node) with the insert_with_id variant using the
loop variable id or add a doc comment explaining the behavior.
In `@crates/grida-canvas/src/resources/mod.rs`:
- Around line 144-146: The TODO about matching every Node variant should be
replaced by adding a centralized schema-level paint accessor (e.g., implement a
Node::paint_slices() or a visitor) and use that from this module instead of
per-variant matching; locate the per-variant extraction logic in
resources::mod.rs (the block guarded by #[allow(dead_code)] and the matching
list spanning the following lines) and refactor it to call the new
Node::paint_slices() (or a PaintVisitor) so any future paint-bearing Node only
needs to update the accessor rather than every consumer.
In `@docs/wg/research/chromium/node-data-layout.md`:
- Around line 37-47: Add language specifiers to the fenced code blocks showing
the C++-style structures and the ASCII flow diagram: mark the blocks that show
"SVGElement" and "SVGElementRareData" (and the other C++ structure examples
around the same area) with ```cpp to enable C++ syntax highlighting, and mark
the ASCII flow diagram containing "Main Thread → commit → Pending Tree →
activation → Active Tree" with ```text so it renders as plain text; locate the
fences around the blocks that start with "SVGElement (always allocated):" and
the other similar structure examples and update their opening backticks
accordingly.
In `@editor/grida-canvas/editor.ts`:
- Around line 3249-3259: The tSwitch timestamp is always set even when no scene
switch occurs, producing a misleading non-zero "switch" time in the console.log;
change the logic so tSwitch (and thus switch duration) is only measured when a
scene switch actually runs (i.e., when targetScene is present and you perform
the switch), and in the log compute switchMs conditionally (e.g., set switchMs =
targetScene ? (tSwitch - tLoad) : 0 or omit it) so the printed `switch=` value
is zero/absent when no switch happened; update references around tSwitch,
targetScene, and the console.log that surrounds surface.redraw() accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 71395483-06ee-42b3-9e7f-0e507e358069
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `crates/grida-canvas-wasm/lib/bin/grida_canvas_wasm.wasm` is excluded by `!**/*.wasm`
📒 Files selected for processing (60)
- .agents/skills/cg-perf/SKILL.md
- crates/grida-canvas-wasm/lib/__test__/bench-load-scene.test.ts
- crates/grida-canvas-wasm/lib/bin/grida-canvas-wasm.js
- crates/grida-canvas-wasm/lib/modules/canvas-bindings.d.ts
- crates/grida-canvas-wasm/lib/modules/canvas.ts
- crates/grida-canvas-wasm/lib/modules/ffi.ts
- crates/grida-canvas-wasm/package.json
- crates/grida-canvas-wasm/src/wasm_application.rs
- crates/grida-canvas/Cargo.toml
- crates/grida-canvas/examples/tool_io_grida.rs
- crates/grida-canvas/src/cache/atlas/atlas.rs
- crates/grida-canvas/src/cache/atlas/atlas_set.rs
- crates/grida-canvas/src/cache/compositor/cache.rs
- crates/grida-canvas/src/cache/fast_hash.rs
- crates/grida-canvas/src/cache/geometry.rs
- crates/grida-canvas/src/cache/mod.rs
- crates/grida-canvas/src/cache/paragraph.rs
- crates/grida-canvas/src/cache/picture.rs
- crates/grida-canvas/src/cache/scene.rs
- crates/grida-canvas/src/cache/vector_path.rs
- crates/grida-canvas/src/cg/types.rs
- crates/grida-canvas/src/hittest/hit_tester.rs
- crates/grida-canvas/src/io/io_grida_fbs.rs
- crates/grida-canvas/src/layout/cache.rs
- crates/grida-canvas/src/layout/engine.rs
- crates/grida-canvas/src/layout/tree.rs
- crates/grida-canvas/src/node/repository.rs
- crates/grida-canvas/src/node/scene_graph.rs
- crates/grida-canvas/src/node/schema.rs
- crates/grida-canvas/src/os/emscripten.rs
- crates/grida-canvas/src/painter/layer.rs
- crates/grida-canvas/src/painter/painter.rs
- crates/grida-canvas/src/resources/mod.rs
- crates/grida-canvas/src/runtime/effect_tree.rs
- crates/grida-canvas/src/runtime/frame_loop.rs
- crates/grida-canvas/src/runtime/scene.rs
- crates/grida-canvas/src/surface/state.rs
- crates/grida-canvas/src/sys/mod.rs
- crates/grida-canvas/src/window/application.rs
- crates/grida-canvas/src/window/application_emscripten.rs
- crates/grida-canvas/tests/compositor_effects.rs
- crates/grida-dev/Cargo.toml
- crates/grida-dev/src/bench/load_bench.rs
- crates/grida-dev/src/bench/runner.rs
- crates/grida-dev/src/main.rs
- crates/grida-dev/src/platform/native_application.rs
- crates/grida-dev/src/platform/native_demo.rs
- crates/grida-dev/src/platform/winit.rs
- docs/wg/feat-2d/optimization.md
- docs/wg/feat-2d/wasm-benchmarking.md
- docs/wg/feat-2d/wasm-load-scene-optimization.md
- docs/wg/research/chromium/index.md
- docs/wg/research/chromium/node-data-layout.md
- editor/grida-canvas/editor.ts
- editor/scaffolds/embed/use-refig-editor.ts
- packages/grida-canvas-io-figma/__bench__/fig2grida.bench.ts
- packages/grida-canvas-io-figma/fig2grida-core.ts
- packages/grida-canvas-io-figma/lib.ts
- packages/grida-canvas-io/format.ts
- packages/grida-canvas-io/index.ts
💤 Files with no reviewable changes (1)
- crates/grida-canvas/src/cg/types.rs
```rust
// ── 4. Single-pass consume: node_pairs + node_names + position_map ───────
//
// Moves nodes out of node_entries — no cloning of Node enums.
let mut node_pairs: Vec<(crate::node::id::NodeId, Node)> =
    Vec::with_capacity(node_entries.len());
let mut node_names: Vec<(crate::node::id::NodeId, String)> = Vec::new();
let mut position_map: FxHashMap<crate::node::id::NodeId, String> =
    FxHashMap::with_capacity_and_hasher(node_entries.len(), Default::default());

for e in node_entries.into_iter() {
    if let Some(pos) = e.position {
        position_map.insert(e.internal_id, pos);
    }
    if let Some(name) = e.name {
        node_names.push((e.internal_id, name));
    }
    node_pairs.push((e.internal_id, e.node));
}
```
Use the pre-extracted SceneGraph path here.
This consume loop only keeps node_pairs, node_names, and position_map, and both scene-building branches still call SceneGraph::new_from_snapshot(). That means extract_geo_data() / extract_layer_core() immediately walk every node again inside SceneGraph, so the new new_from_snapshot_preextracted() fast path never helps the main .grida load path. Please extract geo_data, layer_core, and has_flex while node_entries is being consumed and build the graph from those precomputed records.
Also applies to: 464-490
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/io/io_grida_fbs.rs` around lines 406 - 423, The loop
over node_entries currently only builds node_pairs/node_names/position_map but
still lets both branches call SceneGraph::new_from_snapshot(), negating the new
fast path; while consuming node_entries also compute and collect geo_data,
layer_core and has_flex (by invoking extract_geo_data/extract_layer_core logic
on each e.node) and store them into the same preextracted structures, then
replace later calls to SceneGraph::new_from_snapshot() with
SceneGraph::new_from_snapshot_preextracted(node_pairs, node_names, position_map,
geo_data, layer_core, has_flex) so the graph is built from the precomputed
records instead of walking nodes again (update the identical logic applied in
the other branch around the same region as well).
```rust
// For non-flex containers (LayoutMode::Normal), children are
// positioned via schema coordinates, not flex layout. Create
// this container as a Taffy leaf — it only needs its own size
// for its parent's flex computation. Children will be handled
// by extract_all_layouts using schema positions directly.
//
// This is the critical optimization for Figma imports: most
// containers use Normal mode, so we skip building Taffy nodes
// for their entire subtrees (~90%+ of nodes).
if !lc.is_flex {
    return self.tree.new_leaf(*node_id, style).ok();
```
Keep descending under LayoutMode::Normal containers to find nested flex subtrees.
Returning a leaf on Lines 359-360 stops Taffy discovery for the entire subtree. That breaks mixed documents where a manual-positioned container contains an auto-layout container deeper down: the inner flex container and its children now fall back to schema positions and never get Taffy layout.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/layout/engine.rs` around lines 350 - 360, The current
early return when encountering a non-flex container (lc.is_flex /
LayoutMode::Normal) stops traversal and prevents finding nested flex subtrees;
instead, still create the Taffy leaf for the current node via
self.tree.new_leaf(*node_id, style) to preserve parent's flex-size behavior, but
do NOT return immediately—iterate into the node's children and recurse (the same
routine that eventually calls extract_all_layouts) so any deeper
auto-layout/flex containers are discovered and added to Taffy; keep the existing
leaf creation logic but remove the premature return and add a child traversal
step so nested flex trees are handled.
```rust
/// Expand this inflation to include another inflation (per-side max).
pub fn expand(&self, other: &Self) -> Self {
    Self {
        top: self.top.max(other.top),
        right: self.right.max(other.right),
        bottom: self.bottom.max(other.bottom),
        left: self.left.max(other.left),
    }
}
```
Don’t compose stroke outsets and filter outsets with max().
expand_inflation_with_effects() currently unions the base stroke inflation with blur/shadow inflation via RenderBoundsInflation::expand(). That undercounts stroked + filtered nodes, because the filters run on the already-stroked source graphic. A blurred outside stroke should grow by stroke + blur, not max(stroke, blur), otherwise cached bounds become too small and content can be clipped.
Also applies to: 301-323
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/node/scene_graph.rs` around lines 98 - 106, The
current union of stroke and filter outsets uses RenderBoundsInflation::expand
(which applies per-side max), causing underestimation when filters apply to
already-stroked geometry; change the combination in
expand_inflation_with_effects (and the same logic referenced at the other block)
to add the per-side inflations instead of taking the max so that a blurred
outside stroke becomes stroke + blur (i.e., compute top/right/bottom/left as
stroke_side + filter_side), or add a new method like
RenderBoundsInflation::additive_merge and use that where stroke and filter
outsets must compose additively rather than with max.
```rust
// 5a. Overlay-only fast path
//
// When neither scene data nor the camera changed (e.g. marquee drag,
// hover highlight, selection change), the content layer is identical
// to the previous frame. Restore it from the pan image cache and
// skip the expensive frame-plan build + full draw. The overlay is
// still re-drawn below so marquee/selection visuals update correctly.
if !stable && !content_changed && self.renderer.blit_content_cache() {
    // Consume the camera change (no-op here, but keeps the contract).
    self.renderer.camera.consume_change();

    // Draw devtools overlays on top of the restored content.
    let _overlay_time = self.draw_and_flush_devtools_overlay();

    // Complete frame in the loop.
    self.frame_loop.complete(quality);
    self.last_frame_time = __frame_start;
    return true;
```
Gate the blit fast path on quality, not stable.
Because stable is promoted to true whenever camera_change.any_changed() is false, overlay-only invalidations like hover, marquee, and selection never satisfy !stable and always fall through to the expensive full draw path. If this branch is meant to cover unchanged-content frames, it should exclude only the real settle frame.
Suggested fix

```diff
- if !stable && !content_changed && self.renderer.blit_content_cache() {
+ if quality != FrameQuality::Stable
+     && !content_changed
+     && self.renderer.blit_content_cache()
+ {
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// 5a. Overlay-only fast path
//
// When neither scene data nor the camera changed (e.g. marquee drag,
// hover highlight, selection change), the content layer is identical
// to the previous frame. Restore it from the pan image cache and
// skip the expensive frame-plan build + full draw. The overlay is
// still re-drawn below so marquee/selection visuals update correctly.
if quality != FrameQuality::Stable
    && !content_changed
    && self.renderer.blit_content_cache()
{
    // Consume the camera change (no-op here, but keeps the contract).
    self.renderer.camera.consume_change();
    // Draw devtools overlays on top of the restored content.
    let _overlay_time = self.draw_and_flush_devtools_overlay();
    // Complete frame in the loop.
    self.frame_loop.complete(quality);
    self.last_frame_time = __frame_start;
    return true;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-canvas/src/window/application.rs` around lines 1159 - 1176, The
fast-path currently checks !stable but stable is set true whenever
camera_change.any_changed() is false, so overlay-only invalidations never hit
the blit cache; change the branch to gate on the frame quality instead of
stable: in the if at the start of the block that currently reads "if !stable &&
!content_changed && self.renderer.blit_content_cache()", replace the "!stable"
check with a check against the settle/real-settle quality (e.g. "quality !=
FrameQuality::Settle" or the project's equivalent enum variant), leaving
"!content_changed && self.renderer.blit_content_cache()" and the existing body
(self.renderer.camera.consume_change(), draw_and_flush_devtools_overlay(),
self.frame_loop.complete(quality), self.last_frame_time = __frame_start, return
true) intact so overlay-only frames use the blit fast path except for the actual
settle frame.
```rust
if now_ms >= next_scroll_ms {
    let zoom = renderer.camera.get_zoom();
    renderer.camera.translate(dx * pan_direction / zoom, 0.0);
    frame_loop.invalidate(now_ms);
    next_scroll_ms += scroll_interval_ms;
    scroll_events_fired += 1;

    if scroll_events_fired % 25 == 0 {
        pan_direction = -pan_direction;
    }
```
Catch up all overdue scroll events in the FrameLoop benchmark.
Line 693 only injects one scroll even when now_ms has passed multiple scroll_interval_ms boundaries. A long stable frame will therefore drop queued input and under-drive both camera travel and cadence tracking in the exact jank cases this scenario is supposed to expose.
Suggested fix

```diff
- if now_ms >= next_scroll_ms {
-     let zoom = renderer.camera.get_zoom();
-     renderer.camera.translate(dx * pan_direction / zoom, 0.0);
-     frame_loop.invalidate(now_ms);
-     next_scroll_ms += scroll_interval_ms;
-     scroll_events_fired += 1;
-
-     if scroll_events_fired % 25 == 0 {
-         pan_direction = -pan_direction;
-     }
- }
+ while now_ms >= next_scroll_ms {
+     let event_time_ms = next_scroll_ms;
+     let zoom = renderer.camera.get_zoom();
+     renderer.camera.translate(dx * pan_direction / zoom, 0.0);
+     frame_loop.invalidate(event_time_ms);
+     next_scroll_ms += scroll_interval_ms;
+     scroll_events_fired += 1;
+
+     if scroll_events_fired % 25 == 0 {
+         pan_direction = -pan_direction;
+     }
+ }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-dev/src/bench/runner.rs` around lines 693 - 702, The current
single 'if' only injects one scroll even if now_ms has advanced past multiple
scroll_interval_ms boundaries; replace the 'if (now_ms >= next_scroll_ms) { ...
}' with a loop that repeatedly processes scroll events while now_ms >=
next_scroll_ms so you "catch up" all overdue ticks: inside the loop call
renderer.camera.translate(dx * pan_direction / zoom, 0.0),
frame_loop.invalidate(now_ms), increment next_scroll_ms by scroll_interval_ms,
increment scroll_events_fired, and flip pan_direction every 25 events
(scroll_events_fired % 25 == 0) so camera travel and cadence tracking stay
accurate under long frames.
```rust
let stats = run_frameloop_pan_pass(
    renderer,
    fl_s.scroll_interval_ms,
    fl_s.dx,
    2000.0, // 2 second session
);
```
Don't hard-code a 2s FrameLoop session here.
This path ignores the frames argument that every other scenario uses, but bench-report still emits meta.frames = args.frames. That makes the new results harder to compare across runs and can unexpectedly balloon report time on large fixture sets.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-dev/src/bench/runner.rs` around lines 1484 - 1489, The call to
run_frameloop_pan_pass hard-codes a 2000.0ms session and ignores the shared
frames argument, causing mismatch with bench-report's meta.frames; replace the
literal with a duration derived from the frames argument (e.g. use
fl_s.scroll_interval_ms * frames as f64 or equivalent) so the pan session length
is computed from frames and scroll_interval_ms (keep the other args: renderer
and fl_s.dx, and reference run_frameloop_pan_pass, fl_s.scroll_interval_ms,
fl_s.dx, and frames).
```rust
fn load_raster(path: &Path) -> Result<RasterScene> {
    use cg::cg::prelude::CGColor;
    use cg::cg::types::{Paints, ResourceRef};
    use cg::node::factory::NodeFactory;
    use cg::node::scene_graph::{Parent, SceneGraph};
    use cg::node::schema::{Node, Size};
    use cg::resources::hash_bytes;

    let bytes = std::fs::read(path)
        .with_context(|| format!("failed to read image file {}", path.display()))?;
    let (width, height) = image_dimensions(path)
        .with_context(|| format!("failed to read image dimensions {}", path.display()))?;
```
load_raster() reads the file twice and can mix file versions.
This helper reads bytes first and then reopens the path for image_dimensions(). If the file is being rewritten, the uploaded bytes and Size can come from different versions, and in the drop path both reads block the Tokio worker. Derive dimensions from the in-memory buffer instead and keep the blocking work off the async executor.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/grida-dev/src/main.rs` around lines 317 - 328, In load_raster(), avoid
reading the file twice: read the file once into bytes (the existing bytes
variable), derive image dimensions from that in-memory buffer (e.g.,
image::image_dimensions or image::io::Reader over a Cursor<&[u8]>) instead of
calling image_dimensions(path), and remove the second filesystem read; if this
function may run on the async executor, ensure the std::fs::read and any image
decoding are performed inside a blocking context (e.g.,
tokio::task::spawn_blocking) so you don't block Tokio workers—update references
to bytes, image_dimensions, and Size/RasterScene accordingly.
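The core of the fix is to derive dimensions from the buffer you already read. The real change would hand `&bytes` to the `image` crate via a `Cursor` (with read and decode under `tokio::task::spawn_blocking`); as a dependency-free illustration of the single-read principle, the sketch below parses a PNG's IHDR header from memory. `png_dimensions` is a hypothetical helper, not part of the codebase, and it covers PNG only:

```rust
/// Derive width/height from the in-memory buffer instead of re-opening the
/// path, so the uploaded bytes and the Size can never come from different
/// file versions. PNG stores dimensions big-endian in the IHDR chunk: width
/// at bytes 16..20 and height at 20..24, right after the 8-byte signature
/// and the 4-byte length + 4-byte "IHDR" tag.
fn png_dimensions(bytes: &[u8]) -> Option<(u32, u32)> {
    const SIGNATURE: [u8; 8] = [0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A];
    if bytes.len() < 24 || bytes[..8] != SIGNATURE || bytes[12..16] != *b"IHDR" {
        return None; // not a (complete) PNG
    }
    let width = u32::from_be_bytes(bytes[16..20].try_into().ok()?);
    let height = u32::from_be_bytes(bytes[20..24].try_into().ok()?);
    Some((width, height))
}

fn main() {
    // Build a minimal PNG header in memory: signature, IHDR length, tag, dims.
    let mut bytes = vec![0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A];
    bytes.extend_from_slice(&13u32.to_be_bytes()); // IHDR data length
    bytes.extend_from_slice(b"IHDR");
    bytes.extend_from_slice(&640u32.to_be_bytes()); // width
    bytes.extend_from_slice(&480u32.to_be_bytes()); // height
    assert_eq!(png_dimensions(&bytes), Some((640, 480)));
    assert_eq!(png_dimensions(b"not a png"), None);
}
```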
```md
### Internal Timing

The WASM module emits per-stage timing via `eprintln!` using
`sys::perf_now()` (which calls `emscripten_get_now()`). The Node harness
captures stderr and parses the `[load_scene]` line to extract per-stage
breakdowns without any additional instrumentation.
```
Keep this section in sync with the actual harness.
bench-load-scene.test.ts does not currently parse [load_scene] stderr, and it auto-discovers fixtures from lib/__test__/fixtures/local/. The npx vitest run __test__/bench-load-scene.test.ts example and the hardcoded fixtures/local/perf/local/yrr-main.grida requirement here will send readers to the wrong setup.
Also applies to: 333-336
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/wg/feat-2d/wasm-benchmarking.md` around lines 324 - 329, The docs claim
the Node harness parses `[load_scene]` stderr and points to a specific fixture
and command, but the actual test harness (bench-load-scene.test.ts) does not
parse `[load_scene]`, auto-discovers fixtures from lib/__test__/fixtures/local/,
and requires different invocation; update the "Internal Timing" and the other
block (lines ~333-336) to reflect the real behavior by removing the
`[load_scene]` parsing claim, documenting that bench-load-scene.test.ts
auto-discovers fixtures under lib/__test__/fixtures/local/, and replace the
example invocation and hardcoded fixture path
(fixtures/local/perf/local/yrr-main.grida) with the correct discovery behavior
and invocation used by the test harness (bench-load-scene.test.ts).
- Introduced handling for .deck files in the Figma import process, allowing for the parsing and conversion of Figma Deck presentations.
- Updated file validation and acceptance criteria to include .deck alongside .fig.
- Enhanced documentation to reflect the new .deck format and its integration within the existing Figma file handling.
- Implemented tests to ensure correct parsing and conversion of .deck files, verifying slide count and hierarchy preservation.
- Updated relevant components to accommodate the new slide node types and their properties in the Grida document structure.
- switch_scene: clone scene instead of swap_remove to preserve loaded scenes list
- geometry cache: return world_bounds for TextSpan (consistent with Leaf branch)
- frame loop: fix unreachable overlay-only fast path by checking camera_change directly
- scene_graph: use actual TextSpan effects instead of hardcoded default
- effect_tree: only allocate shadow surface when at least one shadow is active

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/grida-canvas-io-figma/fig2grida.ts (1)
37-54: ⚠️ Potential issue | 🟡 Minor

The help text still says `--info` is `.fig`-only.

Runtime validation now allows `.deck`, but `printHelp()` still documents `--info` as `.fig` only. That will make users assume the new path is unsupported.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/grida-canvas-io-figma/fig2grida.ts` around lines 37 - 54, Update the help text in printHelp() to reflect that --info applies to both .fig and .deck (not only .fig); change the "Supported inputs" or the line describing --info to mention ".fig and .deck" (or remove the ".fig only" qualifier) so it matches the runtime validation that allows .deck input and avoids misleading users; locate the string literals in fig2grida.ts (the usage/help block passed to printHelp()) and edit the --info description accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@editor/grida-canvas-react/use-data-transfer.ts`:
- Around line 600-606: The toast guidance message for rejected drops still only
mentions ".fig" while the code now checks for both ".fig" and ".deck"; update
the user-facing message in the drop handler (the block with
file.name.toLowerCase().endsWith(".fig") ||
file.name.toLowerCase().endsWith(".deck") and the toast.info call) to mention
both types or dynamically reference the detected extension (e.g., "Use [File] >
[Import Figma] to import .fig or .deck files" or choose different messages for
.fig and .deck) so deck drops show helpful guidance rather than a misleading
rejection.
In `@packages/grida-canvas-io-figma/__tests__/iofigma.kiwi.fig.test.ts`:
- Around line 336-359: The test incorrectly computes unsupportedCount across the
whole file and silently returns if canvas is missing; scope the unsupported
count to this canvas and fail loudly when canvas.guid is absent. Specifically,
instead of filtering all nodeChanges, filter only descendants of the found
canvas (e.g. use the same canvasGuidStr used with countKiwiDescendants — check
nc.guid startsWith(canvasGuidStr) or use an existing isDescendant helper) when
computing unsupportedCount for unsupportedTypes, and replace `if (!canvas?.guid)
return;` with an assertion or throw (e.g. expect(canvas?.guid).toBeDefined() or
throw new Error) so broken fixtures fail the test.
In `@packages/grida-canvas-sdk-render-figma/cli.ts`:
- Around line 210-212: The CLI help/argument descriptions still advertise only
".fig" despite the new ".deck" support; update any Commander help/usage strings
and argument descriptions that mention binary input to include ".deck" as an
accepted extension (e.g., the isFig check and the help text strings near the
Commander setup around the locations referencing ".fig"—update occurrences used
in the command description/argument help so the help output and usage examples
list both ".fig" and ".deck"). Ensure both singular mentions and any examples
(lines around the isFig usage and the later help strings ~324-326) are updated
so users see ".deck" listed as supported.
---
Outside diff comments:
In `@packages/grida-canvas-io-figma/fig2grida.ts`:
- Around line 37-54: Update the help text in printHelp() to reflect that --info
applies to both .fig and .deck (not only .fig); change the "Supported inputs" or
the line describing --info to mention ".fig and .deck" (or remove the ".fig
only" qualifier) so it matches the runtime validation that allows .deck input
and avoids misleading users; locate the string literals in fig2grida.ts (the
usage/help block passed to printHelp()) and edit the --info description
accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8f79ebd4-304f-427d-b1b1-28f82e8f4146
📒 Files selected for processing (13)

- .ref/figma/fig2kiwi.ts
- docs/wg/feat-fig/index.md
- editor/app/(embed)/embed/v1/debug/page.tsx
- editor/grida-canvas-react-starter-kit/starterkit-import/from-figma.tsx
- editor/grida-canvas-react/use-data-transfer.ts
- editor/scaffolds/embed/use-refig-editor.ts
- fixtures/.gitattributes
- fixtures/test-fig/deck/light.deck
- packages/grida-canvas-io-figma/__tests__/iofigma.kiwi.fig.test.ts
- packages/grida-canvas-io-figma/fig-kiwi/index.ts
- packages/grida-canvas-io-figma/fig2grida.ts
- packages/grida-canvas-io-figma/lib.ts
- packages/grida-canvas-sdk-render-figma/cli.ts
✅ Files skipped from review due to trivial changes (4)
- fixtures/.gitattributes
- fixtures/test-fig/deck/light.deck
- editor/grida-canvas-react-starter-kit/starterkit-import/from-figma.tsx
- docs/wg/feat-fig/index.md
🚧 Files skipped from review as they are similar to previous changes (1)
- editor/scaffolds/embed/use-refig-editor.ts
```ts
// Check for .fig/.deck files and show helpful message
if (
  file.name.toLowerCase().endsWith(".fig") ||
  file.name.toLowerCase().endsWith(".deck")
) {
  toast.info("Use [File] > [Import Figma] to import .fig files");
  continue;
```
Update the drop guidance to mention .deck.
This branch now catches .deck too, but the toast still tells users to import .fig files. For deck drops that reads like a rejection, not guidance.
Suggested copy change

```diff
- toast.info("Use [File] > [Import Figma] to import .fig files");
+ toast.info("Use [File] > [Import Figma] to import .fig/.deck files");
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
// Check for .fig/.deck files and show helpful message
if (
  file.name.toLowerCase().endsWith(".fig") ||
  file.name.toLowerCase().endsWith(".deck")
) {
  toast.info("Use [File] > [Import Figma] to import .fig/.deck files");
  continue;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@editor/grida-canvas-react/use-data-transfer.ts` around lines 600 - 606, The
toast guidance message for rejected drops still only mentions ".fig" while the
code now checks for both ".fig" and ".deck"; update the user-facing message in
the drop handler (the block with file.name.toLowerCase().endsWith(".fig") ||
file.name.toLowerCase().endsWith(".deck") and the toast.info call) to mention
both types or dynamically reference the detected extension (e.g., "Use [File] >
[Import Figma] to import .fig or .deck files" or choose different messages for
.fig and .deck) so deck drops show helpful guidance rather than a misleading
rejection.
```ts
const canvas = nodeChanges.find(
  (nc) => nc.type === "CANVAS" && !nc.internalOnly
);
if (!canvas?.guid) return;

const canvasGuidStr = iofigma.kiwi.guid(canvas.guid);
const rawCount = countKiwiDescendants(canvasGuidStr, nodeChanges);

// Count nodes with types that are intentionally unsupported (e.g. SHAPE_WITH_TEXT)
const unsupportedTypes = new Set(["SHAPE_WITH_TEXT"]);
const unsupportedCount = nodeChanges.filter(
  (nc) => nc.type && unsupportedTypes.has(nc.type)
).length;

const figFile = FigImporter.parseFile(data);
const page = figFile.pages[0];

let processedCount = 0;
page.rootNodes.forEach((rootNode: any) => {
  processedCount += countNodes(rootNode);
});

// All supported nodes must be preserved — only unsupported FigJam-crossover types may be absent
expect(processedCount).toBe(rawCount - unsupportedCount);
```
Keep the hierarchy assertion scoped to this canvas.
rawCount only walks descendants of canvas, but unsupportedCount is computed over every nodeChange in the file. If another canvas/internal canvas contains SHAPE_WITH_TEXT, this assertion will drift. Also, if (!canvas?.guid) return; turns a broken fixture into a silent pass.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/grida-canvas-io-figma/__tests__/iofigma.kiwi.fig.test.ts` around
lines 336 - 359, The test incorrectly computes unsupportedCount across the whole
file and silently returns if canvas is missing; scope the unsupported count to
this canvas and fail loudly when canvas.guid is absent. Specifically, instead of
filtering all nodeChanges, filter only descendants of the found canvas (e.g. use
the same canvasGuidStr used with countKiwiDescendants — check nc.guid
startsWith(canvasGuidStr) or use an existing isDescendant helper) when computing
unsupportedCount for unsupportedTypes, and replace `if (!canvas?.guid) return;`
with an assertion or throw (e.g. expect(canvas?.guid).toBeDefined() or throw new
Error) so broken fixtures fail the test.
```ts
const isFig =
  documentPath.toLowerCase().endsWith(".fig") ||
  documentPath.toLowerCase().endsWith(".deck");
```
Update refig --help to advertise .deck inputs.
These gates now accept .deck, but the Commander description/argument/help strings later in the file still describe binary input as .fig only. That makes the new path hard to discover from the CLI itself.
Also applies to: 324-326
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/grida-canvas-sdk-render-figma/cli.ts` around lines 210 - 212, The
CLI help/argument descriptions still advertise only ".fig" despite the new
".deck" support; update any Commander help/usage strings and argument
descriptions that mention binary input to include ".deck" as an accepted
extension (e.g., the isFig check and the help text strings near the Commander
setup around the locations referencing ".fig"—update occurrences used in the
command description/argument help so the help output and usage examples list
both ".fig" and ".deck"). Ensure both singular mentions and any examples (lines
around the isFig usage and the later help strings ~324-326) are updated so users
see ".deck" listed as supported.
Summary
Comprehensive canvas rendering engine performance overhaul targeting pan/zoom smoothness, layout computation speed, and WASM runtime efficiency.
Performance
- Replaced SipHash (`std::HashMap`) with a FxHash-style multiplicative hasher across all internal caches (geometry, picture, vector_path, atlas, compositor, painter, scene). Per-lookup cost drops from ~25ns to ~3ns, yielding 5–15% improvement on pan/zoom. (perf(canvas): use FxHash for NodeId-keyed caches #600, merged)
- Layout fast-path for `LayoutMode::Normal` containers (~90% of Figma-imported nodes). Scene load benchmark: 25.8s → 844ms (30x).
- Pan/zoom: `blit_content_cache` restores cached content directly.

Features
- FrameLoop pan benchmark (`run_frameloop_pan_pass`) — Real-time GPU benchmark exercising the actual `FrameLoop` decision path at 60fps cadence across 7 scroll-interval scenarios.
- `scene_envelope` method — O(1) bounding envelope of all scene content from the R-tree root.
- `intersects_topmost` hit testing — Returns shallowest ancestor nodes intersecting a rectangle, avoiding unnecessary descendant checks.
- WASM runtime bindings (`emscripten_set_main_loop`, `emscripten_get_now`, `emscripten_get_device_pixel_ratio`, etc.).
- `load_scene` benchmarking — TypeScript-side bench test for measuring scene load performance.

Refactors
- `NativeApplication` exiting flag to prevent event processing during shutdown.
- `RefigRenderConfig` extends `FactoryContext`; simplified layout skip config.

Docs
- Updated `cg-perf` SKILL.md with `frameloop` benchmark details.
- New `wasm-benchmarking.md` and `wasm-load-scene-optimization.md` guides.
- Updated `optimization.md` with new optimization strategies.

Summary by CodeRabbit
New Features
Performance
Tests & Docs
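For readers following the FxHash change in the Performance section: the PR's exact hasher isn't shown in this thread, so the sketch below is a generic FxHash-style multiplicative `Hasher` (rustc-hash's mixing scheme and 64-bit constant) wired into a `HashMap` through `BuildHasherDefault`. It illustrates why u64 NodeId lookups get cheap: hashing a key is a single rotate, xor, and multiply. `FxLikeHasher` and `NodeCache` are illustrative names, not identifiers from the codebase.

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

/// FxHash-style multiplicative hasher: rotate + xor + multiply per word.
/// Not DoS-resistant, which is fine for trusted NodeId (u64) keys and far
/// cheaper than the default SipHash. The multiplier is rustc-hash's 64-bit
/// constant; the hasher actually merged in the PR may differ in detail.
#[derive(Default)]
struct FxLikeHasher(u64);

impl FxLikeHasher {
    const SEED: u64 = 0x51_7c_c1_b7_27_22_0a_95;

    #[inline]
    fn mix(&mut self, word: u64) {
        self.0 = (self.0.rotate_left(5) ^ word).wrapping_mul(Self::SEED);
    }
}

impl Hasher for FxLikeHasher {
    fn finish(&self) -> u64 {
        self.0
    }

    fn write(&mut self, bytes: &[u8]) {
        // Generic fallback: fold input 8 bytes at a time, zero-padding the tail.
        for chunk in bytes.chunks(8) {
            let mut buf = [0u8; 8];
            buf[..chunk.len()].copy_from_slice(chunk);
            self.mix(u64::from_le_bytes(buf));
        }
    }

    fn write_u64(&mut self, id: u64) {
        // Fast path for u64 keys (e.g. NodeId): a single mix step.
        self.mix(id);
    }
}

/// Illustrative cache alias: same API as HashMap, cheaper hashing.
type NodeCache<V> = HashMap<u64, V, BuildHasherDefault<FxLikeHasher>>;

fn main() {
    let mut geometry: NodeCache<(f32, f32)> = NodeCache::default();
    geometry.insert(42, (800.0, 600.0));
    assert_eq!(geometry.get(&42), Some(&(800.0, 600.0)));
}
```

Swapping the hasher this way changes no call sites: any `HashMap<NodeId, V>` becomes `NodeCache<V>` and keeps the same API, which is why the change could land across geometry, picture, atlas, compositor, painter, and scene caches at once.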