feat: Phase 2 (AI Core) + Phase 3 (Preview & Editing) #4

Merged
Szymon0C merged 18 commits into main from claude/gifted-satoshi
Apr 2, 2026
Conversation

@Szymon0C (Owner) commented Apr 2, 2026

Summary

Phase 2: AI Core

  • Rust backend: Hardware detection, sidecar lifecycle, AI streaming via SSE
  • Frontend stores: Zustand stores for AI state and chat state with streaming hook
  • Chat UI: ChatPanel with message bubbles, streaming display, status indicator
  • System prompts: Generation and editing prompt templates
  • Sidecar setup: Platform-specific llama-server download script
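The SSE streaming above comes down to reading `data:` lines from the completion endpoint and aggregating token deltas. A minimal TypeScript sketch of that aggregation step, assuming the OpenAI-compatible chunk shape llama-server emits (the actual parsing lives in src-tauri/src/ai.rs on the Rust side):

```typescript
// Shapes of an OpenAI-compatible streaming chunk, as emitted by llama-server.
interface ChunkDelta { content?: string }
interface ChunkChoice { delta: ChunkDelta; finish_reason: string | null }
interface ChatChunk { choices: ChunkChoice[] }

// Aggregate token deltas from an SSE body into the full response text.
function aggregateSse(body: string): string {
  let full = "";
  for (const rawLine of body.split("\n")) {
    const line = rawLine.trim();
    if (!line.startsWith("data:")) continue;        // skip blank/comment lines
    const data = line.slice("data:".length).trim();
    if (data === "[DONE]") break;                   // end-of-stream sentinel
    try {
      const parsed = JSON.parse(data) as ChatChunk;
      for (const choice of parsed.choices) {
        if (choice.delta.content) full += choice.delta.content;
      }
    } catch {
      // Malformed chunk: skip it rather than abort the stream.
    }
  }
  return full;
}
```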

Phase 3: Preview & Editing

  • Editor store: Edit mode (view/inline/wysiwyg), viewport size, section selection
  • iframe helper script: Section detection, hover overlays, click handling, WYSIWYG contentEditable
  • iframe bridge: Typed postMessage protocol between app and preview
  • HTML section parser: Extract/replace sections by data-offpage-id
  • PreviewFrame: iframe wrapper with responsive viewport sizing
  • PreviewToolbar: Viewport toggle + edit mode toggle buttons
  • InlineEditBar: Floating prompt bar for section-level AI editing
  • Snapshot creation: Auto-snapshot after every chat edit and generation
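The typed postMessage protocol for the iframe bridge can be sketched as a discriminated union plus a runtime type guard; the message names below are illustrative, not the actual protocol defined in src/lib/iframeBridge.ts:

```typescript
// Illustrative message shapes for the app <-> iframe bridge; the actual
// protocol in src/lib/iframeBridge.ts may use different names and fields.
type BridgeMessage =
  | { type: "section-click"; sectionId: string }
  | { type: "section-hover"; sectionId: string | null }
  | { type: "set-edit-mode"; mode: "view" | "inline" | "wysiwyg" };

// Runtime guard so untrusted postMessage payloads are narrowed before use.
function isBridgeMessage(data: unknown): data is BridgeMessage {
  if (typeof data !== "object" || data === null) return false;
  const t = (data as { type?: unknown }).type;
  return t === "section-click" || t === "section-hover" || t === "set-edit-mode";
}
```

On the app side, a window `message` listener would run incoming `event.data` through the guard before dispatching to the editor store, so a hostile or malformed payload from the preview never reaches state.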

Test plan

  • npm run build succeeds (TypeScript + Vite)
  • cargo check passes in src-tauri/
  • Viewport toggle resizes preview iframe
  • Edit mode switches between view/inline/wysiwyg
  • Section hover shows overlay in edit modes
  • Inline edit bar appears on section click
  • WYSIWYG contentEditable activates on section click
  • Snapshots created after edits

Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • AI streaming chat with real-time tokens, inline section editing, and automatic snapshots
    • Live iframe preview with viewport controls and WYSIWYG/inline edit modes
    • Hardware detection with tiered recommendations for model quantization
    • Sidecar controls (start/stop/status) exposed to the UI; chat components and stores for persistent per-project history
  • Chores

    • Installer script to fetch sidecar binaries and bundler config to treat the binary as external
  • Documentation

    • Contributor guidance and Phase 3 preview/editing design doc added

Szymon0C and others added 6 commits April 2, 2026 10:26
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add setup-sidecar.sh to download platform-specific llama-server binary
from llama.cpp releases. Configure Tauri to bundle the sidecar binary
and update .gitignore to exclude downloaded binaries.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai Bot (Contributor) commented Apr 2, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d7ae8525-0b46-47fc-94e9-2cb7f176a934

📥 Commits

Reviewing files that changed from the base of the PR and between 1a8c04c and 8645827.

📒 Files selected for processing (10)
  • scripts/setup-sidecar.sh
  • src-tauri/capabilities/default.json
  • src-tauri/src/ai.rs
  • src-tauri/src/sidecar.rs
  • src-tauri/tauri.conf.json
  • src/components/chat/ChatInput.tsx
  • src/components/chat/ChatPanel.tsx
  • src/hooks/useAiStream.ts
  • src/lib/prompts.ts
  • src/stores/projectStore.ts

Walkthrough

Adds a llama-server sidecar and streaming AI pipeline: backend sidecar lifecycle, hardware detection, and SSE-based AI streaming; Tauri plugin/config updates and a setup script for binaries; frontend stores, hooks, and UI components for chat, preview, and inline editing integrated with project snapshots.

Changes

Cohort / File(s) Summary
Sidecar Binary Setup
.gitignore, scripts/setup-sidecar.sh, src-tauri/tauri.conf.json
Adds .gitignore entry to exclude src-tauri/binaries/; new scripts/setup-sidecar.sh downloads/extracts platform prebuilt llama-server; updates Tauri build command to run the script and marks binaries/llama-server as an external binary for bundling.
Backend Dependencies & Capabilities
src-tauri/Cargo.toml, src-tauri/capabilities/default.json
Adds Rust deps (tauri-plugin-shell, reqwest (+features), futures-util, sysinfo, tokio) and extends capabilities with a shell:allow-execute permission allowing execution of llama-server.
Sidecar Process Management
src-tauri/src/sidecar.rs
New module managing sidecar lifecycle and shared SidecarState: start_sidecar (spawn process, stream stdout/stderr events, health poll), stop_sidecar, sidecar_status. Emits sidecar-output events to frontend.
AI Streaming Command
src-tauri/src/ai.rs
New Tauri command stream_generate that posts to sidecar /v1/chat/completions with stream: true, parses SSE-style stream, emits ai-chunk events for incremental tokens, and returns final aggregated response.
Hardware Detection
src-tauri/src/hardware.rs
New hardware detection module exposing get_hardware_info: detects GPU (Metal/Nvidia/CPU) via platform commands, reads RAM/cores via sysinfo, classifies a hardware tier, and maps to a recommended quantization string.
Backend Integration
src-tauri/src/lib.rs
Registers new modules (ai, hardware, sidecar), initializes shell plugin and managed SidecarState, and routes new invoke handlers (start_sidecar, stop_sidecar, sidecar_status, stream_generate, get_hardware_info).
Frontend Chat UI
src/components/chat/...
src/components/chat/ChatInput.tsx, src/components/chat/ChatMessage.tsx, src/components/chat/ChatPanel.tsx
Adds chat input, message bubble, and ChatPanel which orchestrates messaging, auto-scroll, and triggers AI generation/edit flows and snapshot creation.
AI Hook & Prompts
src/hooks/useAiStream.ts, src/lib/prompts.ts
Adds useAiStream hook handling ai-chunk events, invoking stream_generate, and updating project HTML (including generateSection); adds SYSTEM_PROMPTS and message builder utilities for generation and edits.
Frontend Preview / Editing
src/components/preview/..., src/lib/helperScript.ts, src/lib/iframeBridge.ts, src/lib/htmlSections.ts, src/stores/editorStore.ts
Implements iframe preview bridge and injected helper script, section utilities (replaceSectionInHtml, ensureSectionId), preview toolbar/frame, inline edit bar, and editor store for edit mode/viewport/selection.
Stores & Persistence
src/stores/aiStore.ts, src/stores/chatStore.ts, src/stores/projectStore.ts, src/stores/editorStore.ts
Adds useAiStore (sidecar status/port, hardware detection), useChatStore (messages, streaming buffer, DB persistence, finalize stream), adds createSnapshot to projectStore, and introduces useEditorStore for preview/edit state.
Page Integration
src/pages/ProjectPage.tsx
Wires ChatPanel, preview components, and inline edit flows into the project page, including snapshot creation and selection clearing.
Docs & Misc
CLAUDE.md, docs/superpowers/...
Adds repository guidance in CLAUDE.md and a Phase 3 preview/editing plan document describing the preview/edit architecture and verification steps.
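The section replacement mentioned in the preview/editing cohort (replaceSectionInHtml) can be sketched as a regex swap keyed on data-offpage-id. This assumes non-nested, well-formed <section> elements and an id without regex metacharacters; the real parser in src/lib/htmlSections.ts may be more robust:

```typescript
// Sketch of swapping out a section by its data-offpage-id attribute.
// Assumes sections are not nested and ids contain no regex metacharacters.
function replaceSectionInHtml(html: string, id: string, newSection: string): string {
  const pattern = new RegExp(
    `<section\\b[^>]*data-offpage-id="${id}"[^>]*>[\\s\\S]*?</section>`
  );
  return html.replace(pattern, newSection);
}
```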

Sequence Diagram(s)

sequenceDiagram
    participant User as Frontend (User)
    participant Chat as ChatPanel / useAiStream
    participant Tauri as Tauri Backend
    participant Sidecar as Llama-server Sidecar
    participant DB as Database

    Note over User,Sidecar: Sidecar Startup Flow
    User->>Chat: startSidecar(modelPath)
    Chat->>Tauri: invoke('start_sidecar', {modelPath, port})
    Tauri->>Sidecar: spawn process (llama-server)
    Tauri-->>Chat: emit 'sidecar-output' (stdout/stderr)
    Tauri->>Sidecar: poll GET /health (until ready or timeout)
    Sidecar-->>Tauri: 200 OK
    Tauri-->>Chat: return running port

    Note over User,Sidecar: AI Chat Streaming Flow
    User->>Chat: send message
    Chat->>Tauri: invoke('stream_generate', {messages, systemPrompt, max_tokens})
    Tauri->>Sidecar: POST /v1/chat/completions {stream: true}
    Sidecar-->>Tauri: SSE data: ... (data: lines)
    Tauri->>Chat: emit 'ai-chunk' events for tokens
    Chat->>Chat: append tokens, render streamBuffer
    Sidecar-->>Tauri: data: [DONE]
    Tauri-->>Chat: return final aggregated response
    Chat->>DB: finalizeStream() (persist assistant message / snapshot)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 I dug a tunnel, found a binary bright,

Llama woke up and chatted through the night,
Tokens hopped in, streaming like a song,
Sections swapped and snapshots stored along,
Hooray — small rabbit, big app, all right!

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: coverage is 40.74%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the PR title accurately captures the main changes: Phase 2 (AI Core with sidecar, streaming, and chat) and Phase 3 (Preview & Editing with iframe components, section editing, and viewport controls).


@coderabbitai Bot left a comment

Actionable comments posted: 13

🧹 Nitpick comments (8)
src/stores/aiStore.ts (2)

53-60: stopSidecar does not clear error state or handle failure consistently.

If stopSidecar fails, the user won't know. Also, a successful stop should clear any previous error state.

♻️ Suggested improvement
 stopSidecar: async () => {
   try {
     await invoke('stop_sidecar');
-    set({ sidecarStatus: 'stopped' });
+    set({ sidecarStatus: 'stopped', error: null });
   } catch (error) {
     console.error('Failed to stop sidecar:', error);
+    const msg = error instanceof Error ? error.message : String(error);
+    set({ sidecarStatus: 'error', error: msg });
   }
 },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/stores/aiStore.ts` around lines 53 - 60, The stopSidecar function
currently only logs failures and doesn't update the store error state or clear
prior errors on success; update stopSidecar (the async function that calls
invoke('stop_sidecar')) to on success call set(...) to set sidecarStatus:
'stopped' and clear any existing error (e.g., sidecarError: null), and on catch
call set(...) to record the failure into sidecarError (include the error
message/object) so the UI can surface the failure instead of just console.error.

30-37: Inconsistent error handling: detectHardware silently swallows failures.

Unlike startSidecar which updates error state on failure, detectHardware only logs to console. Consider setting error state so the UI can inform the user if hardware detection fails.

♻️ Suggested improvement
 detectHardware: async () => {
   try {
     const info = await invoke<HardwareInfo>('get_hardware_info');
     set({ hardwareInfo: info });
   } catch (error) {
     console.error('Hardware detection failed:', error);
+    const msg = error instanceof Error ? error.message : String(error);
+    set({ error: msg });
   }
 },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/stores/aiStore.ts` around lines 30 - 37, The detectHardware function
currently only logs failures to console; update its catch block to set the
store's error state (same pattern used by startSidecar) so the UI can react—use
the existing set function to call set({ error: <error message or Error object>
}) and optionally clear hardwareInfo (e.g., set({ hardwareInfo: null })) when
detection fails; ensure you capture the caught error variable (error) and pass a
useful message or the error itself to the error state.
src-tauri/src/hardware.rs (1)

61-70: Redundant match arm: both final cases return Minimum.

Lines 67-68 both match to HardwareTier::Minimum. The second arm is unreachable or redundant:

♻️ Suggested fix
 fn classify_tier(ram_gb: f64, gpu: &GpuType) -> HardwareTier {
     match gpu {
         GpuType::Metal if ram_gb >= 32.0 => HardwareTier::Optimal,
         GpuType::Nvidia { vram_gb } if ram_gb >= 32.0 && *vram_gb >= 8.0 => HardwareTier::Optimal,
         GpuType::Metal if ram_gb >= 16.0 => HardwareTier::Recommended,
         GpuType::Nvidia { vram_gb } if *vram_gb >= 6.0 => HardwareTier::Recommended,
-        _ if ram_gb >= 8.0 => HardwareTier::Minimum,
         _ => HardwareTier::Minimum,
     }
 }

If the intent was to distinguish "meets minimum RAM" vs "below minimum", consider returning a different tier or adding a BelowMinimum variant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/hardware.rs` around lines 61 - 70, The match in classify_tier
contains two final arms both returning HardwareTier::Minimum, making the second
redundant; update classify_tier to clearly distinguish systems that meet the
minimum RAM from those below it (e.g., change the last arm to return a new
HardwareTier::BelowMinimum variant or remove the duplicate and make the final
fallback return BelowMinimum), and if you add a new variant update the
HardwareTier enum and any match exhaustiveness sites (refer to classify_tier and
HardwareTier when making the change).
src-tauri/src/ai.rs (2)

90-103: Silent parse failures may hide SSE format issues.

When serde_json::from_str fails, the chunk is silently skipped. Consider logging parse failures in debug builds to aid troubleshooting:

♻️ Suggested improvement
-            if let Ok(parsed) = serde_json::from_str::<ChatChunkResponse>(data) {
+            match serde_json::from_str::<ChatChunkResponse>(data) {
+                Ok(parsed) => {
                     for choice in &parsed.choices {
                         if let Some(content) = &choice.delta.content {
                             full_response.push_str(content);
                             let _ = app.emit(
                                 "ai-chunk",
                                 AiChunk {
                                     token: content.clone(),
                                     done: choice.finish_reason.is_some(),
                                 },
                             );
                         }
                     }
+                }
+                Err(e) => {
+                    #[cfg(debug_assertions)]
+                    eprintln!("Failed to parse SSE chunk: {} - data: {}", e, data);
+                }
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/ai.rs` around lines 90 - 103, The code silently ignores JSON
parse errors when calling serde_json::from_str::<ChatChunkResponse>(data);
update the parsing branch around that call (the block building full_response and
emitting "ai-chunk") to log parse failures in debug builds: capture the Err(e)
from serde_json::from_str and call debug!(...) or eprintln! (depending on
project logging conventions) with the error and the raw data string so
developers can inspect malformed SSE chunks; keep the existing successful-path
behavior that pushes to full_response and emits AiChunk on parsed.choices.

47-53: No timeout configured for the HTTP request.

If the sidecar becomes unresponsive, this request will block indefinitely. Consider adding a timeout:

♻️ Suggested fix
-    let client = reqwest::Client::new();
+    let client = reqwest::Client::builder()
+        .timeout(std::time::Duration::from_secs(300))
+        .build()
+        .map_err(|e| format!("Failed to create client: {}", e))?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/ai.rs` around lines 47 - 53, The request uses
reqwest::Client::new() with no timeout, which can hang if the sidecar is
unresponsive; replace the client construction with reqwest::Client::builder()
and set a sensible timeout (e.g., Duration::from_secs(10)) via .timeout(...)
before calling .build(), and ensure the Duration type is imported; update the
code that currently calls reqwest::Client::new() and the subsequent
.post(...).json(...).send().await path to use the newly built client with the
configured timeout.
src/hooks/useAiStream.ts (1)

58-61: On error, streamBuffer may contain partial content that isn't cleared.

When generation fails mid-stream, the buffer retains partial tokens. Consider clearing the buffer on error to avoid displaying stale/incomplete content on the next attempt.

♻️ Suggested fix
     } catch (error) {
       console.error('AI generation failed:', error);
       setStreaming(false);
+      // Consider adding a clearStreamBuffer action or passing false to trigger reset
       return null;
     } finally {

In chatStore.ts, setStreaming(false) already preserves the buffer. You may want to add a dedicated clearStreamBuffer action or adjust setStreaming(false) behavior for error scenarios.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/hooks/useAiStream.ts` around lines 58 - 61, The catch block in
useAiStream.ts currently leaves streamBuffer populated when an error occurs; add
logic to clear the partial buffer there by invoking a new or existing action
(e.g., call chatStore.clearStreamBuffer() or change
chatStore.setStreaming(false, { clearBuffer: true })) so the partial tokens are
removed on error, while still calling setStreaming(false) to stop streaming; if
clearStreamBuffer doesn't exist, add it to chatStore.ts (or extend setStreaming
to accept a clear-on-error flag) and ensure useAiStream.ts calls that action in
the catch branch.
src/stores/chatStore.ts (1)

27-34: Edge case: loadMessages replaces the entire messages array.

If the user switches projects while streaming, loadMessages will be called (via ChatPanel useEffect), replacing messages with the new project's history. The streamBuffer persists independently, but finalizeStream will then save the streamed content to the wrong project if the user switched mid-stream.

Consider guarding against this in ChatPanel by disabling project switches during streaming or aborting the stream on project change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/stores/chatStore.ts` around lines 27 - 34, The loadMessages function
replaces the entire messages array which can mismatch with an ongoing stream: if
a user switches projects while streaming, streamBuffer and finalizeStream may
persist and save into the wrong project. Fix this by ensuring ChatPanel's
useEffect (which calls loadMessages) first checks the chatStore streaming state
(or exposes an isStreaming flag) and either disable project switches during
streaming or abort the active stream before calling loadMessages; alternatively,
have ChatPanel call an exposed abortStream/clearStreamBuffer method on the store
(referencing streamBuffer, finalizeStream, loadMessages, ChatPanel useEffect) to
cancel the stream and clear streamBuffer when projectId changes so streamed
content cannot be saved to the wrong project.
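The race described above can be sketched as tagging the stream with the project it started for and refusing to persist on mismatch; the names (streamProjectId, startStream, finalizeStream) are illustrative, not the actual chatStore API:

```typescript
// Illustrative stream state; the real chatStore keeps more fields.
interface StreamState {
  streamProjectId: string | null; // project the stream was started for
  buffer: string;                 // accumulated tokens
}

function startStream(projectId: string): StreamState {
  return { streamProjectId: projectId, buffer: "" };
}

function appendToken(state: StreamState, token: string): StreamState {
  return { ...state, buffer: state.buffer + token };
}

// Returns the text to persist, or null if the user switched projects
// mid-stream (in which case the partial buffer is discarded).
function finalizeStream(state: StreamState, currentProjectId: string): string | null {
  return state.streamProjectId === currentProjectId ? state.buffer : null;
}
```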
src/lib/prompts.ts (1)

35-45: Consider stricter typing for the role field.

Using string for role loses type safety. A union type ensures only valid roles are used.

✨ Proposed type improvement
+type ChatRole = 'user' | 'assistant' | 'system';
+type ChatMessage = { role: ChatRole; content: string };
+
 export function buildGenerateMessages(
   userPrompt: string,
   siteType: string
-): Array<{ role: string; content: string }> {
+): ChatMessage[] {
   return [
     {
       role: 'user',
       content: `Create a ${siteType} website: ${userPrompt}`,
     },
   ];
 }

Apply the same ChatMessage type to the other builder functions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/prompts.ts` around lines 35 - 45, The buildGenerateMessages function
currently types roles as a plain string; define a specific ChatMessage type
(e.g., type ChatMessage = { role: 'user' | 'assistant' | 'system'; content:
string }) and change buildGenerateMessages's return type to Array<ChatMessage>
(and its internal role value to the 'user' literal), then update other message
builder functions to use the same ChatMessage type so only valid roles are
allowed across the codebase.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/setup-sidecar.sh`:
- Line 8: The TARGET assignment uses an invalid rustc --print option; update the
assignment that sets TARGET to use rustc --print host-tuple (instead of
host-triple) and keep the existing fallback to "unknown", and add a secondary
fallback that parses rustc -Vv output when --print host-tuple is not supported;
specifically modify the TARGET variable assignment and any related logic that
checks TARGET so it first tries `rustc --print host-tuple 2>/dev/null`, then if
that fails attempts parsing `rustc -Vv` for the host triple, and only then falls
back to "unknown" to avoid triggering the unsupported-platform branch.

In `@src-tauri/capabilities/default.json`:
- Around line 11-12: The permissions in default.json currently grant broad shell
access via "shell:allow-spawn" and "shell:allow-execute"; restrict them to only
the llama sidecar by replacing the bare permissions with scoped shell
permissions that include the sidecar name used in your code (llama-server) —
align the permission "name" value with the sidecar identifier referenced in
sidecar.rs where you call .sidecar("llama-server").spawn(), and remove the
global execute/spawn entries so only the scoped permission remains.

In `@src-tauri/src/ai.rs`:
- Around line 29-45: The stream_generate command currently only accepts messages
but must also accept the frontend's systemPrompt and maxTokens; update the
stream_generate function signature to include system_prompt: Option<String> (or
String) and max_tokens: Option<u32>, then build the request body so it prepends
the system prompt to the messages array (or inserts a message with role "system"
using system_prompt) and include "max_tokens": max_tokens in the JSON body sent
from the body variable; ensure you update usage of messages and body
construction in stream_generate and handle None values safely when constructing
the final messages list.

In `@src-tauri/src/sidecar.rs`:
- Around line 111-131: The health-check timeout path currently returns an error
but leaves the spawned child process in SidecarState, blocking future starts;
modify the timeout branch in the health-poll loop (inside the start_sidecar
async flow where health_url, client, start, timeout are defined) to perform
cleanup: take the SidecarState.child (e.g. let mut child_opt =
sidecar_state.child.take()), and if Some(child) then attempt to kill and await
the child (child.kill() and child.wait().await or equivalent for the child
type), handling/logging errors, ensuring sidecar_state.child remains None, then
return the timeout Err; this ensures the spawned process is terminated and state
is cleared on health-check timeout.
- Around line 96-109: The monitoring task handling CommandEvent::Terminated
should clear SidecarState.child so sidecar_status no longer reports running:
change SidecarState.child to be shareable (wrap it in Arc<Mutex<...>> or make
SidecarState itself Arc) and pass that Arc into the spawned monitor task, then
when matching CommandEvent::Terminated set state.child = None (acquire the
mutex) before emitting the "sidecar-output" event and breaking; update any
constructors/usages of SidecarState/sidecar_status to use the new Arc<Mutex>
type accordingly.
- Around line 34-69: The current TOCTOU happens because you release the Mutex
from state.child between checking child.is_some() and assigning the spawned
process, allowing concurrent starts; fix by making the check-and-set atomic:
either hold the same lock across the spawn call so no other task can pass the
check (wrap the sidecar().spawn() while holding state.child.lock()), or use a
reservation pattern where you acquire state.child.lock(), check is_some(), set a
temporary placeholder/enum (e.g., Starting) to mark that a start is in progress,
drop the lock, perform sidecar("llama-server").spawn(), then reacquire the lock
and replace the placeholder with the real child (or clear it on spawn error);
update references to state.child.lock(), sidecar("llama-server"), spawn(),
rx/child accordingly.

In `@src-tauri/tauri.conf.json`:
- Line 35: The tauri bundling fails because the configured externalBin
("externalBin") expects src-tauri/binaries/llama-server-{TARGET} to exist but
the sidecar setup script is not run; update tauri.conf.json to run the sidecar
setup before building by adding a beforeBuildCommand that invokes
scripts/setup-sidecar.sh (so the binary is created prior to bundling), or
alternatively add a clear README prerequisite instructing developers/CI to run
scripts/setup-sidecar.sh before invoking npm run build / npm run dev; ensure you
modify the beforeBuildCommand key in tauri.conf.json (or document the
requirement) so the external binary exists when tauri bundles.

In `@src/components/chat/ChatInput.tsx`:
- Around line 33-48: The textarea currently strips the browser focus ring via
"outline-none" and the send button uses an ambiguous "↑" label; restore an
accessible focus state by removing or replacing "outline-none" on the textarea
with a visible focus style (e.g., add a focus / focus-visible ring class such as
focus-visible:ring-2 focus-visible:ring-[var(--color-accent)] or similar) and
make the send button meaningful to assistive tech by replacing the lone arrow
with a textual label (e.g., "Send") or keep the arrow visually but add an
aria-label="Send" and/or an offscreen <span> for screen readers; ensure the
button still uses the existing disabled logic (disabled || !value.trim()) and
the click handler handleSend and keyboard handler handleKeyDown remain
unchanged.
- Around line 23-27: The Enter key handler in handleKeyDown currently submits on
Enter without guarding IME composition; update handleKeyDown in ChatInput (the
KeyboardEvent handler) to return early if e.nativeEvent?.isComposing is true so
that Enter during IME composition does not trigger handleSend; keep the existing
checks for e.key === 'Enter' and !e.shiftKey and only call e.preventDefault()
and handleSend() when not composing.
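The guard this comment describes can be sketched as a small predicate over the keyboard event; KeyEventLike mirrors only the fields the handler inspects on a React KeyboardEvent:

```typescript
// Minimal shape of the fields the handler inspects on a React KeyboardEvent.
interface KeyEventLike {
  key: string;
  shiftKey: boolean;
  nativeEvent?: { isComposing?: boolean };
}

// True only for a plain Enter press outside of IME composition.
function shouldSend(e: KeyEventLike): boolean {
  if (e.nativeEvent?.isComposing) return false; // Enter confirms the IME candidate
  return e.key === "Enter" && !e.shiftKey;
}
```

In handleKeyDown, a true result would gate both e.preventDefault() and handleSend(), leaving Shift+Enter free to insert a newline.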

In `@src/components/chat/ChatPanel.tsx`:
- Line 37: The code computes hasHtml via currentProject.html.length which can
throw if currentProject.html is null/undefined; update the check in ChatPanel
(where hasHtml is defined) to safely handle absent html by testing for a truthy
array/string or using Array.isArray/currentProject.html != null before accessing
length (e.g., compute hasHtml = !!currentProject.html &&
currentProject.html.length > 0 or equivalent), and ensure any downstream logic
that relies on hasHtml still works with the new boolean.

In `@src/hooks/useAiStream.ts`:
- Around line 46-52: The frontend is calling invoke('stream_generate', {
systemPrompt, port: sidecarPort, messages, maxTokens }) but the backend command
stream_generate (in ai.rs) only deserializes messages, so systemPrompt,
maxTokens and port are dropped; fix by updating the backend stream_generate
handler to accept/deserialise the same parameter names (systemPrompt or
system_prompt, maxTokens/max_tokens, and port) and then forward systemPrompt and
maxTokens into the API request (and use the provided port if you want to
override SidecarState), or alternatively align the frontend to only send the
exact fields the backend expects — locate the invoke call
(invoke('stream_generate', ...)) and the stream_generate command in ai.rs and
make their parameter names and usages consistent so system prompt and max tokens
are actually forwarded to the API.

In `@src/lib/prompts.ts`:
- Around line 47-66: buildEditMessages is duplicating the just-stored user
message because addMessage stores the prompt and chatHistory (used as
recentHistory via chatHistory.slice(-6)) already contains that user entry;
update buildEditMessages to filter out any recentHistory entries whose content
exactly equals the provided userPrompt (or otherwise skip the last element when
messages.slice(-6) would include the same user message) before pushing
recentHistory and finally pushing { role: 'user', content: userPrompt };
reference buildEditMessages, chatHistory, recentHistory, messages.slice(-6), and
userPrompt when making the change.

In `@src/stores/projectStore.ts`:
- Around line 104-111: The current createSnapshot flow reads MAX(version) via
db.select and then inserts, which races when concurrent calls run; wrap the
version allocation and insert in a single database transaction (or use a single
atomic INSERT ... SELECT that computes COALESCE(MAX(version),0)+1 for the given
project_id) so two concurrent createSnapshot() calls cannot pick the same
nextVersion. Modify the code around the db.select and db.execute in
createSnapshot (or the surrounding function) to start a transaction, re-read
MAX(version) with a FOR UPDATE/locked read if your DB supports it, compute
nextVersion inside the transaction, perform the INSERT, and commit (or replace
with a single locked INSERT ... SELECT statement) to ensure atomic version
assignment.

---

Nitpick comments:
In `@src-tauri/src/ai.rs`:
- Around line 90-103: The code silently ignores JSON parse errors when calling
serde_json::from_str::<ChatChunkResponse>(data); update the parsing branch
around that call (the block building full_response and emitting "ai-chunk") to
log parse failures in debug builds: capture the Err(e) from serde_json::from_str
and call debug!(...) or eprintln! (depending on project logging conventions)
with the error and the raw data string so developers can inspect malformed SSE
chunks; keep the existing successful-path behavior that pushes to full_response
and emits AiChunk on parsed.choices.
- Around line 47-53: The request uses reqwest::Client::new() with no timeout,
which can hang if the sidecar is unresponsive; replace the client construction
with reqwest::Client::builder() and set a sensible timeout (e.g.,
Duration::from_secs(10)) via .timeout(...) before calling .build(), and ensure
the Duration type is imported; update the code that currently calls
reqwest::Client::new() and the subsequent .post(...).json(...).send().await path
to use the newly built client with the configured timeout.

In `@src-tauri/src/hardware.rs`:
- Around line 61-70: The match in classify_tier contains two final arms both
returning HardwareTier::Minimum, making the second redundant; update
classify_tier to clearly distinguish systems that meet the minimum RAM from
those below it (e.g., change the last arm to return a new
HardwareTier::BelowMinimum variant or remove the duplicate and make the final
fallback return BelowMinimum), and if you add a new variant update the
HardwareTier enum and any match exhaustiveness sites (refer to classify_tier and
HardwareTier when making the change).

In `@src/hooks/useAiStream.ts`:
- Around line 58-61: The catch block in useAiStream.ts currently leaves
streamBuffer populated when an error occurs; add logic to clear the partial
buffer there by invoking a new or existing action (e.g., call
chatStore.clearStreamBuffer() or change chatStore.setStreaming(false, {
clearBuffer: true })) so the partial tokens are removed on error, while still
calling setStreaming(false) to stop streaming; if clearStreamBuffer doesn't
exist, add it to chatStore.ts (or extend setStreaming to accept a clear-on-error
flag) and ensure useAiStream.ts calls that action in the catch branch.
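The buffer-clearing action described above can be sketched in plain TypeScript (a Zustand-free stand-in for the real store; `appendToken`, `setStreaming`, and the `clearBuffer` option are names taken from this review comment, not the actual store API):

```typescript
// Minimal sketch of the suggested chatStore change: when an error aborts
// streaming, drop the partial stream buffer instead of leaving stale tokens.
type ChatState = { streamBuffer: string; isStreaming: boolean };

const state: ChatState = { streamBuffer: "", isStreaming: false };

function appendToken(t: string): void {
  state.streamBuffer += t;
}

function setStreaming(on: boolean, opts?: { clearBuffer?: boolean }): void {
  state.isStreaming = on;
  if (opts?.clearBuffer) state.streamBuffer = "";
}

// What useAiStream's catch branch would call: stop streaming AND drop
// the partial tokens so they are never persisted.
function onStreamError(): void {
  setStreaming(false, { clearBuffer: true });
}

appendToken("partial ");
appendToken("tokens");
setStreaming(true);
onStreamError(); // buffer cleared, streaming stopped
```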

In `@src/lib/prompts.ts`:
- Around line 35-45: The buildGenerateMessages function currently types roles as
a plain string; define a specific ChatMessage type (e.g., type ChatMessage = {
role: 'user' | 'assistant' | 'system'; content: string }) and change
buildGenerateMessages's return type to Array<ChatMessage> (and its internal role
value to the 'user' literal), then update other message builder functions to use
the same ChatMessage type so only valid roles are allowed across the codebase.
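The suggested `ChatMessage` union could look like the following sketch (the function name comes from the review; the prompt strings are placeholders):

```typescript
// Narrow the role to the three values the chat API accepts, so an invalid
// role like "User" or "bot" becomes a compile-time error.
type ChatRole = "user" | "assistant" | "system";

interface ChatMessage {
  role: ChatRole;
  content: string;
}

function buildGenerateMessages(
  systemPrompt: string,
  userPrompt: string
): ChatMessage[] {
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userPrompt },
  ];
}

const msgs = buildGenerateMessages("You generate HTML.", "Build a landing page");
```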

In `@src/stores/aiStore.ts`:
- Around line 53-60: The stopSidecar function currently only logs failures and
doesn't update the store error state or clear prior errors on success; update
stopSidecar (the async function that calls invoke('stop_sidecar')) to on success
call set(...) to set sidecarStatus: 'stopped' and clear any existing error
(e.g., sidecarError: null), and on catch call set(...) to record the failure
into sidecarError (include the error message/object) so the UI can surface the
failure instead of just console.error.
- Around line 30-37: The detectHardware function currently only logs failures to
console; update its catch block to set the store's error state (same pattern
used by startSidecar) so the UI can react—use the existing set function to call
set({ error: <error message or Error object> }) and optionally clear
hardwareInfo (e.g., set({ hardwareInfo: null })) when detection fails; ensure
you capture the caught error variable (error) and pass a useful message or the
error itself to the error state.
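The error-surfacing pattern suggested for both `stopSidecar` and `detectHardware` can be sketched as follows (the `invoke` stub stands in for the Tauri call; field names like `sidecarError` follow the review text and are assumptions about the store shape):

```typescript
// Record failures in store state (so the UI can react) instead of only
// logging to console; clear any prior error on success.
type AiState = { sidecarStatus: string; sidecarError: string | null };

const ai: AiState = { sidecarStatus: "running", sidecarError: null };

// Stub standing in for @tauri-apps/api's invoke; always fails here to
// demonstrate the error path.
async function invoke(cmd: string): Promise<void> {
  throw new Error(`${cmd}: sidecar not responding`);
}

async function stopSidecar(): Promise<void> {
  try {
    await invoke("stop_sidecar");
    ai.sidecarStatus = "stopped";
    ai.sidecarError = null; // clear stale errors on success
  } catch (e) {
    // surface the failure to the UI rather than swallowing it
    ai.sidecarError = e instanceof Error ? e.message : String(e);
  }
}
```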

In `@src/stores/chatStore.ts`:
- Around line 27-34: The loadMessages function replaces the entire messages
array which can mismatch with an ongoing stream: if a user switches projects
while streaming, streamBuffer and finalizeStream may persist and save into the
wrong project. Fix this by ensuring ChatPanel's useEffect (which calls
loadMessages) first checks the chatStore streaming state (or exposes an
isStreaming flag) and either disable project switches during streaming or abort
the active stream before calling loadMessages; alternatively, have ChatPanel
call an exposed abortStream/clearStreamBuffer method on the store (referencing
streamBuffer, finalizeStream, loadMessages, ChatPanel useEffect) to cancel the
stream and clear streamBuffer when projectId changes so streamed content cannot
be saved to the wrong project.
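The project-switch guard can be sketched without React (all names here are assumptions, not the real store API):

```typescript
// Cancel the active stream and clear its buffer before loading another
// project's messages, so partial tokens can't be saved under the wrong project.
const store = {
  streaming: false,
  streamBuffer: "",
  projectId: "a",
  messages: [] as string[],
};

function abortStream(): void {
  store.streaming = false;
  store.streamBuffer = "";
}

function switchProject(nextId: string): void {
  if (store.streaming) abortStream(); // never let a stale stream outlive the switch
  store.projectId = nextId;
  store.messages = []; // then load the new project's messages
}

store.streaming = true;
store.streamBuffer = "half-finished reply";
switchProject("b");
```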
🪄 Autofix (Beta)

✅ Autofix completed


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8711e73f-4759-41ec-a500-a114899657be

📥 Commits

Reviewing files that changed from the base of the PR and between abdc2bb and 17d31b5.

⛔ Files ignored due to path filters (1)
  • src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (18)
  • .gitignore
  • scripts/setup-sidecar.sh
  • src-tauri/Cargo.toml
  • src-tauri/capabilities/default.json
  • src-tauri/src/ai.rs
  • src-tauri/src/hardware.rs
  • src-tauri/src/lib.rs
  • src-tauri/src/sidecar.rs
  • src-tauri/tauri.conf.json
  • src/components/chat/ChatInput.tsx
  • src/components/chat/ChatMessage.tsx
  • src/components/chat/ChatPanel.tsx
  • src/hooks/useAiStream.ts
  • src/lib/prompts.ts
  • src/pages/ProjectPage.tsx
  • src/stores/aiStore.ts
  • src/stores/chatStore.ts
  • src/stores/projectStore.ts

Comment thread scripts/setup-sidecar.sh Outdated
Comment thread src-tauri/capabilities/default.json Outdated
Comment thread src-tauri/src/ai.rs
Comment thread src-tauri/src/sidecar.rs Outdated
Comment on lines +34 to +69
// Check if already running
{
let child = state.child.lock().map_err(|e| e.to_string())?;
if child.is_some() {
return Err("Sidecar is already running".to_string());
}
}

let port = port.unwrap_or(8080);

let (mut rx, child) = app
.shell()
.sidecar("llama-server")
.map_err(|e| format!("Failed to create sidecar command: {}", e))?
.args([
"-m",
&model_path,
"--port",
&port.to_string(),
"--host",
"127.0.0.1",
"-ngl",
"99",
])
.spawn()
.map_err(|e| format!("Failed to spawn sidecar: {}", e))?;

// Store child process and port
{
let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
*child_lock = Some(child);
}
{
let mut port_lock = state.port.lock().map_err(|e| e.to_string())?;
*port_lock = port;
}

⚠️ Potential issue | 🔴 Critical

TOCTOU race condition: concurrent starts can bypass the "already running" check.

The lock is released after checking child.is_some() (line 40) and before storing the new child (line 64). Two concurrent calls can both pass the check and spawn duplicate sidecars.

Hold the lock across the spawn or use a more atomic approach:

🔒 Proposed fix using a single lock scope
-    // Check if already running
-    {
-        let child = state.child.lock().map_err(|e| e.to_string())?;
-        if child.is_some() {
-            return Err("Sidecar is already running".to_string());
-        }
-    }
-
-    let port = port.unwrap_or(8080);
-
-    let (mut rx, child) = app
+    let port = port.unwrap_or(8080);
+    
+    // Lock held across check and store to prevent race
+    let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
+    if child_lock.is_some() {
+        return Err("Sidecar is already running".to_string());
+    }
+    
+    let (mut rx, child) = app
         .shell()
         .sidecar("llama-server")
         .map_err(|e| format!("Failed to create sidecar command: {}", e))?
@@ ..snip..
         .spawn()
         .map_err(|e| format!("Failed to spawn sidecar: {}", e))?;

-    // Store child process and port
-    {
-        let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
-        *child_lock = Some(child);
-    }
+    *child_lock = Some(child);
+    drop(child_lock); // Release before async polling
+    
     {
         let mut port_lock = state.port.lock().map_err(|e| e.to_string())?;
         *port_lock = port;
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Check if already running
{
let child = state.child.lock().map_err(|e| e.to_string())?;
if child.is_some() {
return Err("Sidecar is already running".to_string());
}
}
let port = port.unwrap_or(8080);
let (mut rx, child) = app
.shell()
.sidecar("llama-server")
.map_err(|e| format!("Failed to create sidecar command: {}", e))?
.args([
"-m",
&model_path,
"--port",
&port.to_string(),
"--host",
"127.0.0.1",
"-ngl",
"99",
])
.spawn()
.map_err(|e| format!("Failed to spawn sidecar: {}", e))?;
// Store child process and port
{
let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
*child_lock = Some(child);
}
{
let mut port_lock = state.port.lock().map_err(|e| e.to_string())?;
*port_lock = port;
}
let port = port.unwrap_or(8080);
// Lock held across check and store to prevent race
let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
if child_lock.is_some() {
return Err("Sidecar is already running".to_string());
}
let (mut rx, child) = app
.shell()
.sidecar("llama-server")
.map_err(|e| format!("Failed to create sidecar command: {}", e))?
.args([
"-m",
&model_path,
"--port",
&port.to_string(),
"--host",
"127.0.0.1",
"-ngl",
"99",
])
.spawn()
.map_err(|e| format!("Failed to spawn sidecar: {}", e))?;
*child_lock = Some(child);
drop(child_lock); // Release before async polling
{
let mut port_lock = state.port.lock().map_err(|e| e.to_string())?;
*port_lock = port;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/sidecar.rs` around lines 34 - 69, The current TOCTOU happens
because you release the Mutex from state.child between checking child.is_some()
and assigning the spawned process, allowing concurrent starts; fix by making the
check-and-set atomic: either hold the same lock across the spawn call so no
other task can pass the check (wrap the sidecar().spawn() while holding
state.child.lock()), or use a reservation pattern where you acquire
state.child.lock(), check is_some(), set a temporary placeholder/enum (e.g.,
Starting) to mark that a start is in progress, drop the lock, perform
sidecar("llama-server").spawn(), then reacquire the lock and replace the
placeholder with the real child (or clear it on spawn error); update references
to state.child.lock(), sidecar("llama-server"), spawn(), rx/child accordingly.

Comment thread src-tauri/src/sidecar.rs
Comment thread src/components/chat/ChatInput.tsx
Comment thread src/components/chat/ChatPanel.tsx Outdated
Comment thread src/hooks/useAiStream.ts
Comment thread src/lib/prompts.ts
Comment thread src/stores/projectStore.ts Outdated
@coderabbitai

coderabbitai Bot commented Apr 2, 2026

Note

Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully

Fixed 10 file(s) based on 13 unresolved review comments.

A stacked PR containing fixes has been created.

  • Stacked PR: #5
  • Files modified:
  • scripts/setup-sidecar.sh
  • src-tauri/capabilities/default.json
  • src-tauri/src/ai.rs
  • src-tauri/src/sidecar.rs
  • src-tauri/tauri.conf.json
  • src/components/chat/ChatInput.tsx
  • src/components/chat/ChatPanel.tsx
  • src/hooks/useAiStream.ts
  • src/lib/prompts.ts
  • src/stores/projectStore.ts

Time taken: 3m 48s

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@Szymon0C Szymon0C changed the title feat: Phase 2 — AI Core (sidecar, streaming, chat UI) feat: Phase 2 (AI Core) + Phase 3 (Preview & Editing) Apr 2, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 10

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md`:
- Around line 16-17: The markdown tables (e.g., the table starting with "| File
| Responsibility |") violate MD058; add a blank line immediately before and
after each table instance in this document so both tables are surrounded by
blank lines (this will satisfy MD058 and the docs CI).
- Around line 815-816: Edit the documentation text describing generateSection so
it uses the compound modifier "full-page HTML" (hyphenated) for clarity;
specifically update the sentence that reads about sending the section's HTML and
replacing the section in the full page HTML to use "full-page HTML" and ensure
references to the generateSection method and the editSection system prompt
remain accurate.
- Around line 934-935: previewIframeRef is declared but never attached to the
PreviewFrame and the replace message uses the stale sectionHtml instead of the
newly generated section result; attach previewIframeRef as the ref prop on the
PreviewFrame component and update all replace message payloads (the code that
sends the "replace" message to the iframe) to use the generated section result
variable (the variable that holds the new HTML after generation) instead of
sectionHtml; apply the same fix to the other occurrences noted (around the other
replace usages) so the iframe receives the fresh content via
previewIframeRef.current.postMessage.
- Around line 192-200: The postMessage usage is currently broadcasting with '*'
and your message handler accepts any message with a matching type, which is
unsafe; replace the wildcard with a strict target origin when calling
window.parent.postMessage (use a configured allowed origin) and harden the
message handler(s) that consume messages by validating event.origin against an
allowlist, ensuring event.source === window.parent (or the expected Window
reference), and performing schema/type guards on event.data (check
event.data.type and required payload fields like id, tagName, outerHtml) before
acting; update the code paths around the postMessage that sends type
'offpage:section-selected' and the corresponding message event listener(s) that
branch on currentMode/wysiwyg to enforce these origin/source checks and payload
validation.

In `@src/components/preview/InlineEditBar.tsx`:
- Around line 9-30: The component allows duplicate submissions because
handleSubmit/fire-onSubmit is fire-and-forget; add a local isSubmitting state in
InlineEditBar and guard submissions (return early if isSubmitting), set
isSubmitting = true before calling onSubmit(selectedSection.id,
selectedSection.outerHtml, text) and await it, only clear the prompt on
successful completion, and in a finally block set isSubmitting = false so
subsequent attempts are allowed; also have handleKeyDown ignore Enter when
isSubmitting and ensure clearSelection still works on Escape. Reference:
InlineEditBar, handleSubmit, handleKeyDown, onSubmit, selectedSection,
clearSelection, setPrompt.

In `@src/components/preview/PreviewFrame.tsx`:
- Around line 57-64: When handling 'offpage:ready' in PreviewFrame.tsx, after
confirming iframeRef.current, additionally request the helper-free, tagged DOM
(the version with data-offpage-id added by detectSections) from the iframe via
the existing bridge instead of only resending the mode; then persist that
returned HTML into the app state so currentProject.html contains those
data-offpage-id attributes before edits are enabled (this ensures later calls to
replaceSectionInHtml in useAiStream.ts and elsewhere are not no-ops).
Concretely: in the case 'offpage:ready' branch (where sendToIframe is used),
send a message to ask for the full tagged document, handle the response event to
receive the HTML string, and update the project store/currentProject.html with
that payload prior to allowing inline/WYSIWYG edits or calling
replaceSectionInHtml.
- Around line 45-54: The handler for 'offpage:wysiwyg-done' is
fire-and-forgetting the async updateProjectHtml call; change it to handle
promise rejections by awaiting or chaining a .catch on updateProjectHtml (in
PreviewFrame.tsx) and react to failures (log via your logger, surface an error
to the UI, and/or revert the iframe edit by reloading currentProject.html) so DB
write errors from updateProjectHtml don't silently revert on next load; keep the
same usage of ensureSectionId and replaceSectionInHtml to build updatedHtml but
ensure updateProjectHtml(currentProject.id, updatedHtml) is awaited/handled and
failures are reported and recovered from.
- Around line 31-35: The handleMessage callback currently accepts any
MessageEvent whose data.type startsWith('offpage:') which allows spoofed
messages; update handleMessage to also validate the sender by checking
event.source === iframeRef.current?.contentWindow (the iframe created via
srcDoc) before processing 'offpage:*' messages so only the preview iframe can
trigger actions like offpage:wysiwyg-done; locate the handler named
handleMessage (useCallback) and add this event.source check early and return if
it fails.

In `@src/lib/helperScript.ts`:
- Around line 78-103: The onClick handler currently calls e.preventDefault() and
e.stopPropagation() unconditionally which blocks native caret placement when
editing; update onClick so that preventDefault()/stopPropagation() are skipped
for WYSIWYG editing: move or guard the calls so they run only when currentMode
!== 'wysiwyg' (or when the clicked section is not the active contentEditable
element), then proceed to set selectedEl, positionOverlay and call
enableContentEditable(section) as before; ensure enableContentEditable(section)
is called before any logic that might re-attach capture listeners so native
clicks and selections inside the contentEditable section are allowed.

In `@src/lib/htmlSections.ts`:
- Around line 19-22: The current openTagRegex (and the replacement logic around
lines 66-70) is too strict: it only allows \w+ tag names and double-quoted
attributes and assumes the element starts at byte 0, so indented output, custom
elements (e.g. hero-banner), or single-quoted attributes are missed or
duplicated. Update the matching and replacement to use a more permissive,
case-insensitive/singleline regex that matches valid HTML tag names (e.g.
[A-Za-z][A-Za-z0-9:-]*), accepts single or double quotes for data-offpage-id,
and does not require the tag to be at the file start; then when replacing,
search for an existing data-offpage-id attribute with a robust attribute regex
and replace its value if present, otherwise insert the attribute into the
existing opening tag while preserving surrounding whitespace/indentation and
quote style; apply this change to openTagRegex and the attribute-replace/insert
logic referenced around the code that handles the first element (the block
currently at lines ~66-70).
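A more permissive matcher along the lines the review describes might look like this sketch (the exact pattern in `htmlSections.ts` may differ; this is illustrative, not the proposed final regex):

```typescript
// Accepts custom-element names (hero-banner), single or double quotes
// around data-offpage-id, and leading indentation before the tag.
const openTagRe =
  /<([A-Za-z][A-Za-z0-9:-]*)\b[^>]*\bdata-offpage-id\s*=\s*(["'])(.*?)\2[^>]*>/i;

const samples = [
  `  <section data-offpage-id="hero">`,   // indented, double quotes
  `<hero-banner data-offpage-id='hero'>`, // custom element, single quotes
];

const results = samples.map((s) => openTagRe.test(s));
```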
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4a49bd9f-7164-4280-94e5-33aad1db5c11

📥 Commits

Reviewing files that changed from the base of the PR and between 17d31b5 and 1a8c04c.

📒 Files selected for processing (12)
  • CLAUDE.md
  • docs/superpowers/plans/2026-04-02-phase3-preview-editing.md
  • src/components/chat/ChatPanel.tsx
  • src/components/preview/InlineEditBar.tsx
  • src/components/preview/PreviewFrame.tsx
  • src/components/preview/PreviewToolbar.tsx
  • src/hooks/useAiStream.ts
  • src/lib/helperScript.ts
  • src/lib/htmlSections.ts
  • src/lib/iframeBridge.ts
  • src/pages/ProjectPage.tsx
  • src/stores/editorStore.ts
✅ Files skipped from review due to trivial changes (1)
  • CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/components/chat/ChatPanel.tsx
  • src/hooks/useAiStream.ts

Comment on lines +16 to +17
| File | Responsibility |
|------|---------------|

⚠️ Potential issue | 🟡 Minor

Add blank lines around markdown tables (MD058).

Both tables should be surrounded by blank lines to satisfy markdownlint and keep docs CI clean.

Also applies to: 27-28

🧰 Tools
🪛 markdownlint-cli2 (0.22.0)

[warning] 16-16: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 16
- 17, The markdown tables (e.g., the table starting with "| File |
Responsibility |") violate MD058; add a blank line immediately before and after
each table instance in this document so both tables are surrounded by blank
lines (this will satisfy MD058 and the docs CI).

Comment on lines +192 to +200
window.parent.postMessage({
type: 'offpage:section-selected',
payload: {
id: section.getAttribute('data-offpage-id'),
tagName: section.tagName,
outerHtml: section.outerHTML
}
}, '*');
} else if (currentMode === 'wysiwyg') {

⚠️ Potential issue | 🟠 Major

Harden postMessage contract (origin/source validation missing).

The snippets currently use '*' and process any message with matching type. Add strict event.source and origin checks (and schema guards) before acting on messages to reduce cross-window message injection risk.

Also applies to: 230-237, 243-250, 264-305, 368-370

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 192
- 200, The postMessage usage is currently broadcasting with '*' and your message
handler accepts any message with a matching type, which is unsafe; replace the
wildcard with a strict target origin when calling window.parent.postMessage (use
a configured allowed origin) and harden the message handler(s) that consume
messages by validating event.origin against an allowlist, ensuring event.source
=== window.parent (or the expected Window reference), and performing schema/type
guards on event.data (check event.data.type and required payload fields like id,
tagName, outerHtml) before acting; update the code paths around the postMessage
that sends type 'offpage:section-selected' and the corresponding message event
listener(s) that branch on currentMode/wysiwyg to enforce these origin/source
checks and payload validation.
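A hardened handler might pair the source/origin checks with a schema guard like this sketch (the payload shape comes from the snippet above; in the real handler this guard would run only after verifying `event.source` and `event.origin`):

```typescript
// Type guard validating the 'offpage:section-selected' payload before
// any state is touched; unknown shapes are rejected.
interface SectionSelectedMsg {
  type: "offpage:section-selected";
  payload: { id: string; tagName: string; outerHtml: string };
}

function isSectionSelected(d: unknown): d is SectionSelectedMsg {
  if (typeof d !== "object" || d === null) return false;
  const m = d as Record<string, unknown>;
  if (m.type !== "offpage:section-selected") return false;
  const p = m.payload;
  if (typeof p !== "object" || p === null) return false;
  const q = p as Record<string, unknown>;
  return (
    typeof q.id === "string" &&
    typeof q.tagName === "string" &&
    typeof q.outerHtml === "string"
  );
}
```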

Comment on lines +815 to +816
Add a `generateSection` method that sends only the section's HTML to the AI with the `editSection` system prompt, then replaces the section in the full page HTML.


⚠️ Potential issue | 🟡 Minor

Use “full-page HTML” as a compound modifier.

Minor wording fix for readability/grammar.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~815-~815: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...rompt, then replaces the section in the full page HTML. - [ ] **Step 1: Update useAiStre...

(EN_COMPOUND_ADJECTIVE_INTERNAL)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 815
- 816, Edit the documentation text describing generateSection so it uses the
compound modifier "full-page HTML" (hyphenated) for clarity; specifically update
the sentence that reads about sending the section's HTML and replacing the
section in the full page HTML to use "full-page HTML" and ensure references to
the generateSection method and the editSection system prompt remain accurate.

Comment on lines +934 to +935
const previewIframeRef = useRef<HTMLIFrameElement>(null);


⚠️ Potential issue | 🟠 Major

Inline edit preview update path is broken in the current plan snippet.

previewIframeRef is never attached to PreviewFrame, and the replace message uses sectionHtml (old HTML) instead of the generated section result. This will either no-op or overwrite with stale content.

Proposed fix (plan snippet adjustment)
- const previewIframeRef = useRef<HTMLIFrameElement>(null);
+ const previewIframeRef = useRef<HTMLIFrameElement>(null);

...
- <PreviewFrame />
+ <PreviewFrame iframeRef={previewIframeRef} />

...
- payload: { id: sectionId, newHtml: sectionHtml },
+ payload: { id: sectionId, newHtml: result },
- export function PreviewFrame() {
-   const iframeRef = useRef<HTMLIFrameElement>(null);
+ interface PreviewFrameProps {
+   iframeRef?: React.RefObject<HTMLIFrameElement | null>;
+ }
+ export function PreviewFrame({ iframeRef: externalRef }: PreviewFrameProps) {
+   const internalRef = useRef<HTMLIFrameElement>(null);
+   const iframeRef = externalRef ?? internalRef;

Also applies to: 959-963, 976-977

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 934
- 935, previewIframeRef is declared but never attached to the PreviewFrame and
the replace message uses the stale sectionHtml instead of the newly generated
section result; attach previewIframeRef as the ref prop on the PreviewFrame
component and update all replace message payloads (the code that sends the
"replace" message to the iframe) to use the generated section result variable
(the variable that holds the new HTML after generation) instead of sectionHtml;
apply the same fix to the other occurrences noted (around the other replace
usages) so the iframe receives the fresh content via
previewIframeRef.current.postMessage.

Comment on lines +9 to +30
export function InlineEditBar({ onSubmit, disabled }: InlineEditBarProps) {
const [prompt, setPrompt] = useState('');
const { selectedSection, editMode, clearSelection } = useEditorStore();

if (editMode !== 'inline' || !selectedSection) return null;

const handleSubmit = () => {
const text = prompt.trim();
if (!text) return;
onSubmit(selectedSection.id, selectedSection.outerHtml, text);
setPrompt('');
};

const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
handleSubmit();
}
if (e.key === 'Escape') {
clearSelection();
}
};

⚠️ Potential issue | 🟠 Major

Prevent duplicate inline edits while the request is starting.

Lines 18-19 clear the prompt and fire-and-forget onSubmit. A fast double-click or repeated Enter can enqueue multiple AI requests before the streaming flag flips, and a rejected submit loses the user's text.

⏳ Proposed fix
 interface InlineEditBarProps {
-  onSubmit: (sectionId: string, sectionHtml: string, prompt: string) => void;
+  onSubmit: (sectionId: string, sectionHtml: string, prompt: string) => Promise<void>;
   disabled: boolean;
 }
 
 export function InlineEditBar({ onSubmit, disabled }: InlineEditBarProps) {
   const [prompt, setPrompt] = useState('');
+  const [submitting, setSubmitting] = useState(false);
   const { selectedSection, editMode, clearSelection } = useEditorStore();
@@
-  const handleSubmit = () => {
+  const handleSubmit = async () => {
     const text = prompt.trim();
-    if (!text) return;
-    onSubmit(selectedSection.id, selectedSection.outerHtml, text);
-    setPrompt('');
+    if (!text || submitting) return;
+    setSubmitting(true);
+    try {
+      await onSubmit(selectedSection.id, selectedSection.outerHtml, text);
+      setPrompt('');
+    } finally {
+      setSubmitting(false);
+    }
   };
@@
     if (e.key === 'Enter' && !e.shiftKey) {
       e.preventDefault();
-      handleSubmit();
+      void handleSubmit();
     }
@@
-        disabled={disabled}
+        disabled={disabled || submitting}
@@
-        onClick={handleSubmit}
-        disabled={disabled || !prompt.trim()}
+        onClick={() => void handleSubmit()}
+        disabled={disabled || submitting || !prompt.trim()}

Also applies to: 42-51

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/preview/InlineEditBar.tsx` around lines 9 - 30, The component
allows duplicate submissions because handleSubmit/fire-onSubmit is
fire-and-forget; add a local isSubmitting state in InlineEditBar and guard
submissions (return early if isSubmitting), set isSubmitting = true before
calling onSubmit(selectedSection.id, selectedSection.outerHtml, text) and await
it, only clear the prompt on successful completion, and in a finally block set
isSubmitting = false so subsequent attempts are allowed; also have handleKeyDown
ignore Enter when isSubmitting and ensure clearSelection still works on Escape.
Reference: InlineEditBar, handleSubmit, handleKeyDown, onSubmit,
selectedSection, clearSelection, setPrompt.

Comment on lines +31 to +35
const handleMessage = useCallback(
(event: MessageEvent<IframeToParentMessage>) => {
const data = event.data;
if (!data || !data.type || !data.type.startsWith('offpage:')) return;


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the preview message handler currently accepts offpage:* messages
# without checking the sender window.
rg -n -C3 "addEventListener\\('message'|startsWith\\('offpage:'\\)|event\\.source|event\\.origin" src/components/preview/PreviewFrame.tsx

Repository: Szymon0C/Offpage

Length of output: 547


🏁 Script executed:

cat -n src/components/preview/PreviewFrame.tsx | head -100

Repository: Szymon0C/Offpage

Length of output: 4142


🏁 Script executed:

rg -n "iframe|srcDoc|sandbox" src/components/preview/PreviewFrame.tsx

Repository: Szymon0C/Offpage

Length of output: 754


🏁 Script executed:

rg -n "offpage:" src/components/preview/PreviewFrame.tsx | head -20

Repository: Szymon0C/Offpage

Length of output: 335


Add event.source validation to the message handler.

Line 34 accepts offpage:* messages from any origin without validating the sender. This allows external frames, windows, or browser extensions to spoof iframe-internal messages like offpage:wysiwyg-done, triggering unintended section edits and database writes. Since the iframe is created with srcDoc (line 86), you can safely validate event.source === iframeRef.current?.contentWindow to ensure only the preview iframe can drive these operations.

Proposed fix
   const handleMessage = useCallback(
     (event: MessageEvent<IframeToParentMessage>) => {
+      if (event.source !== iframeRef.current?.contentWindow) return;
       const data = event.data;
       if (!data || !data.type || !data.type.startsWith('offpage:')) return;
📝 Committable suggestion


Suggested change
const handleMessage = useCallback(
(event: MessageEvent<IframeToParentMessage>) => {
const data = event.data;
if (!data || !data.type || !data.type.startsWith('offpage:')) return;
const handleMessage = useCallback(
(event: MessageEvent<IframeToParentMessage>) => {
if (event.source !== iframeRef.current?.contentWindow) return;
const data = event.data;
if (!data || !data.type || !data.type.startsWith('offpage:')) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/preview/PreviewFrame.tsx` around lines 31 - 35, The
handleMessage callback currently accepts any MessageEvent whose data.type
startsWith('offpage:') which allows spoofed messages; update handleMessage to
also validate the sender by checking event.source ===
iframeRef.current?.contentWindow (the iframe created via srcDoc) before
processing 'offpage:*' messages so only the preview iframe can trigger actions
like offpage:wysiwyg-done; locate the handler named handleMessage (useCallback)
and add this event.source check early and return if it fails.

Comment on lines +45 to +54
case 'offpage:wysiwyg-done':
if (currentProject) {
const newSectionHtml = ensureSectionId(data.payload.outerHtml, data.payload.id);
const updatedHtml = replaceSectionInHtml(
currentProject.html,
data.payload.id,
newSectionHtml
);
updateProjectHtml(currentProject.id, updatedHtml);
}

⚠️ Potential issue | 🟠 Major

Handle WYSIWYG save failures instead of dropping the promise.

`updateProjectHtml` is async in `src/stores/projectStore.ts:73-89`, but line 53 fires it and discards the returned promise. If the DB write rejects, the iframe keeps showing the edit while the persisted project quietly reverts on the next load.

🧯 Proposed fix

```diff
-            updateProjectHtml(currentProject.id, updatedHtml);
+            void updateProjectHtml(currentProject.id, updatedHtml).catch((error) => {
+              console.error('Failed to persist WYSIWYG edit:', error);
+            });
```
📝 Committable suggestion


Suggested change

```diff
 case 'offpage:wysiwyg-done':
   if (currentProject) {
     const newSectionHtml = ensureSectionId(data.payload.outerHtml, data.payload.id);
     const updatedHtml = replaceSectionInHtml(
       currentProject.html,
       data.payload.id,
       newSectionHtml
     );
-    updateProjectHtml(currentProject.id, updatedHtml);
+    void updateProjectHtml(currentProject.id, updatedHtml).catch((error) => {
+      console.error('Failed to persist WYSIWYG edit:', error);
+    });
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/preview/PreviewFrame.tsx` around lines 45 - 54, The handler
for 'offpage:wysiwyg-done' is fire-and-forgetting the async updateProjectHtml
call; change it to handle promise rejections by awaiting or chaining a .catch on
updateProjectHtml (in PreviewFrame.tsx) and react to failures (log via your
logger, surface an error to the UI, and/or revert the iframe edit by reloading
currentProject.html) so DB write errors from updateProjectHtml don't silently
revert on next load; keep the same usage of ensureSectionId and
replaceSectionInHtml to build updatedHtml but ensure
updateProjectHtml(currentProject.id, updatedHtml) is awaited/handled and
failures are reported and recovered from.

Comment on lines +57 to +64

```tsx
case 'offpage:ready':
  // iframe loaded, send current mode
  if (iframeRef.current) {
    sendToIframe(iframeRef.current, {
      type: 'offpage:set-mode',
      payload: { mode: editMode },
    });
  }
```
⚠️ Potential issue | 🔴 Critical

Persist generated data-offpage-ids when the iframe becomes ready.

`detectSections()` in `src/lib/helperScript.ts:18-30` adds ids only inside the iframe DOM before it posts `offpage:ready`. Lines 60-63 just resend the mode, so `currentProject.html` stays id-less and the later `replaceSectionInHtml(currentProject.html, id, ...)` calls here and in `src/hooks/useAiStream.ts:72-117` no-op on the first inline/WYSIWYG edit. Use the existing bridge to sync a helper-free copy of the tagged DOM back into the store before edits are enabled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/preview/PreviewFrame.tsx` around lines 57 - 64, When handling
'offpage:ready' in PreviewFrame.tsx, after confirming iframeRef.current,
additionally request the helper-free, tagged DOM (the version with
data-offpage-id added by detectSections) from the iframe via the existing bridge
instead of only resending the mode; then persist that returned HTML into the app
state so currentProject.html contains those data-offpage-id attributes before
edits are enabled (this ensures later calls to replaceSectionInHtml in
useAiStream.ts and elsewhere are not no-ops). Concretely: in the case
'offpage:ready' branch (where sendToIframe is used), send a message to ask for
the full tagged document, handle the response event to receive the HTML string,
and update the project store/currentProject.html with that payload prior to
allowing inline/WYSIWYG edits or calling replaceSectionInHtml.
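One piece of the sync step described above is making the iframe's tagged document safe to persist: the HTML the iframe would send back contains the injected helper script and overlay, which should not land in `currentProject.html`. A hedged sketch of that cleanup — `HELPER_MARKER` and `stripPreviewArtifacts` are illustrative names, not code from this PR, and the overlay regex assumes the overlay element has no nested `div`s:

```typescript
// Hypothetical cleanup before persisting the tagged DOM: keep the
// data-offpage-id attributes, drop the preview-only helper artifacts.
const HELPER_MARKER = 'data-offpage-helper';

function stripPreviewArtifacts(taggedHtml: string): string {
  return taggedHtml
    // drop the injected helper <script data-offpage-helper>...</script>
    .replace(
      new RegExp(`<script[^>]*${HELPER_MARKER}[^>]*>[\\s\\S]*?<\\/script>`, 'g'),
      ''
    )
    // drop the hover overlay element (assumes no nested divs inside it)
    .replace(
      new RegExp(`<div[^>]*${HELPER_MARKER}-overlay[^>]*>[\\s\\S]*?<\\/div>`, 'g'),
      ''
    );
}
```

Running the returned HTML through a filter like this before calling the store setter keeps the persisted project free of preview instrumentation.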

Comment thread src/lib/helperScript.ts
Comment on lines +78 to +103
```js
function onClick(e) {
  if (currentMode === 'view') return;
  e.preventDefault();
  e.stopPropagation();
  const section = findParentSection(e.target);
  if (!section) return;

  selectedEl = section;
  positionOverlay(section);

  if (currentMode === 'inline') {
    overlay.style.borderColor = '#6366f1';
    overlay.style.backgroundColor = 'rgba(99,102,241,0.05)';
    window.parent.postMessage({
      type: 'offpage:section-selected',
      payload: {
        id: section.getAttribute('data-offpage-id'),
        tagName: section.tagName,
        outerHtml: section.outerHTML
      }
    }, '*');
  } else if (currentMode === 'wysiwyg') {
    overlay.style.borderColor = '#f59e0b';
    overlay.style.backgroundColor = 'rgba(245,158,11,0.05)';
    enableContentEditable(section);
  }
```
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify the helper still intercepts all clicks in capture phase even after enabling contentEditable.
rg -n -C2 "addEventListener\\('click', onClick, true\\)|preventDefault\\(|contentEditable = 'true'" src/lib/helperScript.ts
```

Repository: Szymon0C/Offpage

🏁 Script executed:

```bash
#!/bin/bash
# Get broader context around the onClick function and check the addEventListener calls
sed -n '70,115p' src/lib/helperScript.ts
```

🏁 Script executed:

```bash
#!/bin/bash
# Check what's around lines 208-210 mentioned as "Also applies to"
sed -n '200,220p' src/lib/helperScript.ts
```

🏁 Script executed:

```bash
#!/bin/bash
# Search for other click or input handlers that might interact with this
rg -n "addEventListener.*click|addEventListener.*input|addEventListener.*mousedown|addEventListener.*mouseup" src/lib/helperScript.ts
```


Don't preventDefault() once the selected section is being edited.

The capture-phase body listener blocks every click in WYSIWYG mode. After the section is made contentEditable on line 112, the prevented clicks block default caret positioning and text selection inside the active section, making precise edits impossible.

✍️ Proposed fix

```diff
   function onClick(e) {
     if (currentMode === 'view') return;
-    e.preventDefault();
-    e.stopPropagation();
     const section = findParentSection(e.target);
     if (!section) return;
+
+    const editingSelectedSection =
+      currentMode === 'wysiwyg' && selectedEl === section && section.isContentEditable;
+    if (!editingSelectedSection) {
+      e.preventDefault();
+      e.stopPropagation();
+    }

     selectedEl = section;
     positionOverlay(section);
```
📝 Committable suggestion


Suggested change

```diff
 function onClick(e) {
   if (currentMode === 'view') return;
-  e.preventDefault();
-  e.stopPropagation();
   const section = findParentSection(e.target);
   if (!section) return;
+  const editingSelectedSection =
+    currentMode === 'wysiwyg' && selectedEl === section && section.isContentEditable;
+  if (!editingSelectedSection) {
+    e.preventDefault();
+    e.stopPropagation();
+  }
   selectedEl = section;
   positionOverlay(section);
   if (currentMode === 'inline') {
     overlay.style.borderColor = '#6366f1';
     overlay.style.backgroundColor = 'rgba(99,102,241,0.05)';
     window.parent.postMessage({
       type: 'offpage:section-selected',
       payload: {
         id: section.getAttribute('data-offpage-id'),
         tagName: section.tagName,
         outerHtml: section.outerHTML
       }
     }, '*');
   } else if (currentMode === 'wysiwyg') {
     overlay.style.borderColor = '#f59e0b';
     overlay.style.backgroundColor = 'rgba(245,158,11,0.05)';
     enableContentEditable(section);
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/helperScript.ts` around lines 78 - 103, The onClick handler currently
calls e.preventDefault() and e.stopPropagation() unconditionally which blocks
native caret placement when editing; update onClick so that
preventDefault()/stopPropagation() are skipped for WYSIWYG editing: move or
guard the calls so they run only when currentMode !== 'wysiwyg' (or when the
clicked section is not the active contentEditable element), then proceed to set
selectedEl, positionOverlay and call enableContentEditable(section) as before;
ensure enableContentEditable(section) is called before any logic that might
re-attach capture listeners so native clicks and selections inside the
contentEditable section are allowed.

Comment thread src/lib/htmlSections.ts
Comment on lines +19 to +22

```ts
const openTagRegex = new RegExp(
  `(<(\\w+)[^>]*data-offpage-id\\s*=\\s*"${escapedId}"[^>]*>)`,
  's'
);
```
⚠️ Potential issue | 🟠 Major

Make section ID handling resilient to real HTML output.

Line 20 only matches `\w+` tag names and a double-quoted `data-offpage-id`, while lines 66-70 assume the first element starts at byte 0 and that the existing attribute already has one exact spelling. Indented model output, single-quoted attributes, or custom elements like `<hero-banner>` will silently skip replacement or duplicate the id, which breaks subsequent section edits.

🛠️ Proposed fix

```diff
   const openTagRegex = new RegExp(
-    `(<(\\w+)[^>]*data-offpage-id\\s*=\\s*"${escapedId}"[^>]*>)`,
+    `(<([A-Za-z][A-Za-z0-9:-]*)[^>]*data-offpage-id\\s*=\\s*["']${escapedId}["'][^>]*>)`,
     's'
   );
@@
 export function ensureSectionId(sectionHtml: string, sectionId: string): string {
-  // Check if data-offpage-id already present
-  if (sectionHtml.includes(`data-offpage-id="${sectionId}"`)) {
-    return sectionHtml;
-  }
-  // Add to first opening tag
-  return sectionHtml.replace(/^(<\w+)/, `$1 data-offpage-id="${sectionId}"`);
+  const template = document.createElement('template');
+  template.innerHTML = sectionHtml.trim();
+  const root = template.content.firstElementChild;
+  if (!root) return sectionHtml;
+  root.setAttribute('data-offpage-id', sectionId);
+  return root.outerHTML;
 }
```

Also applies to: 64-70

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/htmlSections.ts` around lines 19 - 22, The current openTagRegex (and
the replacement logic around lines 66-70) is too strict: it only allows \w+ tag
names and double-quoted attributes and assumes the element starts at byte 0, so
indented output, custom elements (e.g. hero-banner), or single-quoted attributes
are missed or duplicated. Update the matching and replacement to use a more
permissive, case-insensitive/singleline regex that matches valid HTML tag names
(e.g. [A-Za-z][A-Za-z0-9:-]*), accepts single or double quotes for
data-offpage-id, and does not require the tag to be at the file start; then when
replacing, search for an existing data-offpage-id attribute with a robust
attribute regex and replace its value if present, otherwise insert the attribute
into the existing opening tag while preserving surrounding
whitespace/indentation and quote style; apply this change to openTagRegex and
the attribute-replace/insert logic referenced around the code that handles the
first element (the block currently at lines ~66-70).
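The gap between the strict and relaxed patterns can be demonstrated in isolation. This snippet is illustrative, not PR code: it builds the relaxed regex from the proposed fix and checks it against a tag the original `\w+`/double-quote-only pattern rejects.

```typescript
// Build the relaxed open-tag regex from the proposed fix (escapedId is assumed
// to already be regex-escaped by the caller, as in the original code).
function buildOpenTagRegex(escapedId: string): RegExp {
  return new RegExp(
    `(<([A-Za-z][A-Za-z0-9:-]*)[^>]*data-offpage-id\\s*=\\s*["']${escapedId}["'][^>]*>)`,
    's'
  );
}

// Original strict pattern: \w+ tag name, double quotes only.
const strictRegex = /(<(\w+)[^>]*data-offpage-id\s*=\s*"hero-1"[^>]*>)/s;
const relaxedRegex = buildOpenTagRegex('hero-1');

// A custom element with a single-quoted attribute: valid model output that the
// strict pattern misses entirely.
const tag = `<hero-banner data-offpage-id='hero-1' class="full-bleed">`;
```

Here `strictRegex.test(tag)` is false while `relaxedRegex.test(tag)` is true, which is exactly the silent-skip failure mode the review describes.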

- setup-sidecar.sh: fix rustc --print host-tuple (was host-triple)
- capabilities: scope shell permissions to llama-server only
- ai.rs: add system_prompt and max_tokens as proper command params
- sidecar.rs: fix race condition with atomic check+spawn, Arc<Mutex>
  for child, cleanup on timeout, clear child on termination
- tauri.conf.json: run setup-sidecar.sh in beforeBuildCommand
- ChatInput: IME composition handling, focus ring, aria-label
- ChatPanel: null safety check on html
- useAiStream: use snake_case params matching Rust API
- prompts.ts: filter duplicate user prompt from chat history
- projectStore: wrap snapshot version in transaction
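The `prompts.ts` bullet above (filtering the duplicate user prompt out of chat history) can be sketched as a pure helper. The interface and function names here are assumptions for illustration, not the PR's actual code:

```typescript
// Hypothetical sketch: if the last history entry is the same user prompt that
// is about to be sent, drop it so the model never sees the message twice.
interface ChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

function dedupeTrailingUserPrompt(
  history: ChatMessage[],
  prompt: string
): ChatMessage[] {
  const last = history[history.length - 1];
  if (last && last.role === 'user' && last.content === prompt) {
    // Return a copy without the duplicate rather than mutating the store.
    return history.slice(0, -1);
  }
  return history;
}
```

Keeping this as a pure function over the message array means the chat store stays untouched and the dedupe logic is trivially unit-testable.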

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@Szymon0C Szymon0C merged commit 98201cf into main Apr 2, 2026
1 check was pending