feat: Phase 2 (AI Core) + Phase 3 (Preview & Editing) #4
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add setup-sidecar.sh to download platform-specific llama-server binary from llama.cpp releases. Configure Tauri to bundle the sidecar binary and update .gitignore to exclude downloaded binaries. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution: Review failed. The pull request is closed.
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (10)
Walkthrough
Adds a llama-server sidecar and streaming AI pipeline: backend sidecar lifecycle, hardware detection, and SSE-based AI streaming; Tauri plugin/config updates and a setup script for binaries; frontend stores, hooks, and UI components for chat, preview, and inline editing integrated with project snapshots.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Frontend (User)
    participant Chat as ChatPanel / useAiStream
    participant Tauri as Tauri Backend
    participant Sidecar as Llama-server Sidecar
    participant DB as Database

    Note over User,Sidecar: Sidecar Startup Flow
    User->>Chat: startSidecar(modelPath)
    Chat->>Tauri: invoke('start_sidecar', {modelPath, port})
    Tauri->>Sidecar: spawn process (llama-server)
    Tauri-->>Chat: emit 'sidecar-output' (stdout/stderr)
    Tauri->>Sidecar: poll GET /health (until ready or timeout)
    Sidecar-->>Tauri: 200 OK
    Tauri-->>Chat: return running port

    Note over User,Sidecar: AI Chat Streaming Flow
    User->>Chat: send message
    Chat->>Tauri: invoke('stream_generate', {messages, systemPrompt, max_tokens})
    Tauri->>Sidecar: POST /v1/chat/completions {stream: true}
    Sidecar-->>Tauri: SSE data: ... (data: lines)
    Tauri->>Chat: emit 'ai-chunk' events for tokens
    Chat->>Chat: append tokens, render streamBuffer
    Sidecar-->>Tauri: data: [DONE]
    Tauri-->>Chat: return final aggregated response
    Chat->>DB: finalizeStream() (persist assistant message / snapshot)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
12 tasks covering iframe bridge, helper script injection, section detection, inline AI editing, WYSIWYG contentEditable, responsive viewport toggle, and integration wiring. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 13
🧹 Nitpick comments (8)
src/stores/aiStore.ts (2)
53-60: `stopSidecar` does not clear error state or handle failure consistently.
If `stopSidecar` fails, the user won't know. Also, a successful stop should clear any previous error state.
♻️ Suggested improvement
```diff
 stopSidecar: async () => {
   try {
     await invoke('stop_sidecar');
-    set({ sidecarStatus: 'stopped' });
+    set({ sidecarStatus: 'stopped', error: null });
   } catch (error) {
     console.error('Failed to stop sidecar:', error);
+    const msg = error instanceof Error ? error.message : String(error);
+    set({ sidecarStatus: 'error', error: msg });
   }
 },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/stores/aiStore.ts` around lines 53 - 60, The stopSidecar function currently only logs failures and doesn't update the store error state or clear prior errors on success; update stopSidecar (the async function that calls invoke('stop_sidecar')) to on success call set(...) to set sidecarStatus: 'stopped' and clear any existing error (e.g., sidecarError: null), and on catch call set(...) to record the failure into sidecarError (include the error message/object) so the UI can surface the failure instead of just console.error.
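Both aiStore comments reduce to the same step: a caught value in TypeScript is `unknown`, so it must be normalized to a message string before it can be stored. A minimal sketch of that pattern (the `toErrorMessage` helper name is illustrative; the suggested diffs inline the expression in each catch block):

```typescript
// Normalize an unknown caught value into a human-readable message.
// Hypothetical helper; the PR's suggested diffs inline this expression.
function toErrorMessage(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}
```

Extracting it would keep the `set({ error: msg })` calls consistent across `stopSidecar`, `detectHardware`, and `startSidecar`.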
30-37: Inconsistent error handling: `detectHardware` silently swallows failures.
Unlike `startSidecar`, which updates `error` state on failure, `detectHardware` only logs to console. Consider setting `error` state so the UI can inform the user if hardware detection fails.
♻️ Suggested improvement
```diff
 detectHardware: async () => {
   try {
     const info = await invoke<HardwareInfo>('get_hardware_info');
     set({ hardwareInfo: info });
   } catch (error) {
     console.error('Hardware detection failed:', error);
+    const msg = error instanceof Error ? error.message : String(error);
+    set({ error: msg });
   }
 },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/stores/aiStore.ts` around lines 30 - 37, The detectHardware function currently only logs failures to console; update its catch block to set the store's error state (same pattern used by startSidecar) so the UI can react—use the existing set function to call set({ error: <error message or Error object> }) and optionally clear hardwareInfo (e.g., set({ hardwareInfo: null })) when detection fails; ensure you capture the caught error variable (error) and pass a useful message or the error itself to the error state.
src-tauri/src/hardware.rs (1)
61-70: Redundant match arm: both final cases return `Minimum`.
Lines 67-68 both match to `HardwareTier::Minimum`, so the guarded arm is redundant:
♻️ Suggested fix
```diff
 fn classify_tier(ram_gb: f64, gpu: &GpuType) -> HardwareTier {
     match gpu {
         GpuType::Metal if ram_gb >= 32.0 => HardwareTier::Optimal,
         GpuType::Nvidia { vram_gb } if ram_gb >= 32.0 && *vram_gb >= 8.0 => HardwareTier::Optimal,
         GpuType::Metal if ram_gb >= 16.0 => HardwareTier::Recommended,
         GpuType::Nvidia { vram_gb } if *vram_gb >= 6.0 => HardwareTier::Recommended,
-        _ if ram_gb >= 8.0 => HardwareTier::Minimum,
         _ => HardwareTier::Minimum,
     }
 }
```
If the intent was to distinguish "meets minimum RAM" vs "below minimum", consider returning a different tier or adding a `BelowMinimum` variant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/hardware.rs` around lines 61 - 70, The match in classify_tier contains two final arms both returning HardwareTier::Minimum, making the second redundant; update classify_tier to clearly distinguish systems that meet the minimum RAM from those below it (e.g., change the last arm to return a new HardwareTier::BelowMinimum variant or remove the duplicate and make the final fallback return BelowMinimum), and if you add a new variant update the HardwareTier enum and any match exhaustiveness sites (refer to classify_tier and HardwareTier when making the change).
src-tauri/src/ai.rs (2)
90-103: Silent parse failures may hide SSE format issues.
When `serde_json::from_str` fails, the chunk is silently skipped. Consider logging parse failures in debug builds to aid troubleshooting:
♻️ Suggested improvement
```diff
-                if let Ok(parsed) = serde_json::from_str::<ChatChunkResponse>(data) {
+                match serde_json::from_str::<ChatChunkResponse>(data) {
+                    Ok(parsed) => {
                         for choice in &parsed.choices {
                             if let Some(content) = &choice.delta.content {
                                 full_response.push_str(content);
                                 let _ = app.emit(
                                     "ai-chunk",
                                     AiChunk {
                                         token: content.clone(),
                                         done: choice.finish_reason.is_some(),
                                     },
                                 );
                             }
                         }
+                    }
+                    Err(e) => {
+                        #[cfg(debug_assertions)]
+                        eprintln!("Failed to parse SSE chunk: {} - data: {}", e, data);
+                    }
                 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/ai.rs` around lines 90 - 103, The code silently ignores JSON parse errors when calling serde_json::from_str::<ChatChunkResponse>(data); update the parsing branch around that call (the block building full_response and emitting "ai-chunk") to log parse failures in debug builds: capture the Err(e) from serde_json::from_str and call debug!(...) or eprintln! (depending on project logging conventions) with the error and the raw data string so developers can inspect malformed SSE chunks; keep the existing successful-path behavior that pushes to full_response and emits AiChunk on parsed.choices.
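The SSE handling at issue can be illustrated outside Rust. Here is a TypeScript sketch of per-line parsing that logs malformed payloads instead of dropping them; the `ChatChunk` shape follows the OpenAI-style `/v1/chat/completions` stream, and the function name and logger are assumptions, not code from the PR:

```typescript
type ChatChunk = {
  choices: { delta: { content?: string }; finish_reason: string | null }[];
};

// Handle one SSE line: collect delta tokens, log malformed payloads,
// and report whether the stream signalled completion via [DONE].
function handleSseLine(line: string, tokens: string[], log: (msg: string) => void): boolean {
  if (!line.startsWith("data: ")) return false;
  const data = line.slice("data: ".length);
  if (data === "[DONE]") return true;
  try {
    const parsed = JSON.parse(data) as ChatChunk;
    for (const choice of parsed.choices) {
      if (choice.delta.content) tokens.push(choice.delta.content);
    }
  } catch (e) {
    // Surface parse failures instead of silently skipping the chunk.
    log(`Failed to parse SSE chunk: ${e} - data: ${data}`);
  }
  return false;
}
```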
47-53: No timeout configured for the HTTP request.
If the sidecar becomes unresponsive, this request will block indefinitely. Consider adding a timeout:
♻️ Suggested fix
```diff
-    let client = reqwest::Client::new();
+    let client = reqwest::Client::builder()
+        .timeout(std::time::Duration::from_secs(300))
+        .build()
+        .map_err(|e| format!("Failed to create client: {}", e))?;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/ai.rs` around lines 47 - 53, The request uses reqwest::Client::new() with no timeout, which can hang if the sidecar is unresponsive; replace the client construction with reqwest::Client::builder() and set a sensible timeout (e.g., Duration::from_secs(10)) via .timeout(...) before calling .build(), and ensure the Duration type is imported; update the code that currently calls reqwest::Client::new() and the subsequent .post(...).json(...).send().await path to use the newly built client with the configured timeout.
src/hooks/useAiStream.ts (1)
58-61: On error, `streamBuffer` may contain partial content that isn't cleared.
When generation fails mid-stream, the buffer retains partial tokens. Consider clearing the buffer on error to avoid displaying stale/incomplete content on the next attempt.
♻️ Suggested fix
```diff
 } catch (error) {
   console.error('AI generation failed:', error);
   setStreaming(false);
+  // Consider adding a clearStreamBuffer action or passing false to trigger reset
   return null;
 } finally {
```
In chatStore.ts, `setStreaming(false)` already preserves the buffer. You may want to add a dedicated `clearStreamBuffer` action or adjust `setStreaming(false)` behavior for error scenarios.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/hooks/useAiStream.ts` around lines 58 - 61, The catch block in useAiStream.ts currently leaves streamBuffer populated when an error occurs; add logic to clear the partial buffer there by invoking a new or existing action (e.g., call chatStore.clearStreamBuffer() or change chatStore.setStreaming(false, { clearBuffer: true })) so the partial tokens are removed on error, while still calling setStreaming(false) to stop streaming; if clearStreamBuffer doesn't exist, add it to chatStore.ts (or extend setStreaming to accept a clear-on-error flag) and ensure useAiStream.ts calls that action in the catch branch.src/stores/chatStore.ts (1)
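A sketch of the suggested error path with the buffer cleared on failure; the store shape is simplified to plain fields, and the hypothetical `clearStreamBuffer` action is modeled as a direct reset:

```typescript
type StreamStore = { streaming: boolean; streamBuffer: string };

// Run a generation attempt; on failure, stop streaming AND drop partial tokens
// so stale content is not shown on the next attempt.
async function runGeneration(
  store: StreamStore,
  generate: () => Promise<string>,
): Promise<string | null> {
  store.streaming = true;
  try {
    return await generate();
  } catch (error) {
    console.error('AI generation failed:', error);
    store.streamBuffer = ''; // stands in for the proposed clearStreamBuffer action
    return null;
  } finally {
    store.streaming = false;
  }
}
```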
27-34: Edge case: `loadMessages` replaces the entire messages array.
If the user switches projects while streaming, `loadMessages` will be called (via the ChatPanel useEffect), replacing `messages` with the new project's history. The `streamBuffer` persists independently, but `finalizeStream` will then save the streamed content to the wrong project if the user switched mid-stream.
Consider guarding against this in ChatPanel by disabling project switches during streaming or aborting the stream on project change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/stores/chatStore.ts` around lines 27 - 34, The loadMessages function replaces the entire messages array which can mismatch with an ongoing stream: if a user switches projects while streaming, streamBuffer and finalizeStream may persist and save into the wrong project. Fix this by ensuring ChatPanel's useEffect (which calls loadMessages) first checks the chatStore streaming state (or exposes an isStreaming flag) and either disable project switches during streaming or abort the active stream before calling loadMessages; alternatively, have ChatPanel call an exposed abortStream/clearStreamBuffer method on the store (referencing streamBuffer, finalizeStream, loadMessages, ChatPanel useEffect) to cancel the stream and clear streamBuffer when projectId changes so streamed content cannot be saved to the wrong project.src/lib/prompts.ts (1)
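One way to implement the guard: capture the project id when the stream starts and refuse to finalize into a different project. A simplified TypeScript sketch (the `activeStreamProjectId` field and these signatures loosely follow the store but are assumptions):

```typescript
type StreamState = { activeStreamProjectId: string | null; streamBuffer: string };

function beginStream(state: StreamState, projectId: string): void {
  state.activeStreamProjectId = projectId;
  state.streamBuffer = "";
}

// Persist only when the stream still belongs to the visible project;
// otherwise discard the buffer instead of saving into the wrong project.
function finalizeStream(
  state: StreamState,
  currentProjectId: string,
  save: (content: string) => void,
): boolean {
  const owned = state.activeStreamProjectId === currentProjectId;
  if (owned) save(state.streamBuffer);
  state.streamBuffer = "";
  state.activeStreamProjectId = null;
  return owned;
}
```

Aborting the stream on project change remains the cleaner UX; this guard is a backstop against the mis-persistence itself.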
35-45: Consider stricter typing for the `role` field.
Using `string` for `role` loses type safety. A union type ensures only valid roles are used.
✨ Proposed type improvement
```diff
+type ChatRole = 'user' | 'assistant' | 'system';
+type ChatMessage = { role: ChatRole; content: string };
+
 export function buildGenerateMessages(
   userPrompt: string,
   siteType: string
-): Array<{ role: string; content: string }> {
+): ChatMessage[] {
   return [
     {
       role: 'user',
       content: `Create a ${siteType} website: ${userPrompt}`,
     },
   ];
 }
```
Apply the same `ChatMessage` type to the other builder functions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/prompts.ts` around lines 35 - 45, The buildGenerateMessages function currently types roles as a plain string; define a specific ChatMessage type (e.g., type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string }) and change buildGenerateMessages's return type to Array<ChatMessage> (and its internal role value to the 'user' literal), then update other message builder functions to use the same ChatMessage type so only valid roles are allowed across the codebase.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/setup-sidecar.sh`:
- Line 8: The TARGET assignment uses an invalid rustc --print option; update the
assignment that sets TARGET to use rustc --print host-tuple (instead of
host-triple) and keep the existing fallback to "unknown", and add a secondary
fallback that parses rustc -Vv output when --print host-tuple is not supported;
specifically modify the TARGET variable assignment and any related logic that
checks TARGET so it first tries `rustc --print host-tuple 2>/dev/null`, then if
that fails attempts parsing `rustc -Vv` for the host triple, and only then falls
back to "unknown" to avoid triggering the unsupported-platform branch.
In `@src-tauri/capabilities/default.json`:
- Around line 11-12: The permissions in default.json currently grant broad shell
access via "shell:allow-spawn" and "shell:allow-execute"; restrict them to only
the llama sidecar by replacing the bare permissions with scoped shell
permissions that include the sidecar name used in your code (llama-server) —
align the permission "name" value with the sidecar identifier referenced in
sidecar.rs where you call .sidecar("llama-server").spawn(), and remove the
global execute/spawn entries so only the scoped permission remains.
In `@src-tauri/src/ai.rs`:
- Around line 29-45: The stream_generate command currently only accepts messages
but must also accept the frontend's systemPrompt and maxTokens; update the
stream_generate function signature to include system_prompt: Option<String> (or
String) and max_tokens: Option<u32>, then build the request body so it prepends
the system prompt to the messages array (or inserts a message with role "system"
using system_prompt) and include "max_tokens": max_tokens in the JSON body sent
from the body variable; ensure you update usage of messages and body
construction in stream_generate and handle None values safely when constructing
the final messages list.
In `@src-tauri/src/sidecar.rs`:
- Around line 111-131: The health-check timeout path currently returns an error
but leaves the spawned child process in SidecarState, blocking future starts;
modify the timeout branch in the health-poll loop (inside the start_sidecar
async flow where health_url, client, start, timeout are defined) to perform
cleanup: take the SidecarState.child (e.g. let mut child_opt =
sidecar_state.child.take()), and if Some(child) then attempt to kill and await
the child (child.kill() and child.wait().await or equivalent for the child
type), handling/logging errors, ensuring sidecar_state.child remains None, then
return the timeout Err; this ensures the spawned process is terminated and state
is cleared on health-check timeout.
- Around line 96-109: The monitoring task handling CommandEvent::Terminated
should clear SidecarState.child so sidecar_status no longer reports running:
change SidecarState.child to be shareable (wrap it in Arc<Mutex<...>> or make
SidecarState itself Arc) and pass that Arc into the spawned monitor task, then
when matching CommandEvent::Terminated set state.child = None (acquire the
mutex) before emitting the "sidecar-output" event and breaking; update any
constructors/usages of SidecarState/sidecar_status to use the new Arc<Mutex>
type accordingly.
- Around line 34-69: The current TOCTOU happens because you release the Mutex
from state.child between checking child.is_some() and assigning the spawned
process, allowing concurrent starts; fix by making the check-and-set atomic:
either hold the same lock across the spawn call so no other task can pass the
check (wrap the sidecar().spawn() while holding state.child.lock()), or use a
reservation pattern where you acquire state.child.lock(), check is_some(), set a
temporary placeholder/enum (e.g., Starting) to mark that a start is in progress,
drop the lock, perform sidecar("llama-server").spawn(), then reacquire the lock
and replace the placeholder with the real child (or clear it on spawn error);
update references to state.child.lock(), sidecar("llama-server"), spawn(),
rx/child accordingly.
In `@src-tauri/tauri.conf.json`:
- Line 35: The tauri bundling fails because the configured externalBin
("externalBin") expects src-tauri/binaries/llama-server-{TARGET} to exist but
the sidecar setup script is not run; update tauri.conf.json to run the sidecar
setup before building by adding a beforeBuildCommand that invokes
scripts/setup-sidecar.sh (so the binary is created prior to bundling), or
alternatively add a clear README prerequisite instructing developers/CI to run
scripts/setup-sidecar.sh before invoking npm run build / npm run dev; ensure you
modify the beforeBuildCommand key in tauri.conf.json (or document the
requirement) so the external binary exists when tauri bundles.
In `@src/components/chat/ChatInput.tsx`:
- Around line 33-48: The textarea currently strips the browser focus ring via
"outline-none" and the send button uses an ambiguous "↑" label; restore an
accessible focus state by removing or replacing "outline-none" on the textarea
with a visible focus style (e.g., add a focus / focus-visible ring class such as
focus-visible:ring-2 focus-visible:ring-[var(--color-accent)] or similar) and
make the send button meaningful to assistive tech by replacing the lone arrow
with a textual label (e.g., "Send") or keep the arrow visually but add an
aria-label="Send" and/or an offscreen <span> for screen readers; ensure the
button still uses the existing disabled logic (disabled || !value.trim()) and
the click handler handleSend and keyboard handler handleKeyDown remain
unchanged.
- Around line 23-27: The Enter key handler in handleKeyDown currently submits on
Enter without guarding IME composition; update handleKeyDown in ChatInput (the
KeyboardEvent handler) to return early if e.nativeEvent?.isComposing is true so
that Enter during IME composition does not trigger handleSend; keep the existing
checks for e.key === 'Enter' and !e.shiftKey and only call e.preventDefault()
and handleSend() when not composing.
In `@src/components/chat/ChatPanel.tsx`:
- Line 37: The code computes hasHtml via currentProject.html.length which can
throw if currentProject.html is null/undefined; update the check in ChatPanel
(where hasHtml is defined) to safely handle absent html by testing for a truthy
array/string or using Array.isArray/currentProject.html != null before accessing
length (e.g., compute hasHtml = !!currentProject.html &&
currentProject.html.length > 0 or equivalent), and ensure any downstream logic
that relies on hasHtml still works with the new boolean.
In `@src/hooks/useAiStream.ts`:
- Around line 46-52: The frontend is calling invoke('stream_generate', {
systemPrompt, port: sidecarPort, messages, maxTokens }) but the backend command
stream_generate (in ai.rs) only deserializes messages, so systemPrompt,
maxTokens and port are dropped; fix by updating the backend stream_generate
handler to accept/deserialise the same parameter names (systemPrompt or
system_prompt, maxTokens/max_tokens, and port) and then forward systemPrompt and
maxTokens into the API request (and use the provided port if you want to
override SidecarState), or alternatively align the frontend to only send the
exact fields the backend expects — locate the invoke call
(invoke('stream_generate', ...)) and the stream_generate command in ai.rs and
make their parameter names and usages consistent so system prompt and max tokens
are actually forwarded to the API.
In `@src/lib/prompts.ts`:
- Around line 47-66: buildEditMessages is duplicating the just-stored user
message because addMessage stores the prompt and chatHistory (used as
recentHistory via chatHistory.slice(-6)) already contains that user entry;
update buildEditMessages to filter out any recentHistory entries whose content
exactly equals the provided userPrompt (or otherwise skip the last element when
messages.slice(-6) would include the same user message) before pushing
recentHistory and finally pushing { role: 'user', content: userPrompt };
reference buildEditMessages, chatHistory, recentHistory, messages.slice(-6), and
userPrompt when making the change.
In `@src/stores/projectStore.ts`:
- Around line 104-111: The current createSnapshot flow reads MAX(version) via
db.select and then inserts, which races when concurrent calls run; wrap the
version allocation and insert in a single database transaction (or use a single
atomic INSERT ... SELECT that computes COALESCE(MAX(version),0)+1 for the given
project_id) so two concurrent createSnapshot() calls cannot pick the same
nextVersion. Modify the code around the db.select and db.execute in
createSnapshot (or the surrounding function) to start a transaction, re-read
MAX(version) with a FOR UPDATE/locked read if your DB supports it, compute
nextVersion inside the transaction, perform the INSERT, and commit (or replace
with a single locked INSERT ... SELECT statement) to ensure atomic version
assignment.
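The prompts.ts de-duplication described in the list above can be sketched as follows (signatures simplified; the real builder presumably takes additional context such as the current HTML):

```typescript
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

// Take the recent history but drop any entry that duplicates the prompt
// being sent, then append the prompt as the final user message.
function buildEditMessages(messages: ChatMessage[], userPrompt: string): ChatMessage[] {
  const recentHistory = messages
    .slice(-6)
    .filter((m) => !(m.role === "user" && m.content === userPrompt));
  return [...recentHistory, { role: "user", content: userPrompt }];
}
```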
---
Nitpick comments:
In `@src-tauri/src/ai.rs`:
- Around line 90-103: The code silently ignores JSON parse errors when calling
serde_json::from_str::<ChatChunkResponse>(data); update the parsing branch
around that call (the block building full_response and emitting "ai-chunk") to
log parse failures in debug builds: capture the Err(e) from serde_json::from_str
and call debug!(...) or eprintln! (depending on project logging conventions)
with the error and the raw data string so developers can inspect malformed SSE
chunks; keep the existing successful-path behavior that pushes to full_response
and emits AiChunk on parsed.choices.
- Around line 47-53: The request uses reqwest::Client::new() with no timeout,
which can hang if the sidecar is unresponsive; replace the client construction
with reqwest::Client::builder() and set a sensible timeout (e.g.,
Duration::from_secs(10)) via .timeout(...) before calling .build(), and ensure
the Duration type is imported; update the code that currently calls
reqwest::Client::new() and the subsequent .post(...).json(...).send().await path
to use the newly built client with the configured timeout.
In `@src-tauri/src/hardware.rs`:
- Around line 61-70: The match in classify_tier contains two final arms both
returning HardwareTier::Minimum, making the second redundant; update
classify_tier to clearly distinguish systems that meet the minimum RAM from
those below it (e.g., change the last arm to return a new
HardwareTier::BelowMinimum variant or remove the duplicate and make the final
fallback return BelowMinimum), and if you add a new variant update the
HardwareTier enum and any match exhaustiveness sites (refer to classify_tier and
HardwareTier when making the change).
In `@src/hooks/useAiStream.ts`:
- Around line 58-61: The catch block in useAiStream.ts currently leaves
streamBuffer populated when an error occurs; add logic to clear the partial
buffer there by invoking a new or existing action (e.g., call
chatStore.clearStreamBuffer() or change chatStore.setStreaming(false, {
clearBuffer: true })) so the partial tokens are removed on error, while still
calling setStreaming(false) to stop streaming; if clearStreamBuffer doesn't
exist, add it to chatStore.ts (or extend setStreaming to accept a clear-on-error
flag) and ensure useAiStream.ts calls that action in the catch branch.
In `@src/lib/prompts.ts`:
- Around line 35-45: The buildGenerateMessages function currently types roles as
a plain string; define a specific ChatMessage type (e.g., type ChatMessage = {
role: 'user' | 'assistant' | 'system'; content: string }) and change
buildGenerateMessages's return type to Array<ChatMessage> (and its internal role
value to the 'user' literal), then update other message builder functions to use
the same ChatMessage type so only valid roles are allowed across the codebase.
In `@src/stores/aiStore.ts`:
- Around line 53-60: The stopSidecar function currently only logs failures and
doesn't update the store error state or clear prior errors on success; update
stopSidecar (the async function that calls invoke('stop_sidecar')) to on success
call set(...) to set sidecarStatus: 'stopped' and clear any existing error
(e.g., sidecarError: null), and on catch call set(...) to record the failure
into sidecarError (include the error message/object) so the UI can surface the
failure instead of just console.error.
- Around line 30-37: The detectHardware function currently only logs failures to
console; update its catch block to set the store's error state (same pattern
used by startSidecar) so the UI can react—use the existing set function to call
set({ error: <error message or Error object> }) and optionally clear
hardwareInfo (e.g., set({ hardwareInfo: null })) when detection fails; ensure
you capture the caught error variable (error) and pass a useful message or the
error itself to the error state.
In `@src/stores/chatStore.ts`:
- Around line 27-34: The loadMessages function replaces the entire messages
array which can mismatch with an ongoing stream: if a user switches projects
while streaming, streamBuffer and finalizeStream may persist and save into the
wrong project. Fix this by ensuring ChatPanel's useEffect (which calls
loadMessages) first checks the chatStore streaming state (or exposes an
isStreaming flag) and either disable project switches during streaming or abort
the active stream before calling loadMessages; alternatively, have ChatPanel
call an exposed abortStream/clearStreamBuffer method on the store (referencing
streamBuffer, finalizeStream, loadMessages, ChatPanel useEffect) to cancel the
stream and clear streamBuffer when projectId changes so streamed content cannot
be saved to the wrong project.
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8711e73f-4759-41ec-a500-a114899657be
⛔ Files ignored due to path filters (1)
`src-tauri/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (18)
.gitignore
scripts/setup-sidecar.sh
src-tauri/Cargo.toml
src-tauri/capabilities/default.json
src-tauri/src/ai.rs
src-tauri/src/hardware.rs
src-tauri/src/lib.rs
src-tauri/src/sidecar.rs
src-tauri/tauri.conf.json
src/components/chat/ChatInput.tsx
src/components/chat/ChatMessage.tsx
src/components/chat/ChatPanel.tsx
src/hooks/useAiStream.ts
src/lib/prompts.ts
src/pages/ProjectPage.tsx
src/stores/aiStore.ts
src/stores/chatStore.ts
src/stores/projectStore.ts
```rust
// Check if already running
{
    let child = state.child.lock().map_err(|e| e.to_string())?;
    if child.is_some() {
        return Err("Sidecar is already running".to_string());
    }
}

let port = port.unwrap_or(8080);

let (mut rx, child) = app
    .shell()
    .sidecar("llama-server")
    .map_err(|e| format!("Failed to create sidecar command: {}", e))?
    .args([
        "-m",
        &model_path,
        "--port",
        &port.to_string(),
        "--host",
        "127.0.0.1",
        "-ngl",
        "99",
    ])
    .spawn()
    .map_err(|e| format!("Failed to spawn sidecar: {}", e))?;

// Store child process and port
{
    let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
    *child_lock = Some(child);
}
{
    let mut port_lock = state.port.lock().map_err(|e| e.to_string())?;
    *port_lock = port;
}
```
TOCTOU race condition: concurrent starts can bypass the "already running" check.
The lock is released after checking child.is_some() (line 40) and before storing the new child (line 64). Two concurrent calls can both pass the check and spawn duplicate sidecars.
Hold the lock across the spawn or use a more atomic approach:
🔒 Proposed fix using a single lock scope
```diff
- // Check if already running
- {
-     let child = state.child.lock().map_err(|e| e.to_string())?;
-     if child.is_some() {
-         return Err("Sidecar is already running".to_string());
-     }
- }
-
- let port = port.unwrap_or(8080);
-
- let (mut rx, child) = app
+ let port = port.unwrap_or(8080);
+
+ // Lock held across check and store to prevent race
+ let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
+ if child_lock.is_some() {
+     return Err("Sidecar is already running".to_string());
+ }
+
+ let (mut rx, child) = app
      .shell()
      .sidecar("llama-server")
      .map_err(|e| format!("Failed to create sidecar command: {}", e))?
@@ ..snip..
      .spawn()
      .map_err(|e| format!("Failed to spawn sidecar: {}", e))?;
- // Store child process and port
- {
-     let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
-     *child_lock = Some(child);
- }
+ *child_lock = Some(child);
+ drop(child_lock); // Release before async polling
+
  {
      let mut port_lock = state.port.lock().map_err(|e| e.to_string())?;
      *port_lock = port;
  }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let port = port.unwrap_or(8080);

// Lock held across check and store to prevent race
let mut child_lock = state.child.lock().map_err(|e| e.to_string())?;
if child_lock.is_some() {
    return Err("Sidecar is already running".to_string());
}

let (mut rx, child) = app
    .shell()
    .sidecar("llama-server")
    .map_err(|e| format!("Failed to create sidecar command: {}", e))?
    .args([
        "-m",
        &model_path,
        "--port",
        &port.to_string(),
        "--host",
        "127.0.0.1",
        "-ngl",
        "99",
    ])
    .spawn()
    .map_err(|e| format!("Failed to spawn sidecar: {}", e))?;

*child_lock = Some(child);
drop(child_lock); // Release before async polling

{
    let mut port_lock = state.port.lock().map_err(|e| e.to_string())?;
    *port_lock = port;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/sidecar.rs` around lines 34 - 69, The current TOCTOU happens
because you release the Mutex from state.child between checking child.is_some()
and assigning the spawned process, allowing concurrent starts; fix by making the
check-and-set atomic: either hold the same lock across the spawn call so no
other task can pass the check (wrap the sidecar().spawn() while holding
state.child.lock()), or use a reservation pattern where you acquire
state.child.lock(), check is_some(), set a temporary placeholder/enum (e.g.,
Starting) to mark that a start is in progress, drop the lock, perform
sidecar("llama-server").spawn(), then reacquire the lock and replace the
placeholder with the real child (or clear it on spawn error); update references
to state.child.lock(), sidecar("llama-server"), spawn(), rx/child accordingly.
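The reservation pattern the prompt describes can be sketched outside Rust as well. Below is a hypothetical TypeScript transcription for illustration only (the real fix belongs in `sidecar.rs`): a `starting` placeholder reserves the slot before the async spawn, so the check and the reservation happen atomically. `SidecarGuard` and its `spawn` callback are illustrative names, not code from the repo.

```typescript
// 'starting' is the placeholder that blocks concurrent starts,
// mirroring the Starting enum variant the review suggests.
type SidecarSlot =
  | { state: 'idle' }
  | { state: 'starting' }
  | { state: 'running'; pid: number };

class SidecarGuard {
  private slot: SidecarSlot = { state: 'idle' };

  async start(spawn: () => Promise<number>): Promise<number> {
    // Check-and-reserve runs without an intervening await, so no
    // other caller can pass the check before the slot is reserved
    // (the analogue of holding the mutex across check and store).
    if (this.slot.state !== 'idle') {
      throw new Error('Sidecar is already running');
    }
    this.slot = { state: 'starting' }; // reserve before the async spawn
    try {
      const pid = await spawn();
      this.slot = { state: 'running', pid };
      return pid;
    } catch (err) {
      this.slot = { state: 'idle' }; // clear the reservation on spawn error
      throw err;
    }
  }
}
```

Two concurrent `start` calls then behave like the fixed Rust path: exactly one proceeds, the other fails fast with "already running".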
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes applied successfully: fixed 10 file(s) based on 13 unresolved review comments. A stacked PR containing the fixes has been created.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 10
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md`:
- Around line 16-17: The markdown tables (e.g., the table starting with "| File
| Responsibility |") violate MD058; add a blank line immediately before and
after each table instance in this document so both tables are surrounded by
blank lines (this will satisfy MD058 and the docs CI).
- Around line 815-816: Edit the documentation text describing generateSection so
it uses the compound modifier "full-page HTML" (hyphenated) for clarity;
specifically update the sentence that reads about sending the section's HTML and
replacing the section in the full page HTML to use "full-page HTML" and ensure
references to the generateSection method and the editSection system prompt
remain accurate.
- Around line 934-935: previewIframeRef is declared but never attached to the
PreviewFrame and the replace message uses the stale sectionHtml instead of the
newly generated section result; attach previewIframeRef as the ref prop on the
PreviewFrame component and update all replace message payloads (the code that
sends the "replace" message to the iframe) to use the generated section result
variable (the variable that holds the new HTML after generation) instead of
sectionHtml; apply the same fix to the other occurrences noted (around the other
replace usages) so the iframe receives the fresh content via
previewIframeRef.current.postMessage.
- Around line 192-200: The postMessage usage is currently broadcasting with '*'
and your message handler accepts any message with a matching type, which is
unsafe; replace the wildcard with a strict target origin when calling
window.parent.postMessage (use a configured allowed origin) and harden the
message handler(s) that consume messages by validating event.origin against an
allowlist, ensuring event.source === window.parent (or the expected Window
reference), and performing schema/type guards on event.data (check
event.data.type and required payload fields like id, tagName, outerHtml) before
acting; update the code paths around the postMessage that sends type
'offpage:section-selected' and the corresponding message event listener(s) that
branch on currentMode/wysiwyg to enforce these origin/source checks and payload
validation.
In `@src/components/preview/InlineEditBar.tsx`:
- Around line 9-30: The component allows duplicate submissions because
handleSubmit/fire-onSubmit is fire-and-forget; add a local isSubmitting state in
InlineEditBar and guard submissions (return early if isSubmitting), set
isSubmitting = true before calling onSubmit(selectedSection.id,
selectedSection.outerHtml, text) and await it, only clear the prompt on
successful completion, and in a finally block set isSubmitting = false so
subsequent attempts are allowed; also have handleKeyDown ignore Enter when
isSubmitting and ensure clearSelection still works on Escape. Reference:
InlineEditBar, handleSubmit, handleKeyDown, onSubmit, selectedSection,
clearSelection, setPrompt.
In `@src/components/preview/PreviewFrame.tsx`:
- Around line 57-64: When handling 'offpage:ready' in PreviewFrame.tsx, after
confirming iframeRef.current, additionally request the helper-free, tagged DOM
(the version with data-offpage-id added by detectSections) from the iframe via
the existing bridge instead of only resending the mode; then persist that
returned HTML into the app state so currentProject.html contains those
data-offpage-id attributes before edits are enabled (this ensures later calls to
replaceSectionInHtml in useAiStream.ts and elsewhere are not no-ops).
Concretely: in the case 'offpage:ready' branch (where sendToIframe is used),
send a message to ask for the full tagged document, handle the response event to
receive the HTML string, and update the project store/currentProject.html with
that payload prior to allowing inline/WYSIWYG edits or calling
replaceSectionInHtml.
- Around line 45-54: The handler for 'offpage:wysiwyg-done' is
fire-and-forgetting the async updateProjectHtml call; change it to handle
promise rejections by awaiting or chaining a .catch on updateProjectHtml (in
PreviewFrame.tsx) and react to failures (log via your logger, surface an error
to the UI, and/or revert the iframe edit by reloading currentProject.html) so DB
write errors from updateProjectHtml don't silently revert on next load; keep the
same usage of ensureSectionId and replaceSectionInHtml to build updatedHtml but
ensure updateProjectHtml(currentProject.id, updatedHtml) is awaited/handled and
failures are reported and recovered from.
- Around line 31-35: The handleMessage callback currently accepts any
MessageEvent whose data.type startsWith('offpage:') which allows spoofed
messages; update handleMessage to also validate the sender by checking
event.source === iframeRef.current?.contentWindow (the iframe created via
srcDoc) before processing 'offpage:*' messages so only the preview iframe can
trigger actions like offpage:wysiwyg-done; locate the handler named
handleMessage (useCallback) and add this event.source check early and return if
it fails.
In `@src/lib/helperScript.ts`:
- Around line 78-103: The onClick handler currently calls e.preventDefault() and
e.stopPropagation() unconditionally which blocks native caret placement when
editing; update onClick so that preventDefault()/stopPropagation() are skipped
for WYSIWYG editing: move or guard the calls so they run only when currentMode
!== 'wysiwyg' (or when the clicked section is not the active contentEditable
element), then proceed to set selectedEl, positionOverlay and call
enableContentEditable(section) as before; ensure enableContentEditable(section)
is called before any logic that might re-attach capture listeners so native
clicks and selections inside the contentEditable section are allowed.
In `@src/lib/htmlSections.ts`:
- Around line 19-22: The current openTagRegex (and the replacement logic around
lines 66-70) is too strict: it only allows \w+ tag names and double-quoted
attributes and assumes the element starts at byte 0, so indented output, custom
elements (e.g. hero-banner), or single-quoted attributes are missed or
duplicated. Update the matching and replacement to use a more permissive,
case-insensitive/singleline regex that matches valid HTML tag names (e.g.
[A-Za-z][A-Za-z0-9:-]*), accepts single or double quotes for data-offpage-id,
and does not require the tag to be at the file start; then when replacing,
search for an existing data-offpage-id attribute with a robust attribute regex
and replace its value if present, otherwise insert the attribute into the
existing opening tag while preserving surrounding whitespace/indentation and
quote style; apply this change to openTagRegex and the attribute-replace/insert
logic referenced around the code that handles the first element (the block
currently at lines ~66-70).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4a49bd9f-7164-4280-94e5-33aad1db5c11
📒 Files selected for processing (12)
- CLAUDE.md
- docs/superpowers/plans/2026-04-02-phase3-preview-editing.md
- src/components/chat/ChatPanel.tsx
- src/components/preview/InlineEditBar.tsx
- src/components/preview/PreviewFrame.tsx
- src/components/preview/PreviewToolbar.tsx
- src/hooks/useAiStream.ts
- src/lib/helperScript.ts
- src/lib/htmlSections.ts
- src/lib/iframeBridge.ts
- src/pages/ProjectPage.tsx
- src/stores/editorStore.ts
✅ Files skipped from review due to trivial changes (1)
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (2)
- src/components/chat/ChatPanel.tsx
- src/hooks/useAiStream.ts
| File | Responsibility |
|------|----------------|
Add blank lines around markdown tables (MD058).
Both tables should be surrounded by blank lines to satisfy markdownlint and keep docs CI clean.
Also applies to: 27-28
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 16-16: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 16
- 17, The markdown tables (e.g., the table starting with "| File |
Responsibility |") violate MD058; add a blank line immediately before and after
each table instance in this document so both tables are surrounded by blank
lines (this will satisfy MD058 and the docs CI).
window.parent.postMessage({
  type: 'offpage:section-selected',
  payload: {
    id: section.getAttribute('data-offpage-id'),
    tagName: section.tagName,
    outerHtml: section.outerHTML
  }
}, '*');
} else if (currentMode === 'wysiwyg') {
Harden postMessage contract (origin/source validation missing).
The snippets currently use '*' and process any message with matching type. Add strict event.source and origin checks (and schema guards) before acting on messages to reduce cross-window message injection risk.
Also applies to: 230-237, 243-250, 264-305, 368-370
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 192
- 200, The postMessage usage is currently broadcasting with '*' and your message
handler accepts any message with a matching type, which is unsafe; replace the
wildcard with a strict target origin when calling window.parent.postMessage (use
a configured allowed origin) and harden the message handler(s) that consume
messages by validating event.origin against an allowlist, ensuring event.source
=== window.parent (or the expected Window reference), and performing schema/type
guards on event.data (check event.data.type and required payload fields like id,
tagName, outerHtml) before acting; update the code paths around the postMessage
that sends type 'offpage:section-selected' and the corresponding message event
listener(s) that branch on currentMode/wysiwyg to enforce these origin/source
checks and payload validation.
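The payload half of this hardening can be sketched as a plain type guard. The message shape (`type`, `payload.id`, `tagName`, `outerHtml`) comes from the `offpage:section-selected` contract quoted above; the guard function itself is an illustrative name, and in the real handler it would sit alongside the `event.origin`/`event.source` checks, which need a live `window` and so are not shown here.

```typescript
// Expected shape of the 'offpage:section-selected' message.
interface SectionSelectedMessage {
  type: 'offpage:section-selected';
  payload: { id: string; tagName: string; outerHtml: string };
}

// Narrowing guard: reject anything that is not an object with the exact
// type tag and all three string payload fields.
function isSectionSelectedMessage(data: unknown): data is SectionSelectedMessage {
  if (typeof data !== 'object' || data === null) return false;
  const msg = data as { type?: unknown; payload?: unknown };
  if (msg.type !== 'offpage:section-selected') return false;
  const p = msg.payload as
    | { id?: unknown; tagName?: unknown; outerHtml?: unknown }
    | undefined;
  return (
    typeof p === 'object' && p !== null &&
    typeof p.id === 'string' &&
    typeof p.tagName === 'string' &&
    typeof p.outerHtml === 'string'
  );
}
```

A handler would then run `if (!isSectionSelectedMessage(event.data)) return;` only after the origin/source checks pass, so spoofed or malformed messages never reach the edit paths.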
Add a `generateSection` method that sends only the section's HTML to the AI with the `editSection` system prompt, then replaces the section in the full page HTML.
Use “full-page HTML” as a compound modifier.
Minor wording fix for readability/grammar.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~815-~815: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...rompt, then replaces the section in the full page HTML. - [ ] **Step 1: Update useAiStre...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 815
- 816, Edit the documentation text describing generateSection so it uses the
compound modifier "full-page HTML" (hyphenated) for clarity; specifically update
the sentence that reads about sending the section's HTML and replacing the
section in the full page HTML to use "full-page HTML" and ensure references to
the generateSection method and the editSection system prompt remain accurate.
const previewIframeRef = useRef<HTMLIFrameElement>(null);
Inline edit preview update path is broken in the current plan snippet.
previewIframeRef is never attached to PreviewFrame, and the replace message uses sectionHtml (old HTML) instead of the generated section result. This will either no-op or overwrite with stale content.
Proposed fix (plan snippet adjustment)
- const previewIframeRef = useRef<HTMLIFrameElement>(null);
+ const previewIframeRef = useRef<HTMLIFrameElement>(null);
...
- <PreviewFrame />
+ <PreviewFrame iframeRef={previewIframeRef} />
...
- payload: { id: sectionId, newHtml: sectionHtml },
+ payload: { id: sectionId, newHtml: result },

- export function PreviewFrame() {
-   const iframeRef = useRef<HTMLIFrameElement>(null);
+ interface PreviewFrameProps {
+   iframeRef?: React.RefObject<HTMLIFrameElement | null>;
+ }
+ export function PreviewFrame({ iframeRef: externalRef }: PreviewFrameProps) {
+   const internalRef = useRef<HTMLIFrameElement>(null);
+   const iframeRef = externalRef ?? internalRef;

Also applies to: 959-963, 976-977
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/superpowers/plans/2026-04-02-phase3-preview-editing.md` around lines 934
- 935, previewIframeRef is declared but never attached to the PreviewFrame and
the replace message uses the stale sectionHtml instead of the newly generated
section result; attach previewIframeRef as the ref prop on the PreviewFrame
component and update all replace message payloads (the code that sends the
"replace" message to the iframe) to use the generated section result variable
(the variable that holds the new HTML after generation) instead of sectionHtml;
apply the same fix to the other occurrences noted (around the other replace
usages) so the iframe receives the fresh content via
previewIframeRef.current.postMessage.
export function InlineEditBar({ onSubmit, disabled }: InlineEditBarProps) {
  const [prompt, setPrompt] = useState('');
  const { selectedSection, editMode, clearSelection } = useEditorStore();

  if (editMode !== 'inline' || !selectedSection) return null;

  const handleSubmit = () => {
    const text = prompt.trim();
    if (!text) return;
    onSubmit(selectedSection.id, selectedSection.outerHtml, text);
    setPrompt('');
  };

  const handleKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault();
      handleSubmit();
    }
    if (e.key === 'Escape') {
      clearSelection();
    }
  };
Prevent duplicate inline edits while the request is starting.
Lines 18-19 clear the prompt and fire-and-forget onSubmit. A fast double-click/Enter can enqueue multiple AI requests before streaming flips, and a rejected submit loses the user's text.
⏳ Proposed fix
interface InlineEditBarProps {
- onSubmit: (sectionId: string, sectionHtml: string, prompt: string) => void;
+ onSubmit: (sectionId: string, sectionHtml: string, prompt: string) => Promise<void>;
disabled: boolean;
}
export function InlineEditBar({ onSubmit, disabled }: InlineEditBarProps) {
const [prompt, setPrompt] = useState('');
+ const [submitting, setSubmitting] = useState(false);
const { selectedSection, editMode, clearSelection } = useEditorStore();
@@
- const handleSubmit = () => {
+ const handleSubmit = async () => {
const text = prompt.trim();
- if (!text) return;
- onSubmit(selectedSection.id, selectedSection.outerHtml, text);
- setPrompt('');
+ if (!text || submitting) return;
+ setSubmitting(true);
+ try {
+ await onSubmit(selectedSection.id, selectedSection.outerHtml, text);
+ setPrompt('');
+ } finally {
+ setSubmitting(false);
+ }
};
@@
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
- handleSubmit();
+ void handleSubmit();
}
@@
- disabled={disabled}
+ disabled={disabled || submitting}
@@
- onClick={handleSubmit}
- disabled={disabled || !prompt.trim()}
+ onClick={() => void handleSubmit()}
+ disabled={disabled || submitting || !prompt.trim()}

Also applies to: 42-51
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/preview/InlineEditBar.tsx` around lines 9 - 30, The component
allows duplicate submissions because handleSubmit/fire-onSubmit is
fire-and-forget; add a local isSubmitting state in InlineEditBar and guard
submissions (return early if isSubmitting), set isSubmitting = true before
calling onSubmit(selectedSection.id, selectedSection.outerHtml, text) and await
it, only clear the prompt on successful completion, and in a finally block set
isSubmitting = false so subsequent attempts are allowed; also have handleKeyDown
ignore Enter when isSubmitting and ensure clearSelection still works on Escape.
Reference: InlineEditBar, handleSubmit, handleKeyDown, onSubmit,
selectedSection, clearSelection, setPrompt.
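The submit discipline the prompt asks for can be sketched without React. `createSubmitGuard` below is a hypothetical helper, not part of the component under review; in the real fix the `submitting` flag would live in component state, and the same early return covers both the button click and the Enter key path.

```typescript
// Wraps an async submit so that in-flight requests block duplicates.
// Returns true when the submit completed (caller may clear the prompt),
// false when the attempt was ignored (empty text or already submitting).
function createSubmitGuard(onSubmit: (text: string) => Promise<void>) {
  let submitting = false;
  return async function submit(raw: string): Promise<boolean> {
    const text = raw.trim();
    if (!text || submitting) return false; // ignore double Enter / double click
    submitting = true;
    try {
      await onSubmit(text);
      return true; // only now is it safe to clear the prompt
    } finally {
      submitting = false; // allow retries after success or failure
    }
  };
}
```

Note that a rejected `onSubmit` propagates to the caller while still resetting the flag, so the user keeps their text and can retry.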
const handleMessage = useCallback(
  (event: MessageEvent<IframeToParentMessage>) => {
    const data = event.data;
    if (!data || !data.type || !data.type.startsWith('offpage:')) return;
🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the preview message handler currently accepts offpage:* messages
# without checking the sender window.
rg -n -C3 "addEventListener\\('message'|startsWith\\('offpage:'\\)|event\\.source|event\\.origin" src/components/preview/PreviewFrame.tsx

Repository: Szymon0C/Offpage

🏁 Script executed:

cat -n src/components/preview/PreviewFrame.tsx | head -100

🏁 Script executed:

rg -n "iframe|srcDoc|sandbox" src/components/preview/PreviewFrame.tsx

🏁 Script executed:

rg -n "offpage:" src/components/preview/PreviewFrame.tsx | head -20
Add event.source validation to the message handler.
Line 34 accepts offpage:* messages from any origin without validating the sender. This allows external frames, windows, or browser extensions to spoof iframe-internal messages like offpage:wysiwyg-done, triggering unintended section edits and database persists. Since the iframe is created with srcDoc (line 86), you can safely validate event.source === iframeRef.current?.contentWindow to ensure only the preview iframe can drive these operations.
Proposed fix
const handleMessage = useCallback(
(event: MessageEvent<IframeToParentMessage>) => {
+ if (event.source !== iframeRef.current?.contentWindow) return;
const data = event.data;
    if (!data || !data.type || !data.type.startsWith('offpage:')) return;

📝 Committable suggestion
Original:

const handleMessage = useCallback(
  (event: MessageEvent<IframeToParentMessage>) => {
    const data = event.data;
    if (!data || !data.type || !data.type.startsWith('offpage:')) return;

Suggested:

const handleMessage = useCallback(
  (event: MessageEvent<IframeToParentMessage>) => {
    if (event.source !== iframeRef.current?.contentWindow) return;
    const data = event.data;
    if (!data || !data.type || !data.type.startsWith('offpage:')) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/preview/PreviewFrame.tsx` around lines 31 - 35, The
handleMessage callback currently accepts any MessageEvent whose data.type
startsWith('offpage:') which allows spoofed messages; update handleMessage to
also validate the sender by checking event.source ===
iframeRef.current?.contentWindow (the iframe created via srcDoc) before
processing 'offpage:*' messages so only the preview iframe can trigger actions
like offpage:wysiwyg-done; locate the handler named handleMessage (useCallback)
and add this event.source check early and return if it fails.
case 'offpage:wysiwyg-done':
  if (currentProject) {
    const newSectionHtml = ensureSectionId(data.payload.outerHtml, data.payload.id);
    const updatedHtml = replaceSectionInHtml(
      currentProject.html,
      data.payload.id,
      newSectionHtml
    );
    updateProjectHtml(currentProject.id, updatedHtml);
  }
Handle WYSIWYG save failures instead of dropping the promise.
updateProjectHtml is async in src/stores/projectStore.ts:73-89, but Line 53 fire-and-forgets it. If the DB write rejects, the iframe already shows the edit while the persisted project quietly reverts on the next load.
🧯 Proposed fix
- updateProjectHtml(currentProject.id, updatedHtml);
+ void updateProjectHtml(currentProject.id, updatedHtml).catch((error) => {
+ console.error('Failed to persist WYSIWYG edit:', error);
+ });

📝 Committable suggestion
Original:

case 'offpage:wysiwyg-done':
  if (currentProject) {
    const newSectionHtml = ensureSectionId(data.payload.outerHtml, data.payload.id);
    const updatedHtml = replaceSectionInHtml(
      currentProject.html,
      data.payload.id,
      newSectionHtml
    );
    updateProjectHtml(currentProject.id, updatedHtml);
  }

Suggested:

case 'offpage:wysiwyg-done':
  if (currentProject) {
    const newSectionHtml = ensureSectionId(data.payload.outerHtml, data.payload.id);
    const updatedHtml = replaceSectionInHtml(
      currentProject.html,
      data.payload.id,
      newSectionHtml
    );
    void updateProjectHtml(currentProject.id, updatedHtml).catch((error) => {
      console.error('Failed to persist WYSIWYG edit:', error);
    });
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/preview/PreviewFrame.tsx` around lines 45 - 54, The handler
for 'offpage:wysiwyg-done' is fire-and-forgetting the async updateProjectHtml
call; change it to handle promise rejections by awaiting or chaining a .catch on
updateProjectHtml (in PreviewFrame.tsx) and react to failures (log via your
logger, surface an error to the UI, and/or revert the iframe edit by reloading
currentProject.html) so DB write errors from updateProjectHtml don't silently
revert on next load; keep the same usage of ensureSectionId and
replaceSectionInHtml to build updatedHtml but ensure
updateProjectHtml(currentProject.id, updatedHtml) is awaited/handled and
failures are reported and recovered from.
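The persist-and-recover shape the prompt describes can be sketched as a small helper. `persistEdit` and its callbacks are illustrative stand-ins: `save` would wrap `updateProjectHtml(currentProject.id, updatedHtml)`, `revert` would reload `currentProject.html` into the iframe, and `log` would be the project's logger; none of these names exist in the repo.

```typescript
// Await the save; on failure, surface the error and undo the optimistic
// iframe edit instead of letting the persisted state silently diverge.
async function persistEdit(
  save: () => Promise<void>,
  revert: () => void,
  log: (msg: string, err: unknown) => void,
): Promise<boolean> {
  try {
    await save(); // e.g. updateProjectHtml(currentProject.id, updatedHtml)
    return true;
  } catch (err) {
    log('Failed to persist WYSIWYG edit', err); // report instead of dropping
    revert(); // e.g. reload currentProject.html into the iframe
    return false;
  }
}
```

The key property is ordering: the failure is logged and the UI reverted in the same tick the rejection is observed, so the iframe never keeps showing an edit the database refused.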
case 'offpage:ready':
  // iframe loaded, send current mode
  if (iframeRef.current) {
    sendToIframe(iframeRef.current, {
      type: 'offpage:set-mode',
      payload: { mode: editMode },
    });
  }
Persist generated data-offpage-ids when the iframe becomes ready.
detectSections() in src/lib/helperScript.ts:18-30 adds ids only inside the iframe DOM before it posts offpage:ready. Lines 60-63 just resend the mode, so currentProject.html stays id-less and the later replaceSectionInHtml(currentProject.html, id, ...) calls here and in src/hooks/useAiStream.ts:72-117 no-op on the first inline/WYSIWYG edit. Use the existing bridge to sync a helper-free copy of the tagged DOM back into the store before edits are enabled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/preview/PreviewFrame.tsx` around lines 57 - 64, When handling
'offpage:ready' in PreviewFrame.tsx, after confirming iframeRef.current,
additionally request the helper-free, tagged DOM (the version with
data-offpage-id added by detectSections) from the iframe via the existing bridge
instead of only resending the mode; then persist that returned HTML into the app
state so currentProject.html contains those data-offpage-id attributes before
edits are enabled (this ensures later calls to replaceSectionInHtml in
useAiStream.ts and elsewhere are not no-ops). Concretely: in the case
'offpage:ready' branch (where sendToIframe is used), send a message to ask for
the full tagged document, handle the response event to receive the HTML string,
and update the project store/currentProject.html with that payload prior to
allowing inline/WYSIWYG edits or calling replaceSectionInHtml.
function onClick(e) {
  if (currentMode === 'view') return;
  e.preventDefault();
  e.stopPropagation();
  const section = findParentSection(e.target);
  if (!section) return;

  selectedEl = section;
  positionOverlay(section);

  if (currentMode === 'inline') {
    overlay.style.borderColor = '#6366f1';
    overlay.style.backgroundColor = 'rgba(99,102,241,0.05)';
    window.parent.postMessage({
      type: 'offpage:section-selected',
      payload: {
        id: section.getAttribute('data-offpage-id'),
        tagName: section.tagName,
        outerHtml: section.outerHTML
      }
    }, '*');
  } else if (currentMode === 'wysiwyg') {
    overlay.style.borderColor = '#f59e0b';
    overlay.style.backgroundColor = 'rgba(245,158,11,0.05)';
    enableContentEditable(section);
  }
🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the helper still intercepts all clicks in capture phase even after enabling contentEditable.
rg -n -C2 "addEventListener\\('click', onClick, true\\)|preventDefault\\(|contentEditable = 'true'" src/lib/helperScript.ts

🏁 Script executed:

#!/bin/bash
# Get broader context around the onClick function and check the addEventListener calls
sed -n '70,115p' src/lib/helperScript.ts

🏁 Script executed:

#!/bin/bash
# Check what's around lines 208-210 mentioned as "Also applies to"
sed -n '200,220p' src/lib/helperScript.ts

🏁 Script executed:

#!/bin/bash
# Search for other click or input handlers that might interact with this
rg -n "addEventListener.*click|addEventListener.*input|addEventListener.*mousedown|addEventListener.*mouseup" src/lib/helperScript.ts
Don't preventDefault() once the selected section is being edited.
The capture-phase body listener blocks every click in WYSIWYG mode. After the section is made contentEditable on line 112, the prevented clicks block default caret positioning and text selection inside the active section, making precise edits impossible.
✍️ Proposed fix
function onClick(e) {
if (currentMode === 'view') return;
- e.preventDefault();
- e.stopPropagation();
const section = findParentSection(e.target);
if (!section) return;
+
+ const editingSelectedSection =
+ currentMode === 'wysiwyg' && selectedEl === section && section.isContentEditable;
+ if (!editingSelectedSection) {
+ e.preventDefault();
+ e.stopPropagation();
+ }
selectedEl = section;
    positionOverlay(section);

📝 Committable suggestion
Original:

function onClick(e) {
  if (currentMode === 'view') return;
  e.preventDefault();
  e.stopPropagation();
  const section = findParentSection(e.target);
  if (!section) return;
  selectedEl = section;
  positionOverlay(section);
  if (currentMode === 'inline') {
    overlay.style.borderColor = '#6366f1';
    overlay.style.backgroundColor = 'rgba(99,102,241,0.05)';
    window.parent.postMessage({
      type: 'offpage:section-selected',
      payload: {
        id: section.getAttribute('data-offpage-id'),
        tagName: section.tagName,
        outerHtml: section.outerHTML
      }
    }, '*');
  } else if (currentMode === 'wysiwyg') {
    overlay.style.borderColor = '#f59e0b';
    overlay.style.backgroundColor = 'rgba(245,158,11,0.05)';
    enableContentEditable(section);
  }

Suggested:

function onClick(e) {
  if (currentMode === 'view') return;
  const section = findParentSection(e.target);
  if (!section) return;
  const editingSelectedSection =
    currentMode === 'wysiwyg' && selectedEl === section && section.isContentEditable;
  if (!editingSelectedSection) {
    e.preventDefault();
    e.stopPropagation();
  }
  selectedEl = section;
  positionOverlay(section);
  if (currentMode === 'inline') {
    overlay.style.borderColor = '#6366f1';
    overlay.style.backgroundColor = 'rgba(99,102,241,0.05)';
    window.parent.postMessage({
      type: 'offpage:section-selected',
      payload: {
        id: section.getAttribute('data-offpage-id'),
        tagName: section.tagName,
        outerHtml: section.outerHTML
      }
    }, '*');
  } else if (currentMode === 'wysiwyg') {
    overlay.style.borderColor = '#f59e0b';
    overlay.style.backgroundColor = 'rgba(245,158,11,0.05)';
    enableContentEditable(section);
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/helperScript.ts` around lines 78 - 103, The onClick handler currently
calls e.preventDefault() and e.stopPropagation() unconditionally which blocks
native caret placement when editing; update onClick so that
preventDefault()/stopPropagation() are skipped for WYSIWYG editing: move or
guard the calls so they run only when currentMode !== 'wysiwyg' (or when the
clicked section is not the active contentEditable element), then proceed to set
selectedEl, positionOverlay and call enableContentEditable(section) as before;
ensure enableContentEditable(section) is called before any logic that might
re-attach capture listeners so native clicks and selections inside the
contentEditable section are allowed.
const openTagRegex = new RegExp(
  `(<(\\w+)[^>]*data-offpage-id\\s*=\\s*"${escapedId}"[^>]*>)`,
  's'
);
There was a problem hiding this comment.
Make section ID handling resilient to real HTML output.
Line 20 only matches \w+ tag names and a double-quoted data-offpage-id, while Lines 66-70 assume the first element starts at byte 0 and that the existing attribute already has one exact spelling. Indented model output, single-quoted attributes, or custom elements like <hero-banner> will silently skip replacement or duplicate the id, which breaks subsequent section edits.
🛠️ Proposed fix
const openTagRegex = new RegExp(
- `(<(\\w+)[^>]*data-offpage-id\\s*=\\s*"${escapedId}"[^>]*>)`,
+ `(<([A-Za-z][A-Za-z0-9:-]*)[^>]*data-offpage-id\\s*=\\s*["']${escapedId}["'][^>]*>)`,
's'
);
@@
export function ensureSectionId(sectionHtml: string, sectionId: string): string {
- // Check if data-offpage-id already present
- if (sectionHtml.includes(`data-offpage-id="${sectionId}"`)) {
- return sectionHtml;
- }
- // Add to first opening tag
- return sectionHtml.replace(/^(<\w+)/, `$1 data-offpage-id="${sectionId}"`);
+ const template = document.createElement('template');
+ template.innerHTML = sectionHtml.trim();
+ const root = template.content.firstElementChild;
+ if (!root) return sectionHtml;
+ root.setAttribute('data-offpage-id', sectionId);
+ return root.outerHTML;
 }

Also applies to: 64-70
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/htmlSections.ts` around lines 19 - 22, The current openTagRegex (and
the replacement logic around lines 66-70) is too strict: it only allows \w+ tag
names and double-quoted attributes and assumes the element starts at byte 0, so
indented output, custom elements (e.g. hero-banner), or single-quoted attributes
are missed or duplicated. Update the matching and replacement to use a more
permissive, case-insensitive/singleline regex that matches valid HTML tag names
(e.g. [A-Za-z][A-Za-z0-9:-]*), accepts single or double quotes for
data-offpage-id, and does not require the tag to be at the file start; then when
replacing, search for an existing data-offpage-id attribute with a robust
attribute regex and replace its value if present, otherwise insert the attribute
into the existing opening tag while preserving surrounding
whitespace/indentation and quote style; apply this change to openTagRegex and
the attribute-replace/insert logic referenced around the code that handles the
first element (the block currently at lines ~66-70).
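A DOM-free sketch of the resilient behavior the prompt describes: tag names may be custom elements, the attribute may use either quote style, and the element need not start at byte 0. The function name mirrors the review's htmlSections.ts, but this is an illustration of the technique, not the shipped code:

```typescript
function ensureSectionId(sectionHtml: string, sectionId: string): string {
  // First opening tag; letters, digits, ':' and '-' cover custom elements
  // like <hero-banner>, and leading whitespace before it is allowed.
  const openTagMatch = sectionHtml.match(/<[A-Za-z][A-Za-z0-9:-]*[^>]*>/);
  if (!openTagMatch) return sectionHtml;
  const tag = openTagMatch[0];

  // Existing attribute, single- or double-quoted: rewrite its value in
  // place, preserving the original quote character via the $2 backreference.
  const idAttr = /(data-offpage-id\s*=\s*)(["'])[^"']*\2/i;
  const patched = idAttr.test(tag)
    ? tag.replace(idAttr, `$1$2${sectionId}$2`)
    : // Otherwise insert before the tag's closing '>', keeping a
      // self-closing '/' in place if present.
      tag.replace(/(\/?)>$/, ` data-offpage-id="${sectionId}"$1>`);

  // Replace only the matched tag, leaving surrounding indentation untouched.
  // The function form avoids '$'-pattern interpretation in the replacement.
  return sectionHtml.replace(tag, () => patched);
}
```

This regex sketch still shares the usual caveat that `[^>]*` can misread a literal `>` inside an attribute value; the template-element approach in the proposed fix sidesteps that when a DOM is available.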
- setup-sidecar.sh: fix rustc --print host-tuple (was host-triple)
- capabilities: scope shell permissions to llama-server only
- ai.rs: add system_prompt and max_tokens as proper command params
- sidecar.rs: fix race condition with atomic check+spawn, Arc&lt;Mutex&gt; for child, cleanup on timeout, clear child on termination
- tauri.conf.json: run setup-sidecar.sh in beforeBuildCommand
- ChatInput: IME composition handling, focus ring, aria-label
- ChatPanel: null safety check on html
- useAiStream: use snake_case params matching Rust API
- prompts.ts: filter duplicate user prompt from chat history
- projectStore: wrap snapshot version in transaction

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
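The ChatInput IME fix listed above reduces to one predicate on the keydown event. A minimal sketch, assuming a React-style handler; the field names mirror `KeyboardEvent`, while the function name is illustrative:

```typescript
interface EnterKeyInfo {
  key: string;
  shiftKey: boolean;
  isComposing: boolean; // true while an IME is composing (KeyboardEvent.isComposing)
}

// Enter submits only outside IME composition (Japanese/Chinese/Korean IMEs
// use Enter to commit a candidate) and without Shift (which inserts a newline).
function shouldSubmitOnEnter(e: EnterKeyInfo): boolean {
  return e.key === 'Enter' && !e.shiftKey && !e.isComposing;
}
```

In the actual handler this would read `e.nativeEvent.isComposing` (or track `compositionstart`/`compositionend` on browsers that fire keydown before the composition flag clears).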
Summary
Phase 2: AI Core
Phase 3: Preview & Editing
Test plan
Generated with Claude Code
Summary by CodeRabbit
New Features
Chores
Documentation