
feat: complete Phase 2 - API server, Dashboard, and CVE sync #5

Open
chojuninengu wants to merge 19 commits into main from feat/cve-sync-and-config

Conversation

chojuninengu (Member) commented Apr 6, 2026

This PR finalizes the Phase 2 development milestone for Zenvra. Key highlights include:

🚀 Features

  • Axum API Server: New Rust-based backend handles scan requests, SSE streaming of results, and integration with Redis/PostgreSQL.
  • SvelteKit 5 Dashboard: A premium web UI for triggering scans, viewing history, and managing vulnerability findings.
  • CVE Sync & Lookup: Backend logic for syncing vulnerability data from NVD/OSV and mapping scan results to CVE IDs.
  • Secrets Detection: Enhanced secrets detection engine with regex-based pattern matching and redaction.

🛠️ Improvements

  • Performance: Optimized regex initialization in the scanner using OnceLock.
  • Developer Experience: Updated project configuration with ESLint flat config, Vite optimization, and strict TypeScript settings.
  • Documentation: Updated AGENTS.md and added a CHANGELOG.md to track progress.
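The `OnceLock` optimization mentioned above follows a standard compile-once pattern. A minimal std-only sketch (the real scanner caches compiled `regex::Regex` values; plain `String`s stand in here so the example has no external dependencies, and all names are illustrative):

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for the scanner's regex table: initialized once,
// then reused for free on every subsequent scan.
static SECRET_PATTERNS: OnceLock<Vec<String>> = OnceLock::new();

fn secret_patterns() -> &'static [String] {
    // `get_or_init` runs the closure at most once, even across threads;
    // later calls return the cached value without re-building it.
    SECRET_PATTERNS.get_or_init(|| {
        vec![
            "aws_access_key".to_string(),
            "private_key".to_string(),
        ]
    })
}

fn main() {
    let first = secret_patterns().as_ptr();
    let second = secret_patterns().as_ptr();
    // Both calls observe the same initialized storage.
    assert_eq!(first, second);
    println!("patterns initialized once: {}", secret_patterns().len());
}
```

This avoids paying regex-compilation cost on every scan invocation, which is the performance win the bullet above refers to.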

📦 Infrastructure

  • Docker Compose: Added configuration for PostgreSQL, Redis, and the Zenvra services for local development.
  • Migrations: Included SQL migrations for the vulnerability database schema.

Created by Antigravity AI

Summary by CodeRabbit

  • New Features

    • Scan History page, CVE Settings (manual sync), AI Settings, and AI model discovery
    • Real-time scan streaming with live progress and incremental findings (including workspace scans)
    • VS Code sidebar UI with diagnostics and incremental scan updates
  • Enhancements

    • Navigation links enabled; improved deployment images and compose services
    • Improved AI config persistence and shared settings across web and extension
  • Chores

    • Version bumped to 0.1.1-rc.2

coderabbitai bot commented Apr 6, 2026

📝 Walkthrough

Scans were reworked to run asynchronously with SSE streaming; server persistence, CVE synchronization, AI model listing, and DB migrations were added; web UI, VS Code extension, CLI config, Dockerfiles, GitHub Actions for GHCR publishing, and related build scripts were introduced or updated.

Changes

• **Workflows** (`.github/workflows/release.yml`, `.github/workflows/ci.yml`): Added `permissions.packages: write`, a new docker-publish job (login to GHCR, build/push API & Web images with cache), and a deploy job gated to main; broadened CI push branches.
• **Containers & Compose** (`crates/server/Dockerfile`, `apps/web/Dockerfile`, `docker-compose.yml`): Added multi-stage Dockerfiles for server and web; added zenvra-api/zenvra-web services, adjusted the host Postgres port mapping, and set envs/depends_on.
• **Database Migrations** (`migrations/202604040001_create_vulnerabilities_table.sql`, `migrations/202604040002_create_scans_table.sql`, `migrations/202604060001_add_osv_fields.sql`): New vulnerabilities, scans, and scan_results tables; trigram/GIN index; OSV columns and ecosystem/package index.
• **Server & CVE Sync** (`crates/server/src/main.rs`, `crates/server/src/cve_sync/mod.rs`, `crates/server/Cargo.toml`): Converted the scan endpoint to spawn background scan tasks with UUIDs and broadcast channels; added an SSE events endpoint, history/sync/ai-models endpoints, DB pool/migrations, and NVD/OSV sync logic; added new dependencies.
• **Scanner & Streaming** (`crates/scanner/src/{lib.rs,engine.rs,finding.rs,ai/*.rs,mod.rs}`): Introduced the ScanEvent enum, run_stream/scan_stream streaming APIs, workspace streaming, AI provider list_models implementations, and Serde attribute changes; the provider factory signature now returns Arc.
• **CLI Config** (`crates/cli/src/main.rs`): Added persistent ZenvraConfig (load/save), a config CLI subcommand with set/show, changed AI config precedence, and updated the default Anthropic model.
• **Web App (UI & Store)** (`apps/web/src/routes/...`, `apps/web/src/lib/*`, `apps/web/package.json`, `apps/web/svelte.config.js`, `apps/web/Dockerfile`): Added History, AI Settings, and CVE Settings pages; shared aiConfig store; switched the scan page to an SSE-driven flow; added API helpers (getHistory, triggerSync, fetchAiModels); bumped the version and switched to adapter-node.
• **VS Code Extension & Sidebar** (`extensions/vscode/src/*`, `extensions/vscode/package.json`, `extensions/vscode/*.{cjs,LICENSE}`): Implemented full-workspace and single-file SSE-driven scans with incremental diagnostics; added a SidebarProvider webview, typed request/response models, new extension settings/UI, packaging scripts, and a LICENSE; bumped the manifest.
• **Workspace & Misc** (`Cargo.toml`, `.gitignore`, `ISSUE_DRAFT.md`, `apps/web/eslint.config.js`): Workspace version bump to 0.1.1-rc.2; ignored deploy-ghcr.sh; added an issue template; extended ESLint globs.

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client (Web / VSCode)
  participant Server as API Server (axum)
  participant Scanner as Scanner Engine (crates/scanner)
  participant DB as Postgres

  Client->>Server: POST /api/v1/scan {code, language, ai_config?}
  Server->>Server: generate scan_id, create broadcast sender, cache slot
  Server->>Scanner: spawn scan_stream(scan_config, sender)
  Scanner->>Server: -> channel: ScanEvent::Progress / Finding / Complete / Error
  Server->>DB: upsert scan_results and scan summary on events
  Server->>Server: cache events for replay
  Client->>Server: GET /api/v1/scan/:id/events (SSE)
  Server->>Client: stream cached + live ScanEvent SSE messages
  Client->>Client: update UI/diagnostics as events arrive
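The "cache events for replay" step in the diagram means a late SSE subscriber first receives everything emitted so far, then follows the live tail. A minimal std-only sketch of that idea (the real server uses tokio broadcast channels; these types and names are illustrative):

```rust
// Illustrative stand-ins for the server's per-scan event slot.
#[derive(Clone, Debug, PartialEq)]
enum ScanEvent {
    Progress(u8),
    Finding(String),
    Complete,
}

#[derive(Default)]
struct ScanSlot {
    cache: Vec<ScanEvent>,
}

impl ScanSlot {
    // Producer side: record each event so later subscribers can replay it.
    fn publish(&mut self, ev: ScanEvent) {
        self.cache.push(ev);
    }

    // Subscriber side: everything emitted so far, in emission order.
    fn replay(&self) -> Vec<ScanEvent> {
        self.cache.clone()
    }
}

fn main() {
    let mut slot = ScanSlot::default();
    slot.publish(ScanEvent::Progress(50));
    slot.publish(ScanEvent::Finding("hardcoded secret".into()));
    slot.publish(ScanEvent::Complete);

    // A client that connects after the scan finished still sees the full stream.
    let replayed = slot.replay();
    assert_eq!(replayed.len(), 3);
    assert_eq!(replayed.last(), Some(&ScanEvent::Complete));
}
```

In the actual server the cached prefix would be concatenated with a live broadcast subscription, so no events are lost between scan start and client connect.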

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • Nkwenti-Severian-Ndongtsop

Poem

🐇 Hopping through events one by one,
I stream the findings, blink, and run.
Models fetched, containers spun,
DBs synced and servers begun.
Carrots, code, and jobs — all done!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

• Docstring Coverage: ⚠️ Warning. Docstring coverage is 66.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
• Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
• Title Check: ✅ Passed. The pull request title accurately summarizes the main objective: completing Phase 2 with the API server, dashboard, and CVE sync implementation.



Review Summary by Qodo

Complete Phase 2 - API server with SSE streaming, CVE sync, and full-stack dashboard

✨ Enhancement


Walkthroughs

Description
• **API Server Refactor:** Complete rewrite with async SSE streaming for real-time scan progress, PostgreSQL persistence, and CVE enrichment from the local database
• **CVE Synchronization:** New module syncing vulnerability data from NVD API v2, with CVSS scores and descriptions stored in PostgreSQL
• **Event Streaming Architecture:** Added ScanEvent enum and scan_stream() function enabling real-time progress updates across all components
• **Dashboard Enhancements:** New pages for scan history, AI settings configuration, and CVE database management with real-time progress indicators
• **VS Code Extension:** Implemented sidebar webview provider with SSE streaming integration for in-editor scan results and diagnostics
• **CLI Configuration:** Added local config management with ~/.config/zenvra/config.json for persistent AI settings
• **Model Discovery:** Unified list_models() interface across all AI providers (Anthropic, OpenAI, Google, Custom)
• **Infrastructure:** Added Docker Compose configuration, multi-stage Dockerfiles for API and web services, database migrations, and GitHub Actions workflow for container publishing
• **Dependencies:** Updated server dependencies for database support (sqlx, chrono), CLI (clap), and streaming (tokio-stream, dashmap)
Diagram
flowchart LR
  NVD["NVD API v2"]
  OSV["OSV Database"]
  CVESync["CVE Sync Module"]
  PG["PostgreSQL"]
  Redis["Redis"]
  APIServer["Axum API Server"]
  SSE["SSE Streaming"]
  Dashboard["SvelteKit Dashboard"]
  VSCode["VS Code Extension"]
  Scanner["Scanner Engine"]
  
  NVD -- "fetch vulnerabilities" --> CVESync
  OSV -- "fetch vulnerabilities" --> CVESync
  CVESync -- "upsert records" --> PG
  Scanner -- "emit events" --> SSE
  APIServer -- "stream events" --> SSE
  SSE -- "real-time updates" --> Dashboard
  SSE -- "real-time updates" --> VSCode
  Dashboard -- "trigger scans" --> APIServer
  VSCode -- "trigger scans" --> APIServer
  APIServer -- "cache results" --> Redis
  APIServer -- "persist metadata" --> PG
  PG -- "CVE enrichment" --> APIServer


File Changes

1. crates/server/src/main.rs ✨ Enhancement +306/-16

Complete API server refactor with SSE streaming and persistence

• Refactored API server to support async scan streaming via SSE with real-time progress updates
• Added CLI subcommands for serve and sync modes with database migrations
• Implemented new endpoints: /api/v1/scan/:id/events (SSE), /api/v1/history, /api/v1/sync, /api/v1/ai/models
• Added AppState struct with DashMap for managing live scan broadcasts and cached results
• Integrated PostgreSQL persistence for scan metadata and findings with CVE enrichment from local database

crates/server/src/main.rs


2. crates/cli/src/main.rs ✨ Enhancement +96/-5

Add local configuration management for CLI settings

• Added Config subcommand with Set and Show actions for managing local configuration
• Implemented ZenvraConfig struct for persistent storage in ~/.config/zenvra/config.json
• Updated AI config resolution to prioritize CLI flags > local config > environment variables
• Changed default Anthropic model from claude-sonnet-4-20250514 to claude-3-5-sonnet-20240620

crates/cli/src/main.rs
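The precedence rule above (CLI flags > local config > environment variables) can be sketched with `Option` combinators. A minimal stand-in, assuming a `ZENVRA_API_KEY` environment variable and a `resolve_api_key` helper that are illustrative, not the PR's actual names:

```rust
use std::env;

// Sketch of the AI-config precedence described above: an explicit CLI flag
// wins, then the value loaded from ~/.config/zenvra/config.json, then the
// environment. Names here are hypothetical.
fn resolve_api_key(cli_flag: Option<String>, local_config: Option<String>) -> Option<String> {
    cli_flag
        .or(local_config)
        .or_else(|| env::var("ZENVRA_API_KEY").ok())
}

fn main() {
    // CLI flag wins over everything else.
    let key = resolve_api_key(Some("from-cli".into()), Some("from-config".into()));
    assert_eq!(key.as_deref(), Some("from-cli"));

    // With no flag, the local config is consulted next.
    let key = resolve_api_key(None, Some("from-config".into()));
    assert_eq!(key.as_deref(), Some("from-config"));
}
```

Note that this inverts the old behavior, where environment variables took priority; the change makes persisted `config set` values authoritative unless explicitly overridden on the command line.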


3. crates/server/src/cve_sync/mod.rs ✨ Enhancement +133/-0

Implement CVE data synchronization from NVD and OSV

• New module implementing CVE synchronization from NVD API v2 and OSV (pending)
• Parses NVD JSON responses and upserts vulnerability records into PostgreSQL
• Supports optional NVD API key for higher rate limits; graceful fallback to rate-limited mode
• Extracts CVSS v3.1 severity scores and vulnerability descriptions for local enrichment

crates/server/src/cve_sync/mod.rs
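The severity extraction mentioned above maps a numeric CVSS v3.1 base score onto the spec's qualitative rating bands (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0). A sketch of that bucketing, with a hypothetical function name:

```rust
// Standard CVSS v3.1 qualitative severity bands from the specification's
// rating scale; the sync module stores a severity label alongside each
// score, and a mapping like this covers the usual buckets.
fn cvss_severity(score: f64) -> &'static str {
    match score {
        s if s <= 0.0 => "NONE",
        s if s < 4.0 => "LOW",
        s if s < 7.0 => "MEDIUM",
        s if s < 9.0 => "HIGH",
        _ => "CRITICAL",
    }
}

fn main() {
    assert_eq!(cvss_severity(0.0), "NONE");
    assert_eq!(cvss_severity(3.9), "LOW");
    assert_eq!(cvss_severity(6.5), "MEDIUM");
    assert_eq!(cvss_severity(8.8), "HIGH");
    assert_eq!(cvss_severity(9.8), "CRITICAL");
}
```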


4. crates/scanner/src/lib.rs ✨ Enhancement +52/-20

Add event streaming support to scanner core

• Added ScanEvent enum export for real-time scan progress streaming
• Introduced scan_stream() function that emits events via UnboundedSender channel
• Refactored scan() to use scan_stream() internally for backward compatibility
• Events include Progress, Finding, Complete, and Error variants

crates/scanner/src/lib.rs
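The scan_stream() shape described above, where the scanner pushes events through a sender while the caller drains the receiver, can be sketched with std channels. The real API uses tokio's UnboundedSender; the event variants mirror the ones listed (Progress, Finding, Complete, Error), but the types and the toy "password" check below are illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-in for the scanner's event type.
#[derive(Debug)]
enum ScanEvent {
    Progress { percent: u8, message: String },
    Finding { rule: String, line: u32 },
    Complete { total: usize },
}

// Toy streaming scan: emit events incrementally as findings are discovered.
fn scan_stream(code: &str, tx: mpsc::Sender<ScanEvent>) {
    let _ = tx.send(ScanEvent::Progress { percent: 0, message: "starting".into() });
    let mut total = 0;
    for (i, line) in code.lines().enumerate() {
        if line.contains("password") {
            total += 1;
            let _ = tx.send(ScanEvent::Finding {
                rule: "hardcoded-secret".into(),
                line: i as u32 + 1,
            });
        }
    }
    let _ = tx.send(ScanEvent::Complete { total });
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let code = "let user = \"admin\";\nlet password = \"hunter2\";".to_string();
    let handle = thread::spawn(move || scan_stream(&code, tx));

    // The receiver observes events incrementally, much like an SSE client.
    let events: Vec<ScanEvent> = rx.iter().collect();
    handle.join().unwrap();
    assert!(matches!(events.last(), Some(ScanEvent::Complete { total: 1 })));
    println!("received {} events", events.len());
}
```

A blocking scan() wrapper then simply drains the channel and collects Finding events, which is how the entry above describes backward compatibility being preserved.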


5. crates/scanner/src/engine.rs ✨ Enhancement +31/-13

Refactor engine orchestrator for streaming progress

• Renamed run() to run_stream() with event channel parameter for progress reporting
• Added Progress events between engine runs showing percentage and status message
• Maintained backward-compatible run() wrapper that discards progress events
• Improved engine orchestration with per-engine progress tracking

crates/scanner/src/engine.rs


6. crates/scanner/src/ai/openai.rs ✨ Enhancement +48/-1

Add model listing for OpenAI-compatible providers

• Added list_models() function to fetch available models from OpenAI-compatible APIs
• Handles both standard OpenAI endpoint and custom /v1 paths
• Parses ModelsResponse and returns sorted list of model IDs

crates/scanner/src/ai/openai.rs


7. crates/scanner/src/ai/anthropic.rs ✨ Enhancement +46/-0

Add model listing for Anthropic API

• Implemented list_models() to fetch Anthropic API models with fallback to static list
• Returns hardcoded model list on client/server errors for graceful degradation
• Includes proper API versioning headers and error handling

crates/scanner/src/ai/anthropic.rs


8. crates/scanner/src/ai/google.rs ✨ Enhancement +44/-0

Add model listing for Google Gemini API

• Added list_models() function for Google Gemini API model discovery
• Strips models/ prefix from returned model names for cleaner display
• Includes error handling and response parsing for Gemini API format

crates/scanner/src/ai/google.rs


9. crates/scanner/src/ai/mod.rs ✨ Enhancement +19/-0

Add unified model listing interface across providers

• Exported new list_models() public function that dispatches to provider-specific implementations
• Supports all four provider kinds: Anthropic, OpenAI, Google, and Custom
• Custom provider delegates to OpenAI-compatible endpoint

crates/scanner/src/ai/mod.rs


10. crates/scanner/src/finding.rs ✨ Enhancement +14/-0

Define ScanEvent enum for streaming scan results

• Added ScanEvent enum with four variants for real-time scan communication
• Uses serde tag/content for clean JSON serialization with type and data fields
• Supports progress updates, finding emissions, completion, and error reporting

crates/scanner/src/finding.rs


11. apps/web/src/routes/scan/+page.svelte ✨ Enhancement +172/-65

Implement SSE-based real-time scan UI with progress tracking

• Refactored scan UI to use SSE streaming instead of direct REST calls
• Added real-time progress indicator with circular and linear progress bars
• Integrated aiConfig store for persistent AI settings across pages
• Enhanced finding display with CVE links, NVD context, and improved code formatting
• Replaced inline AI config inputs with link to Settings page

apps/web/src/routes/scan/+page.svelte


12. apps/web/src/routes/settings/ai/+page.svelte ✨ Enhancement +224/-0

Create AI settings configuration page

• New settings page for configuring AI providers, API keys, and models
• Implements dynamic model fetching with connection verification
• Supports all four provider types with custom endpoint configuration
• Displays active configuration badge and policy information
• Saves configuration to shared aiConfig store and localStorage

apps/web/src/routes/settings/ai/+page.svelte


13. apps/web/src/routes/history/+page.svelte ✨ Enhancement +144/-0

Add scan history page with results overview

• New page displaying scan history with severity distribution visualization
• Shows language, target name, findings count, and creation timestamp
• Includes refresh button and loading/error states
• Displays empty state with link to start new scan

apps/web/src/routes/history/+page.svelte


14. apps/web/src/routes/settings/cve/+page.svelte ✨ Enhancement +97/-0

Create CVE database settings page

• New settings page for CVE database management
• Displays NVD and OSV data feed status cards
• Implements manual sync trigger with loading and status feedback
• Shows sync progress and success/error messages

apps/web/src/routes/settings/cve/+page.svelte


15. apps/web/src/routes/+layout.svelte ✨ Enhancement +3/-3

Enable previously disabled navigation routes

• Enabled navigation links for History, CVE Settings, and AI Settings (previously disabled)
• All three routes now fully functional and accessible from main navigation

apps/web/src/routes/+layout.svelte


16. extensions/vscode/src/sidebarProvider.ts ✨ Enhancement +220/-0

Implement VS Code sidebar webview provider

• New webview provider implementing VS Code sidebar UI for Zenvra scanner
• Displays scan button, real-time progress bar, and findings list
• Handles SSE message streaming and updates sidebar UI reactively
• Includes styling for severity levels and progress visualization

extensions/vscode/src/sidebarProvider.ts


17. extensions/vscode/src/extension.ts ✨ Enhancement +129/-11

Integrate sidebar provider and SSE streaming in VS Code extension

• Refactored to use new SidebarProvider for webview integration
• Updated scanDocument() to call API and stream SSE events
• Implemented updateDiagnostics() to convert findings to VS Code diagnostics
• Added configuration reading for API URL, AI provider, and model settings
• Integrated sidebar messaging for real-time progress and finding updates

extensions/vscode/src/extension.ts


18. apps/web/src/lib/api.ts ✨ Enhancement +48/-1

Add API functions for history, sync, and model listing

• Added ScanHistory interface for scan metadata
• Implemented getHistory() to fetch 50 most recent scans
• Implemented triggerSync() to manually trigger CVE synchronization
• Implemented fetchAiModels() to dynamically list available models per provider
• Fixed BASE_URL to strip trailing slashes

apps/web/src/lib/api.ts


19. apps/web/src/lib/stores/aiConfig.svelte.ts ✨ Enhancement +45/-0

Create reactive AI configuration store

• New singleton store for managing AI configuration state
• Reads from localStorage on initialization for persistence across page reloads
• Provides save() method for atomic updates to both state and storage
• Exports isConfigured derived value for conditional UI rendering

apps/web/src/lib/stores/aiConfig.svelte.ts


20. extensions/vscode/src/types.ts ✨ Enhancement +32/-0

Define TypeScript types for VS Code extension

• New file defining TypeScript interfaces for API communication
• Includes AiConfig, ScanRequest, and Finding types
• Provides type safety for VS Code extension API interactions

extensions/vscode/src/types.ts


21. apps/web/svelte.config.js ⚙️ Configuration changes +1/-1

Switch SvelteKit adapter to Node.js

• Changed adapter from @sveltejs/adapter-auto to @sveltejs/adapter-node
• Enables Node.js server deployment for production environments

apps/web/svelte.config.js


22. extensions/vscode/package.json ⚙️ Configuration changes +58/-7

Update VS Code extension manifest with sidebar and config

• Bumped version from 0.1.1-rc.1 to 0.1.1-rc.2
• Removed onStartupFinished activation event; now lazy-loads on command
• Added sidebar view container and webview configuration
• Added new configuration properties for API URL, AI provider, model, and endpoint
• Categorized commands under "Zenvra" category

extensions/vscode/package.json


23. .github/workflows/release.yml ⚙️ Configuration changes +57/-0

Add Docker image publishing to release workflow

• Added docker-publish job for building and pushing container images
• Publishes both zenvra-api and zenvra-web images to GitHub Container Registry
• Uses Docker metadata action for semantic versioning and latest tags
• Implements layer caching for faster builds

.github/workflows/release.yml


24. docker-compose.yml ⚙️ Configuration changes +32/-1

Add API and web services to Docker Compose

• Changed PostgreSQL port from 5432 to 5433 to avoid conflicts
• Added zenvra-api service with database and Redis dependencies
• Added zenvra-web service with API URL configuration
• Both services use pre-built images from GitHub Container Registry

docker-compose.yml


25. ISSUE_DRAFT.md 📝 Documentation +31/-0

Add issue template for CVE sync feature

• New issue template for CVE synchronization feature request
• Documents problem statement, proposed solution, and implementation details
• References NVD API v2 and OSV as data sources
• Outlines database schema and API integration requirements

ISSUE_DRAFT.md


26. migrations/202604040002_create_scans_table.sql ⚙️ Configuration changes +32/-0

Create database schema for scan history

• New migration creating scans table for scan metadata storage
• Creates scan_results table for individual findings with foreign key to scans
• Includes indexes for fast lookup by scan ID and creation timestamp
• Stores severity counts as JSONB for flexible aggregation

migrations/202604040002_create_scans_table.sql


27. migrations/202604040001_create_vulnerabilities_table.sql ⚙️ Configuration changes +24/-0

Create database schema for vulnerability data

• New migration creating vulnerabilities table for CVE/OSV data
• Enables PostgreSQL pg_trgm extension for full-text search
• Stores CVE ID, severity, title, description, and data source
• Includes indexes for fast lookup by CVE ID, severity, and title search

migrations/202604040001_create_vulnerabilities_table.sql


28. crates/server/Cargo.toml Dependencies +9/-2

Update server dependencies for database and CLI support

• Added dependencies: sqlx, chrono, dotenvy, clap, reqwest, dashmap, tokio-stream
• Downgraded axum from 0.8 to 0.7 and tower-http from 0.6 to 0.5 for compatibility
• Enables SQLx features for PostgreSQL, macros, and UUID support

crates/server/Cargo.toml


29. apps/web/package.json Dependencies +2/-1

Update web package version and add Node adapter

• Bumped version from 0.1.1-rc.1 to 0.1.1-rc.2
• Added @sveltejs/adapter-node as dev dependency for Node.js deployment

apps/web/package.json


30. crates/server/Dockerfile ⚙️ Configuration changes +24/-0

Add Dockerfile for API server containerization

• New multi-stage Dockerfile for building and running Zenvra API server
• Uses cargo-chef for dependency caching in build stage
• Final stage based on Debian slim with minimal runtime dependencies
• Exposes port 8080 and runs zenvra-server binary

crates/server/Dockerfile


31. apps/web/pnpm-lock.yaml Dependencies +158/-0

Add Node.js adapter and Rollup plugin dependencies

• Added @sveltejs/adapter-node v5.5.4 dependency for Node.js runtime support
• Added Rollup plugin dependencies: @rollup/plugin-commonjs, @rollup/plugin-json, and @rollup/plugin-node-resolve
• Added utility dependencies: resolve, is-core-module, is-module, is-reference, commondir, estree-walker, path-parse, hasown, function-bind, and @types/resolve
• Updated lock file with resolved versions and dependency trees for all new packages

apps/web/pnpm-lock.yaml


32. apps/web/Dockerfile ⚙️ Configuration changes +28/-0

Add Docker containerization for SvelteKit web app

• Created multi-stage Docker build configuration for the SvelteKit web application
• Stage 1 (builder): Installs dependencies and builds the application using pnpm
• Stage 2 (runtime): Copies built artifacts and runs the application on port 3000
• Sets NODE_ENV=production and exposes port 3000 for the web service

apps/web/Dockerfile


33. Cargo.toml ⚙️ Configuration changes +1/-1

Update Rust workspace version to rc.2

• Bumped workspace version from 0.1.1-rc.1 to 0.1.1-rc.2

Cargo.toml


34. extensions/vscode/zenvra-0.1.1.vsix Additional files +0/-0

...

extensions/vscode/zenvra-0.1.1.vsix




qodo-code-review bot commented Apr 6, 2026

Code Review by Qodo

🐞 Bugs (4) · 📘 Rule violations (6) · 📎 Requirement gaps (0) · 🎨 UX Issues (0)
Breakdown: ≡ Correctness (4) · ⛨ Security (4) · ⚙ Maintainability (2)



Action required

1. DATABASE_URL uses expect() 📘
Description
The API server reads DATABASE_URL with .expect(...), which can panic and crash the service in
production. This violates the requirement to avoid unwrap()/expect() in Rust API code and use
structured error handling instead.
Code

crates/server/src/main.rs[68]

+    let db_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
Evidence
Compliance ID 2 forbids .unwrap()/.expect() in Rust library/API code;
crates/server/src/main.rs introduces .expect("DATABASE_URL must be set") in the server startup
path.

AGENTS.md
crates/server/src/main.rs[68-68]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`DATABASE_URL` is read via `.expect(...)` in the API server, which can panic.
## Issue Context
This is production API code; compliance requires structured error propagation instead of panics.
## Fix Focus Areas
- crates/server/src/main.rs[68-72]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
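The panic-free alternative the review asks for is to propagate a descriptive error instead of calling `.expect()`. A minimal std-only sketch; the real server would likely return `anyhow::Result` or a custom error type, and the function name and example URL here are hypothetical:

```rust
use std::env;

// Read DATABASE_URL without panicking: on failure, return an error the
// startup path can log before exiting cleanly. String is used as the error
// type only to keep the example dependency-free.
fn database_url() -> Result<String, String> {
    env::var("DATABASE_URL").map_err(|_| {
        "DATABASE_URL must be set (e.g. postgres://user:pass@localhost:5433/zenvra)".to_string()
    })
}

fn main() {
    match database_url() {
        Ok(url) => println!("connecting to {url}"),
        // Startup can now report the misconfiguration instead of panicking.
        Err(e) => eprintln!("configuration error: {e}"),
    }
}
```

The same shape applies to any required configuration value: errors bubble up to one place in `main`, where they become a log line and a non-zero exit rather than a crash with a backtrace.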


2. catch (err: any) added 📘
Description
New TypeScript/Svelte code introduces any in error handlers, which weakens strict typing and can
mask type errors. This violates the strict-mode requirement of no new any usage.
Code

apps/web/src/routes/history/+page.svelte[R14-17]

+    } catch (err: any) {
+      console.error("Failed to load history", err);
+      error = err.message || "An unexpected error occurred while loading your history.";
+    } finally {
Evidence
Compliance ID 7 forbids introducing any; the PR adds catch (err: any) in multiple places
including the history page and settings pages, and also introduces as any casting in the VS Code
extension.

AGENTS.md
apps/web/src/routes/history/+page.svelte[14-17]
apps/web/src/routes/settings/ai/+page.svelte[37-39]
extensions/vscode/src/extension.ts[98-99]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
TypeScript code introduces `any` (and `as any`) which breaks strict typing.
## Issue Context
Use `unknown` in catches and narrow with `instanceof Error` (or typed error shapes) before reading `.message`.
## Fix Focus Areas
- apps/web/src/routes/history/+page.svelte[14-17]
- apps/web/src/routes/settings/ai/+page.svelte[37-39]
- apps/web/src/routes/settings/cve/+page.svelte[17-20]
- extensions/vscode/src/extension.ts[98-99]



3. API key stored in localStorage 📘
Description
The web app persists zenvra_ai_api_key in localStorage, placing a secret in browser-accessible
storage. This violates the rule prohibiting secrets/API keys in client-side code paths.
Code

apps/web/src/lib/stores/aiConfig.svelte.ts[R12-31]

+  let provider = $state(browser ? (localStorage.getItem('zenvra_ai_provider') ?? 'anthropic') : 'anthropic');
+  let model    = $state(browser ? (localStorage.getItem('zenvra_ai_model')    ?? '') : '');
+  let apiKey   = $state(browser ? (localStorage.getItem('zenvra_ai_api_key')  ?? '') : '');
+  let endpoint = $state(browser ? (localStorage.getItem('zenvra_ai_endpoint') ?? '') : '');
+
+  const isConfigured = $derived(!!model && !!apiKey);
+
+  /** Persist changes to both reactive state and localStorage atomically. */
+  function save(p: string, m: string, key: string, ep: string): void {
+    provider = p;
+    model    = m;
+    apiKey   = key;
+    endpoint = ep;
+
+    if (browser) {
+      localStorage.setItem('zenvra_ai_provider', p);
+      localStorage.setItem('zenvra_ai_model',    m);
+      localStorage.setItem('zenvra_ai_api_key',  key);
+      if (ep) localStorage.setItem('zenvra_ai_endpoint', ep);
+      else    localStorage.removeItem('zenvra_ai_endpoint');
Evidence
Compliance ID 10 requires no secrets/API keys in client-side code; the new store reads and writes an
AI API key directly to localStorage (zenvra_ai_api_key).

AGENTS.md
apps/web/src/lib/stores/aiConfig.svelte.ts[12-31]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The frontend stores the user's AI API key in `localStorage`, which is client-side accessible.
## Issue Context
Compliance requires secrets not be present in browser-delivered code or storage.
## Fix Focus Areas
- apps/web/src/lib/stores/aiConfig.svelte.ts[12-31]
- apps/web/src/routes/settings/ai/+page.svelte[6-55]
- apps/web/src/routes/scan/+page.svelte[34-46]



4. scan/+page.svelte uses raw fetch 📘
Description
The scan page introduces direct fetch(...) and EventSource(...) calls instead of routing
requests through the typed API client in apps/web/src/lib/api.ts. This breaks the frontend
networking standard and bypasses centralized typing/error handling.
Code

apps/web/src/routes/scan/+page.svelte[R34-55]

+      const response = await fetch("http://localhost:8080/api/v1/scan", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify({
+          code,
+          language: "python",
+          engines: ["sast", "secrets"],
+          ai_config: hasAiConfig ? {
+            provider: aiConfig.provider,
+            api_key:  aiConfig.apiKey,
+            model:    aiConfig.model,
+            endpoint: aiConfig.endpoint || undefined,
+          } : undefined
+        })
+      });
+
+      if (!response.ok) throw new Error("Failed to start scan");
+      const { scan_id } = await response.json();
+
+      // Step 2: Subscribe to the SSE stream
+      const eventSource = new EventSource(`http://localhost:8080/api/v1/scan/${scan_id}/events`);
+
Evidence
Compliance ID 11 requires frontend requests to use the typed API client; the scan page directly
calls fetch("http://localhost:8080/api/v1/scan") and creates an EventSource to a hardcoded URL.

AGENTS.md
apps/web/src/routes/scan/+page.svelte[34-55]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The scan page uses ad-hoc `fetch` and hardcoded URLs instead of the typed API client.
## Issue Context
Typed API client should centralize base URL handling, typing, and consistent errors.
## Fix Focus Areas
- apps/web/src/routes/scan/+page.svelte[34-55]
- apps/web/src/lib/api.ts[38-96]



5. Forms lack Zod validation 📘
Description
The new AI settings form performs only manual string checks and does not use zod schemas for input
validation. This violates the requirement for schema-based validation at form/API trust boundaries.
Code

apps/web/src/routes/settings/ai/+page.svelte[R44-55]

+  const handleSave = () => {
+    if (!selectedModel.trim()) { error = "Model name is required."; return; }
+    if (!apiKey.trim())        { error = "API key is required.";    return; }
+    error = null;
+
+    // Persist via the store — writes localStorage atomically
+    aiConfig.save(provider, selectedModel.trim(), apiKey.trim(), endpoint.trim());
+
+    saveSuccess = true;
+    clearTimeout(successTimer);
+    successTimer = setTimeout(() => { saveSuccess = false; }, 3500);
+  };
Evidence
Compliance ID 13 requires Zod validation; the form handler validates via ad-hoc trim() checks and
then persists values without any zod schema parsing.

AGENTS.md
apps/web/src/routes/settings/ai/+page.svelte[44-55]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
AI settings inputs are not validated with Zod; only ad-hoc checks are used.
## Issue Context
Zod schemas should validate provider/model/key/endpoint (including URL validation for custom endpoints) before saving.
## Fix Focus Areas
- apps/web/src/routes/settings/ai/+page.svelte[44-55]
- apps/web/src/lib/stores/aiConfig.svelte.ts[19-33]



6. list_models() calls AI APIs 📘
Description
The scanner introduces provider API calls to list models, which is outside the allowed AI usage
scope (explanations and fix generation only). This expands cost/surface area beyond the product
policy.
Code

crates/scanner/src/ai/mod.rs[R123-140]

+/// List available models for a given provider and API key.
+///
+/// This provides the "sophisticated" dynamic loading requested by the user.
+pub async fn list_models(
+    provider: ProviderKind,
+    api_key: &str,
+    endpoint: Option<&str>,
+) -> Result<Vec<String>> {
+    match provider {
+        ProviderKind::Anthropic => anthropic::list_models(api_key, endpoint).await,
+        ProviderKind::OpenAi => openai::list_models(api_key, endpoint).await,
+        ProviderKind::Google => google::list_models(api_key, endpoint).await,
+        ProviderKind::Custom => {
+            let ep = endpoint.ok_or_else(|| anyhow::anyhow!("Custom provider requires an endpoint"))?;
+            openai::list_models(api_key, Some(ep)).await
+        }
+    }
+}
Evidence
Compliance ID 17 restricts AI provider calls to explanation/fix generation; the new list_models
API triggers outbound calls to Anthropic/OpenAI/Google model listing endpoints.

AGENTS.md
crates/scanner/src/ai/mod.rs[123-140]
crates/scanner/src/ai/openai.rs[73-101]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
AI provider API calls are added for model listing, which is not allowed by policy.
## Issue Context
Only explanation and fix generation calls are permitted.
## Fix Focus Areas
- crates/scanner/src/ai/mod.rs[123-140]
- crates/scanner/src/ai/openai.rs[73-101]
- crates/scanner/src/ai/google.rs[84-116]
- crates/scanner/src/ai/anthropic.rs[64-98]
- crates/server/src/main.rs[343-375]
- apps/web/src/lib/api.ts[78-96]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


7. vulnerable_code persisted to DB 📘
Description
The server persists finding.vulnerable_code (raw submitted code snippets) to PostgreSQL in
scan_results, which increases retention of submitted source beyond what is necessary. This
violates the requirement to avoid storing raw submitted code longer than needed.
Code

crates/server/src/main.rs[R201-218]

+                    // Persist individual finding
+                    let _ = sqlx::query(
+                        "INSERT INTO scan_results (scan_id, engine, cve_id, cwe_id, severity, title, description, vulnerable_code, fixed_code, line_start, line_end, file_path)
+                         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)"
+                    )
+                    .bind(scan_id)
+                    .bind(format!("{:?}", finding.engine))
+                    .bind(&finding.cve_id)
+                    .bind(&finding.cwe_id)
+                    .bind(finding.severity.to_string())
+                    .bind(&finding.title)
+                    .bind(&finding.description)
+                    .bind(&finding.vulnerable_code)
+                    .bind(&finding.fixed_code)
+                    .bind(finding.line_start as i32)
+                    .bind(finding.line_end as i32)
+                    .bind(&finding.file_path)
+                    .execute(&state_task.db)
Evidence
Compliance ID 18 prohibits long-term persistence of raw submitted code; the PR adds a
scan_results.vulnerable_code column and inserts finding.vulnerable_code into it during scan
processing.

AGENTS.md
crates/server/src/main.rs[201-218]
migrations/202604040002_create_scans_table.sql[11-27]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The server stores `vulnerable_code` (and `fixed_code`) in the database, which can retain sensitive submitted source.
## Issue Context
Persist only minimal scan results/metadata; avoid storing source snippets, or store only redacted/hashed excerpts with strict retention.
## Fix Focus Areas
- crates/server/src/main.rs[201-218]
- migrations/202604040002_create_scans_table.sql[11-27]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
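One hedged option for the "redacted/hashed excerpts" approach, sketched with only the standard library (the `redact_snippet` helper and truncation length are illustrative; a real implementation would likely use a proper digest such as SHA-256 rather than `DefaultHasher`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Reduce a raw snippet to a short excerpt plus a fingerprint,
/// so the database never retains the full submitted source.
fn redact_snippet(code: &str, max_len: usize) -> (String, u64) {
    // Fingerprint lets findings be correlated without storing the code.
    let mut hasher = DefaultHasher::new();
    code.hash(&mut hasher);
    let fingerprint = hasher.finish();

    // Keep only the first line, truncated, as a human-readable hint.
    let first_line = code.lines().next().unwrap_or("");
    let excerpt: String = first_line.chars().take(max_len).collect();
    (excerpt, fingerprint)
}
```

The insert at main.rs[201-218] would then bind the excerpt/fingerprint pair instead of `finding.vulnerable_code`.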


8. Hardcoded scan API URL 🐞
Description
The web Scan page posts to and opens SSE streams using hardcoded http://localhost:8080, bypassing
PUBLIC_API_URL and breaking non-local deployments (Docker/K8s/prod domains).
Code

apps/web/src/routes/scan/+page.svelte[R34-55]

+      const response = await fetch("http://localhost:8080/api/v1/scan", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify({
+          code,
+          language: "python",
+          engines: ["sast", "secrets"],
+          ai_config: hasAiConfig ? {
+            provider: aiConfig.provider,
+            api_key:  aiConfig.apiKey,
+            model:    aiConfig.model,
+            endpoint: aiConfig.endpoint || undefined,
+          } : undefined
+        })
});
+
+      if (!response.ok) throw new Error("Failed to start scan");
+      const { scan_id } = await response.json();
+
+      // Step 2: Subscribe to the SSE stream
+      const eventSource = new EventSource(`http://localhost:8080/api/v1/scan/${scan_id}/events`);
+
Evidence
+page.svelte hardcodes localhost for both the scan POST and SSE subscribe, while the shared API
client already derives BASE_URL from import.meta.env.PUBLIC_API_URL for correct
environment-based routing.

apps/web/src/routes/scan/+page.svelte[34-55]
apps/web/src/lib/api.ts[6-8]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The scan UI bypasses environment configuration by hardcoding `http://localhost:8080` for both starting scans and subscribing to SSE events.
### Issue Context
The repo already defines an environment-driven `BASE_URL` in `$lib/api.ts`.
### Fix Focus Areas
- apps/web/src/routes/scan/+page.svelte[34-55]
- apps/web/src/lib/api.ts[6-8]
### Implementation notes
- Replace hardcoded URLs with the same `BASE_URL` approach used in `$lib/api.ts` (either import `BASE_URL`/helper functions or route all calls through `$lib/api`).
- Ensure SSE URL is built from the configured base URL (and is normalized to avoid double slashes).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
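The normalization the notes describe can be sketched as a tiny helper (the `apiUrl` name is illustrative; in the app, `base` would be the `BASE_URL` that `$lib/api.ts` derives from `import.meta.env.PUBLIC_API_URL`):

```typescript
// Join an environment-derived base URL and a path without double slashes.
export function apiUrl(base: string, path: string): string {
  return `${base.replace(/\/+$/, "")}/${path.replace(/^\/+/, "")}`;
}

// Usage (sketch): fetch(apiUrl(BASE_URL, "/api/v1/scan"), { ... }) and
// new EventSource(apiUrl(BASE_URL, `/api/v1/scan/${scanId}/events`))
```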


9. Undefined AI badge variables 🐞
Description
The Scan page references {provider} and {model} in the template but these variables are not
declared in the script block, causing a Svelte compile-time error.
Code

apps/web/src/routes/scan/+page.svelte[R113-123]

+      {#if hasAiConfig}
+        <a 
+          href="/settings/ai"
+          class="flex items-center gap-2.5 px-4 py-2 glass rounded-xl border-zinc-800 hover:border-zinc-700 transition-all group"
+          title="AI config loaded from Settings — click to change"
+        >
+          <div class="w-1.5 h-1.5 rounded-full bg-emerald-400 shadow-[0_0_6px_rgba(52,211,153,0.7)]"></div>
+          <span class="text-xs font-bold text-zinc-400 group-hover:text-zinc-200 transition-colors">{provider}</span>
+          <span class="text-zinc-700">/</span>
+          <span class="text-xs font-mono text-zinc-500 group-hover:text-zinc-300 transition-colors max-w-[180px] truncate">{model}</span>
+          <svg class="w-3 h-3 text-zinc-600 group-hover:text-zinc-400 transition-colors ml-1" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2.5"><path d="M11 4H4a2 2 0 0 0-2 2v14a2 2 0 0 0 2 2h14a2 2 0 0 0 2-2v-7"/><path d="M18.5 2.5a2.121 2.121 0 0 1 3 3L12 15l-4 1 1-4Z"/></svg>
Evidence
The template renders {provider} and {model} but the module only defines hasAiConfig derived
from aiConfig.isConfigured; there are no provider or model bindings declared anywhere in the
script block.

apps/web/src/routes/scan/+page.svelte[1-24]
apps/web/src/routes/scan/+page.svelte[113-123]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The scan page UI uses `{provider}` and `{model}` identifiers that are not declared, which will fail the Svelte build.
### Issue Context
The intended values are available via the shared `aiConfig` store.
### Fix Focus Areas
- apps/web/src/routes/scan/+page.svelte[1-24]
- apps/web/src/routes/scan/+page.svelte[113-123]
### Implementation notes
- Replace `{provider}` with `{aiConfig.provider}` and `{model}` with `{aiConfig.model}` (or declare `const provider = $derived(aiConfig.provider)` / `const model = $derived(aiConfig.model)` and keep template unchanged).
- Remove unused imports (e.g., `onMount`, `scan`) if no longer used to keep the module clean.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


10. Web compose build fails 🐞
Description
docker-compose.yml builds zenvra-web with context . but apps/web/Dockerfile expects
package.json at the context root, so the Docker build will fail to find it.
Code

docker-compose.yml[R47-52]

+  zenvra-web:
+    image: ghcr.io/cameroon-developer-network/zenvra-web:v0.1.1-rc.1
+    build:
+      context: .
+      dockerfile: apps/web/Dockerfile
+    ports:
Evidence
Compose sets build context to repository root, but the Dockerfile copies package.json relative to
the build context root; the web app package.json is under apps/web/, not at repo root.

docker-compose.yml[47-52]
apps/web/Dockerfile[6-13]
apps/web/package.json[1-4]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`docker-compose` builds the web image from the wrong context, causing `COPY package.json ...` to fail.
### Issue Context
The web Dockerfile assumes the build context is `apps/web`.
### Fix Focus Areas
- docker-compose.yml[47-52]
- apps/web/Dockerfile[6-13]
### Implementation notes
- Change `zenvra-web.build.context` to `./apps/web`.
- Optionally also change `zenvra-web.build.dockerfile` to `Dockerfile` (since the context becomes `apps/web`) to reduce confusion.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
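The two implementation notes combine into a compose fragment along these lines (the image tag is copied from the PR's existing service definition):

```yaml
  zenvra-web:
    image: ghcr.io/cameroon-developer-network/zenvra-web:v0.1.1-rc.1
    build:
      context: ./apps/web
      dockerfile: Dockerfile
```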


11. Missing pgcrypto extension 🐞
Description
Migrations use gen_random_uuid() defaults but do not enable pgcrypto, so table creation can fail
with "function gen_random_uuid() does not exist" (the function is built into PostgreSQL 13+, but
older servers only provide it via the pgcrypto extension).
Code

migrations/202604040001_create_vulnerabilities_table.sql[R1-7]

+-- Enable the trgm extension for fast text search
+CREATE EXTENSION IF NOT EXISTS pg_trgm;
+
+-- Create vulnerabilities table for storing CVE and OSV data
+CREATE TABLE IF NOT EXISTS vulnerabilities (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    cve_id VARCHAR(50) UNIQUE,
Evidence
Both migrations define id UUID ... DEFAULT gen_random_uuid() but only enable pg_trgm;
gen_random_uuid() is provided by the pgcrypto extension.

migrations/202604040001_create_vulnerabilities_table.sql[1-7]
migrations/202604040002_create_scans_table.sql[1-16]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Database migrations use `gen_random_uuid()` but do not enable the extension that provides it.
### Issue Context
Local Postgres images commonly do not have `pgcrypto` enabled by default.
### Fix Focus Areas
- migrations/202604040001_create_vulnerabilities_table.sql[1-7]
- migrations/202604040002_create_scans_table.sql[1-16]
### Implementation notes
- Add `CREATE EXTENSION IF NOT EXISTS pgcrypto;` before any `gen_random_uuid()` usage (ideally in the first migration).
- Alternatively replace with `uuid_generate_v4()` and enable `uuid-ossp`, but keep it consistent across all tables.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
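A sketch of the first migration's header after the fix; only the pgcrypto line is new, and the pg_trgm line already exists in the PR:

```sql
-- gen_random_uuid() lives in pgcrypto on PostgreSQL 12 and earlier.
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Enable the trgm extension for fast text search
CREATE EXTENSION IF NOT EXISTS pg_trgm;
```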


12. Extension never activates 🐞
Description
The VS Code extension sets activationEvents to an empty array, so the extension will not activate
and none of the commands/sidebar will work.
Code

extensions/vscode/package.json[R26-29]

"icon": "assets/icon.png",
-  "activationEvents": [
-    "onStartupFinished"
-  ],
+  "activationEvents": [],
"main": "./out/extension.js",
"contributes": {
Evidence
With no activation events, VS Code versions before 1.74 will never load the extension entrypoint
(main) in normal use, so command registrations in activate() are never executed. (VS Code 1.74+
infers activation events from contributes declarations, so the declared engine range should be
verified as part of the fix.)

extensions/vscode/package.json[26-29]
extensions/vscode/src/extension.ts[9-23]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The extension will never activate due to an empty `activationEvents` list.
### Issue Context
Commands and the sidebar provider are registered only during `activate()`.
### Fix Focus Areas
- extensions/vscode/package.json[26-29]
- extensions/vscode/src/extension.ts[9-23]
### Implementation notes
- Add activation events such as:
  - `onStartupFinished`
  - `onView:zenvraMain`
  - `onCommand:zenvra.scanFile`, `onCommand:zenvra.scanWorkspace`, `onCommand:zenvra.setApiToken`
- Prefer command-based activation to minimize startup overhead.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
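The activation events suggested in the notes translate to a `package.json` fragment along these lines (the command and view IDs are taken from the fix focus areas and must match the extension's actual `contributes` section):

```json
"activationEvents": [
  "onStartupFinished",
  "onView:zenvraMain",
  "onCommand:zenvra.scanFile",
  "onCommand:zenvra.scanWorkspace",
  "onCommand:zenvra.setApiToken"
]
```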


13. Broadcasts pre-enrichment findings 🐞
Description
The server broadcasts ScanEvent::Finding to SSE subscribers before enriching the finding from the
vulnerabilities DB, so clients and cached replays never see the enriched title/description.
Code

crates/server/src/main.rs[R176-199]

+        while let Some(event) = scan_rx.recv().await {
+            // Cache event for late subscribers
+            all_events.push(event.clone());
+
+            // Broadcast to any connected SSE subscribers
+            let _ = tx.send(event.clone());
+
+            // Process specific events for DB persistence
+            match event {
+                ScanEvent::Finding(mut finding) => {
+                    let sev_str = finding.severity.to_string().to_lowercase();
+                    *severity_counts.entry(sev_str).or_insert(0) += 1;
+
+                    // Enrich from local DB
+                    if let Some(cve_id) = &finding.cve_id {
+                        if let Ok(Some(row)) = sqlx::query("SELECT title, description FROM vulnerabilities WHERE cve_id = $1")
+                            .bind(cve_id)
+                            .fetch_optional(&state_task.db)
+                            .await 
+                        {
+                            use sqlx::Row;
+                            finding.title = row.get("title");
+                        }
+                    }
Evidence
The scan loop sends tx.send(event.clone()) before the match that enriches finding.title from
the database, and the broadcast uses the unmodified event value; the enriched finding is only
stored/persisted later.

crates/server/src/main.rs[176-199]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
SSE consumers receive stale/un-enriched findings because the broadcast happens before DB enrichment.
### Issue Context
The server enriches the finding title (and selects description) from `vulnerabilities`, but doesn’t broadcast the updated value.
### Fix Focus Areas
- crates/server/src/main.rs[176-222]
### Implementation notes
- Move broadcasting to after enrichment for `ScanEvent::Finding`, e.g.:
  - On `Finding(f)`: enrich `f`, then send `ScanEvent::Finding(f.clone())`.
  - For non-finding events, keep current broadcast behavior.
- If you want cached replay to reflect enrichment, cache the enriched event, not the original.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
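The reordering in the notes can be sketched with plain std channels (the types below are simplified stand-ins for the server's real `ScanEvent` and tokio broadcast setup, and `enrich_title` stands in for the vulnerabilities-table lookup):

```rust
use std::sync::mpsc;

#[derive(Clone, Debug, PartialEq)]
enum ScanEvent {
    Finding { title: String },
    Complete,
}

// Stand-in for the DB lookup against the vulnerabilities table.
fn enrich_title(title: &str) -> String {
    format!("{title} (enriched)")
}

// Enrich findings BEFORE broadcasting and caching, so live SSE
// subscribers and late-replay consumers both see the enriched value.
fn process_events(events: Vec<ScanEvent>, tx: &mpsc::Sender<ScanEvent>) -> Vec<ScanEvent> {
    let mut cache = Vec::new();
    for event in events {
        let outgoing = match event {
            ScanEvent::Finding { title } => ScanEvent::Finding { title: enrich_title(&title) },
            other => other,
        };
        cache.push(outgoing.clone());
        // Ignore send errors from dropped receivers, as broadcast::send does.
        let _ = tx.send(outgoing);
    }
    cache
}
```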


14. DB write errors ignored 🐞
Description
The server ignores SQL errors when inserting scan_results and scans, causing silent data loss
and incorrect history counts if the DB write fails.
Code

crates/server/src/main.rs[R201-236]

+                    // Persist individual finding
+                    let _ = sqlx::query(
+                        "INSERT INTO scan_results (scan_id, engine, cve_id, cwe_id, severity, title, description, vulnerable_code, fixed_code, line_start, line_end, file_path)
+                         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)"
+                    )
+                    .bind(scan_id)
+                    .bind(format!("{:?}", finding.engine))
+                    .bind(&finding.cve_id)
+                    .bind(&finding.cwe_id)
+                    .bind(finding.severity.to_string())
+                    .bind(&finding.title)
+                    .bind(&finding.description)
+                    .bind(&finding.vulnerable_code)
+                    .bind(&finding.fixed_code)
+                    .bind(finding.line_start as i32)
+                    .bind(finding.line_end as i32)
+                    .bind(&finding.file_path)
+                    .execute(&state_task.db)
+                    .await;
+                    
+                    findings.push(finding);
+                }
+                ScanEvent::Complete => {
+                    // Finalize scan record
+                    let _ = sqlx::query(
+                        "INSERT INTO scans (id, language, target_name, findings_count, severity_counts) 
+                         VALUES ($1, $2, $3, $4, $5) 
+                         ON CONFLICT (id) DO UPDATE SET findings_count = $4, severity_counts = $5"
+                    )
+                    .bind(scan_id)
+                    .bind(payload_lang)
+                    .bind("Manual Scan")
+                    .bind(findings.len() as i32)
+                    .bind(serde_json::to_value(&severity_counts).unwrap_or_default())
+                    .execute(&state_task.db)
+                    .await;
Evidence
Database writes are executed and awaited, but their Result is discarded (let _ = ...await), so
failures are neither logged nor propagated.

crates/server/src/main.rs[201-236]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
DB insert/update failures are suppressed, leading to silent loss of scan history and scan results.
### Issue Context
These queries are the only persistence path for `/api/v1/history`.
### Fix Focus Areas
- crates/server/src/main.rs[201-236]
### Implementation notes
- Replace `let _ = ...await;` with explicit error handling:
  - `if let Err(e) = ... { tracing::error!(...); /* optional: notify SSE */ }`
- Consider using a transaction for per-scan persistence so `scans` and `scan_results` are consistent.
- If persistence fails, send a `ScanEvent::Error` so clients aren’t misled.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
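The shape of the handling can be sketched as follows (a minimal stand-in: `insert_finding` plays the role of sqlx's Result-returning `execute()`, and `eprintln!` stands in for `tracing::error!`):

```rust
// Stand-in for a fallible sqlx execute() call.
fn insert_finding(fail: bool) -> Result<(), String> {
    if fail { Err("connection reset".to_string()) } else { Ok(()) }
}

/// Log instead of discarding the error, and report whether the write landed
/// so the caller can emit a ScanEvent::Error on failure.
fn persist_finding(fail: bool) -> bool {
    match insert_finding(fail) {
        Ok(()) => true,
        Err(e) => {
            // In the server this would be tracing::error! plus an SSE error event.
            eprintln!("failed to persist scan result: {e}");
            false
        }
    }
}
```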


15. CLI prints secret value 🐞
Description
zenvra-cli config set prints the configured value to stdout, which can leak API keys into terminal
logs and shell history.
Code

crates/cli/src/main.rs[R166-176]

+        ConfigAction::Set { key, value } => {
+            match key.to_lowercase().as_str() {
+                "ai_provider" => config.ai_provider = Some(value.clone()),
+                "ai_key" | "ai_api_key" => config.ai_api_key = Some(value.clone()),
+                "ai_model" => config.ai_model = Some(value.clone()),
+                "ai_endpoint" => config.ai_endpoint = Some(value.clone()),
+                _ => anyhow::bail!("Unknown config key: {}. Valid: ai_provider, ai_key, ai_model, ai_endpoint", key),
+            }
+            config.save()?;
+            println!("✅ Config updated: {} set to {}", key, value);
+        }
Evidence
The config command prints key and value after saving; when setting ai_key/ai_api_key, value
is a secret and is echoed verbatim.

crates/cli/src/main.rs[166-176]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The CLI prints secrets back to the console after configuration updates.
### Issue Context
Users will often paste API keys into `config set`, and terminals/shells commonly persist output.
### Fix Focus Areas
- crates/cli/src/main.rs[166-176]
### Implementation notes
- Print only the key name (and maybe a masked value) for secret keys, e.g. `ai_key` -> `****last4`.
- Alternatively, always print a generic success message: `Config updated: {key}`.
- Consider warning users when writing secrets to disk and/or setting restrictive file permissions on Unix.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
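A sketch of the masking helper the notes suggest (`mask_secret` is an illustrative name, not from the repo):

```rust
/// Mask a secret for display, keeping at most the last four characters.
fn mask_secret(value: &str) -> String {
    let chars: Vec<char> = value.chars().collect();
    if chars.len() <= 4 {
        // Too short to safely reveal a suffix; mask everything.
        return "****".to_string();
    }
    let tail: String = chars[chars.len() - 4..].iter().collect();
    format!("****{tail}")
}
```

The `ConfigAction::Set` arm could then print `mask_secret(&value)` for `ai_key`/`ai_api_key` and the raw value only for non-secret keys.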



Remediation recommended

16. settings/ai exceeds 200 lines 📘
Description
The new AI settings Svelte component is 224 lines long, exceeding the 200-line limit. This increases
complexity and makes the UI harder to maintain.
Code

apps/web/src/routes/settings/ai/+page.svelte[R1-224]

+<script lang="ts">
+  import { fetchAiModels } from "$lib/api";
+  import { aiConfig } from "$lib/stores/aiConfig.svelte";
+
+  // Local editing state — pre-populated from the shared store
+  let provider       = $state(aiConfig.provider);
+  let selectedModel  = $state(aiConfig.model);
+  let apiKey         = $state(aiConfig.apiKey);
+  let endpoint       = $state(aiConfig.endpoint);
+
+  let availableModels = $state<string[]>(aiConfig.model ? [aiConfig.model] : []);
+  let isLoadingModels = $state(false);
+  let error           = $state<string | null>(null);
+  let saveSuccess     = $state(false);
+  let successTimer: ReturnType<typeof setTimeout>;
+
+  // Reflects what's currently committed in the store
+  let savedConfig = $derived(
+    aiConfig.isConfigured ? { provider: aiConfig.provider, model: aiConfig.model } : null
+  );
+
+  const providers = [
+    { id: "anthropic", name: "Anthropic",      color: "bg-[#7c3aed]", icon: "A" },
+    { id: "openai",    name: "OpenAI",          color: "bg-[#10a37f]", icon: "O" },
+    { id: "google",    name: "Google Gemini",   color: "bg-[#4285f4]", icon: "G" },
+    { id: "custom",    name: "Custom Provider", color: "bg-zinc-600",  icon: "C" }
+  ];
+
+  const fetchModels = async () => {
+    if (!apiKey) { error = "Please provide an API key first."; return; }
+    isLoadingModels = true;
+    error = null;
+    availableModels = [];
+    try {
+      availableModels = await fetchAiModels(provider, apiKey, endpoint);
+      if (availableModels.length > 0) selectedModel = availableModels[0];
+    } catch (err: any) {
+      error = err.message || "Failed to fetch models. Check your API key and connection.";
+    } finally {
+      isLoadingModels = false;
+    }
+  };
+
+  const handleSave = () => {
+    if (!selectedModel.trim()) { error = "Model name is required."; return; }
+    if (!apiKey.trim())        { error = "API key is required.";    return; }
+    error = null;
+
+    // Persist via the store — writes localStorage atomically
+    aiConfig.save(provider, selectedModel.trim(), apiKey.trim(), endpoint.trim());
+
+    saveSuccess = true;
+    clearTimeout(successTimer);
+    successTimer = setTimeout(() => { saveSuccess = false; }, 3500);
+  };
+</script>
+
+<div class="max-w-4xl mx-auto space-y-8 animate-in fade-in duration-500">
+  <div class="flex items-end justify-between">
+    <div>
+      <h1 class="text-3xl font-bold font-outfit mb-2 tracking-tight">AI Settings</h1>
+      <p class="text-zinc-500 text-sm">Configure the intelligence engine for vulnerability explanations and fix suggestions.</p>
+    </div>
+    <!-- Active config badge -->
+    {#if savedConfig}
+      <div class="flex items-center gap-2 px-4 py-2 rounded-full bg-emerald-500/10 border border-emerald-500/20 text-emerald-400 text-xs font-bold">
+        <div class="w-1.5 h-1.5 rounded-full bg-emerald-400 shadow-[0_0_6px_rgba(52,211,153,0.6)]"></div>
+        Active: {savedConfig.model}
+      </div>
+    {/if}
+  </div>
+
+  <!-- Success Banner -->
+  {#if saveSuccess}
+    <div class="flex items-center gap-3 p-4 rounded-2xl bg-emerald-500/10 border border-emerald-500/20 text-emerald-400 text-sm font-medium animate-in slide-in-from-top-2 duration-300">
+      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2.5" stroke-linecap="round" stroke-linejoin="round"><path d="M20 6 9 17l-5-5"/></svg>
+      Configuration saved. Zenvra will now use <strong class="text-emerald-300 ml-1">{selectedModel}</strong>.
+    </div>
+  {/if}
+
+  <div class="grid grid-cols-1 lg:grid-cols-3 gap-8">
+    <div class="lg:col-span-2 space-y-6">
+      <div class="glass p-8 rounded-3xl border-zinc-800 space-y-8 relative overflow-hidden">
+        <!-- Provider Selection -->
+        <div class="space-y-4">
+          <label class="block">
+            <span class="text-[10px] font-black text-zinc-500 uppercase tracking-[0.2em] mb-4 block">1. Choose Intelligence Provider</span>
+            <div class="grid grid-cols-2 sm:grid-cols-4 gap-3">
+              {#each providers as p}
+                <button
+                  onclick={() => { provider = p.id; availableModels = []; selectedModel = ""; error = null; }}
+                  class="p-4 rounded-2xl border transition-all flex flex-col gap-3 items-center text-center {provider === p.id ? 'border-brand-primary bg-brand-primary/5 ring-1 ring-brand-primary/20' : 'border-zinc-800 bg-zinc-900/50 hover:bg-zinc-800 hover:border-zinc-700'}"
+                >
+                  <div class="w-8 h-8 rounded-xl {p.color} flex items-center justify-center font-bold text-white shadow-lg">{p.icon}</div>
+                  <span class="text-xs font-bold tracking-tight {provider === p.id ? 'text-white' : 'text-zinc-400'}">{p.name}</span>
+                </button>
+              {/each}
+            </div>
+          </label>
+        </div>
+
+        <!-- Connection Details -->
+        <div class="space-y-6 pt-4 border-t border-zinc-800/50">
+          <span class="text-[10px] font-black text-zinc-500 uppercase tracking-[0.2em] mb-2 block">2. Authentication & Endpoint</span>
+          <div class="space-y-4">
+            <label class="block">
+              <span class="text-xs font-bold text-zinc-400 mb-2 block">API Key</span>
+              <input
+                type="password"
+                bind:value={apiKey}
+                class="w-full glass bg-zinc-900/50 px-4 py-3 rounded-xl border-zinc-800 text-sm font-medium focus:ring-2 ring-brand-primary outline-none transition-all"
+                placeholder="Paste your {provider} secret key..."
+              />
+            </label>
+
+            <label class="block">
+              <span class="text-xs font-bold text-zinc-400 mb-2 block">Model Name <span class="text-zinc-600 font-normal">(type directly or fetch below)</span></span>
+              <input
+                bind:value={selectedModel}
+                class="w-full glass bg-zinc-900/50 px-4 py-3 rounded-xl border-zinc-800 text-sm font-medium font-mono focus:ring-2 ring-brand-primary outline-none transition-all"
+                placeholder="e.g. claude-sonnet-4-20250514, gpt-4o, gemini-2.0-flash..."
+              />
+            </label>
+            {#if provider === 'custom'}
+              <label class="block animate-in slide-in-from-top-2">
+                <span class="text-xs font-bold text-zinc-400 mb-2 block">Base Endpoint URL</span>
+                <input
+                  bind:value={endpoint}
+                  class="w-full glass bg-zinc-900/50 px-4 py-3 rounded-xl border-zinc-800 text-sm font-medium focus:ring-2 ring-brand-primary outline-none"
+                  placeholder="e.g., https://api.groq.com/openai/v1"
+                />
+              </label>
+            {/if}
+
+            <button
+              onclick={fetchModels}
+              disabled={isLoadingModels || !apiKey}
+              class="w-full py-4 rounded-xl border border-zinc-800 bg-white/5 hover:bg-white/10 text-xs font-bold uppercase tracking-widest transition-all disabled:opacity-50 flex items-center justify-center gap-2"
+            >
+              {#if isLoadingModels}
+                <svg class="animate-spin h-4 w-4" viewBox="0 0 24 24"><circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4" fill="none"></circle><path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path></svg>
+                Verifying Connection...
+              {:else}
+                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2.5" stroke-linecap="round" stroke-linejoin="round"><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"/><path d="M3 3v5h5"/><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"/><path d="M21 21v-5h-5"/></svg>
+                Fetch Available Models
+              {/if}
+            </button>
+          </div>
+        </div>
+
+        <!-- Model Selection -->
+        {#if availableModels.length > 0}
+          <div class="space-y-4 pt-6 border-t border-zinc-800/50 animate-in fade-in slide-in-from-bottom-4 duration-500">
+            <span class="text-[10px] font-black text-zinc-500 uppercase tracking-[0.2em] mb-2 block">3. Select Authorized Model</span>
+            <div class="grid grid-cols-1 gap-2">
+              {#each availableModels as m}
+                <button
+                  onclick={() => selectedModel = m}
+                  class="w-full p-4 rounded-xl border text-left transition-all flex items-center justify-between {selectedModel === m ? 'border-brand-primary bg-brand-primary/10' : 'border-zinc-800 bg-zinc-900/30 hover:border-zinc-700'}"
+                >
+                  <span class="text-xs font-bold {selectedModel === m ? 'text-white' : 'text-zinc-500'}">{m}</span>
+                  {#if selectedModel === m}
+                    <div class="w-1.5 h-1.5 rounded-full bg-brand-primary shadow-[0_0_8px_rgba(124,58,237,0.5)]"></div>
+                  {/if}
+                </button>
+              {/each}
+            </div>
+          </div>
+        {/if}
+
+        {#if error}
+          <div class="p-4 rounded-xl bg-red-500/10 border border-red-500/20 text-red-400 text-xs font-medium flex items-center gap-2 animate-in fade-in">
+            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2.5" stroke-linecap="round" stroke-linejoin="round"><circle cx="12" cy="12" r="10"/><line x1="12" x2="12" y1="8" y2="12"/><line x1="12" x2="12.01" y1="16" y2="16"/></svg>
+            {error}
+          </div>
+        {/if}
+
+        <div class="pt-4 flex justify-end">
+          <button
+            onclick={handleSave}
+            disabled={!selectedModel.trim() || !apiKey.trim()}
+            class="btn-primary px-12 py-4 disabled:opacity-30 disabled:cursor-not-allowed shadow-xl shadow-brand-primary/10 transition-all"
+          >
+            Save Configuration
+          </button>
+        </div>
+      </div>
+    </div>
+
+    <!-- Info Panel -->
+    <div class="space-y-6">
+      <div class="glass p-8 rounded-3xl border-zinc-800 bg-brand-primary/5 relative overflow-hidden">
+        <h4 class="text-sm font-bold mb-4 flex items-center gap-2">
+          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="text-brand-primary"><path d="M12 22s8-4 8-10V5l-8-3-8 3v7c0 6 8 10 8 10"/><path d="m9 12 2 2 4-4"/></svg>
+          Bring Your Own Key
+        </h4>
+        <p class="text-xs text-zinc-500 leading-relaxed">
+          Zenvra is provider-agnostic. Your keys are used ONLY to generate reports and are never stored on our servers. You get direct market rates with zero markups.
+        </p>
+      </div>
+
+      <div class="glass p-8 rounded-3xl border-zinc-800 space-y-4">
+        <h4 class="text-xs font-bold text-zinc-400 uppercase tracking-widest">Active Policies</h4>
+        <div class="space-y-3">
+          <div class="flex items-center gap-2 text-xs font-medium text-emerald-500">
+            <div class="w-1 h-1 rounded-full bg-emerald-500 shadow-[0_0_8px_rgba(16,185,129,0.5)]"></div>
+            Zero Data Training
+          </div>
+          <div class="flex items-center gap-2 text-xs font-medium text-zinc-400">
+            <div class="w-1.5 h-1.5 rounded-full bg-zinc-700 flex items-center justify-center">
+               <div class="w-0.5 h-0.5 rounded-full bg-zinc-500"></div>
+            </div>
+            Encrypted Local Storage
+          </div>
+          <div class="flex items-center gap-2 text-xs font-medium text-zinc-500">
+            <div class="w-1 h-1 rounded-full bg-zinc-500"></div>
+            Rate Limit (10/min)
+          </div>
+        </div>
+      </div>
+    </div>
+  </div>
+</div>
+
Evidence
Compliance ID 12 requires Svelte components be 200 lines or fewer; this new file ends at line 224 in
the diff.

AGENTS.md
apps/web/src/routes/settings/ai/+page.svelte[1-224]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`apps/web/src/routes/settings/ai/+page.svelte` exceeds the 200-line component limit.
## Issue Context
Move UI sections (provider picker, credentials form, model list, info panel) into smaller components under `apps/web/src/lib/components/`.
## Fix Focus Areas
- apps/web/src/routes/settings/ai/+page.svelte[1-224]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Qodo Logo

Comment on lines +12 to +31
let provider = $state(browser ? (localStorage.getItem('zenvra_ai_provider') ?? 'anthropic') : 'anthropic');
let model = $state(browser ? (localStorage.getItem('zenvra_ai_model') ?? '') : '');
let apiKey = $state(browser ? (localStorage.getItem('zenvra_ai_api_key') ?? '') : '');
let endpoint = $state(browser ? (localStorage.getItem('zenvra_ai_endpoint') ?? '') : '');

const isConfigured = $derived(!!model && !!apiKey);

/** Persist changes to both reactive state and localStorage atomically. */
function save(p: string, m: string, key: string, ep: string): void {
  provider = p;
  model = m;
  apiKey = key;
  endpoint = ep;

  if (browser) {
    localStorage.setItem('zenvra_ai_provider', p);
    localStorage.setItem('zenvra_ai_model', m);
    localStorage.setItem('zenvra_ai_api_key', key);
    if (ep) localStorage.setItem('zenvra_ai_endpoint', ep);
    else localStorage.removeItem('zenvra_ai_endpoint');

Action required

3. API key stored in localStorage 📘 Rule violation ⛨ Security

The web app persists zenvra_ai_api_key in localStorage, placing a secret in browser-accessible
storage. This violates the rule prohibiting secrets/API keys in client-side code paths.
Agent Prompt
## Issue description
The frontend stores the user's AI API key in `localStorage`, which is client-side accessible.

## Issue Context
Compliance requires secrets not be present in browser-delivered code or storage.

## Fix Focus Areas
- apps/web/src/lib/stores/aiConfig.svelte.ts[12-31]
- apps/web/src/routes/settings/ai/+page.svelte[6-55]
- apps/web/src/routes/scan/+page.svelte[34-46]

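One way to satisfy the rule, sketched under the assumption that losing the key on page reload is acceptable: keep the secret in memory only and persist just the non-secret fields. `AiConfigStore` and the injected `StorageLike` are illustrative names, not the project's actual exports; the storage backend is injected so the behavior is testable outside a browser.

```typescript
// Sketch only: the API key never touches persistent storage.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

class AiConfigStore {
  provider = "anthropic";
  model = "";
  endpoint = "";
  /** Held in memory only; never written to storage. */
  apiKey = "";

  constructor(private storage: StorageLike) {
    this.provider = storage.getItem("zenvra_ai_provider") ?? "anthropic";
    this.model = storage.getItem("zenvra_ai_model") ?? "";
    this.endpoint = storage.getItem("zenvra_ai_endpoint") ?? "";
  }

  save(provider: string, model: string, apiKey: string, endpoint: string): void {
    this.provider = provider;
    this.model = model;
    this.endpoint = endpoint;
    this.apiKey = apiKey; // memory only; lost on reload by design

    this.storage.setItem("zenvra_ai_provider", provider);
    this.storage.setItem("zenvra_ai_model", model);
    this.storage.setItem("zenvra_ai_endpoint", endpoint);
    // Intentionally no setItem for the API key.
  }
}
```

A stricter fix would move the key server-side entirely (per-session or per-request), but the in-memory store already removes the secret from browser-accessible storage.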

Comment on lines +34 to +55
const response = await fetch("http://localhost:8080/api/v1/scan", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    code,
    language: "python",
    engines: ["sast", "secrets"],
    ai_config: hasAiConfig ? {
      provider: aiConfig.provider,
      api_key: aiConfig.apiKey,
      model: aiConfig.model,
      endpoint: aiConfig.endpoint || undefined,
    } : undefined
  })
});

if (!response.ok) throw new Error("Failed to start scan");
const { scan_id } = await response.json();

// Step 2: Subscribe to the SSE stream
const eventSource = new EventSource(`http://localhost:8080/api/v1/scan/${scan_id}/events`);


Action required

4. scan/+page.svelte uses raw fetch 📘 Rule violation ⚙ Maintainability

The scan page introduces direct fetch(...) and EventSource(...) calls instead of routing
requests through the typed API client in apps/web/src/lib/api.ts. This breaks the frontend
networking standard and bypasses centralized typing/error handling.
Agent Prompt
## Issue description
The scan page uses ad-hoc `fetch` and hardcoded URLs instead of the typed API client.

## Issue Context
Typed API client should centralize base URL handling, typing, and consistent errors.

## Fix Focus Areas
- apps/web/src/routes/scan/+page.svelte[34-55]
- apps/web/src/lib/api.ts[38-96]

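A hedged sketch of the kind of typed client the rule points at; `buildScanRequest`, `startScan`, and `API_BASE` are illustrative names rather than the actual exports of `apps/web/src/lib/api.ts`. Building the request in a pure function keeps the base URL in one place and makes the body shape unit-testable without a network:

```typescript
// Illustrative types; the real client would share these with the API layer.
interface AiConfig { provider: string; apiKey: string; model: string; endpoint?: string }
interface ScanRequest { url: string; init: { method: string; headers: Record<string, string>; body: string } }

const API_BASE = "http://localhost:8080/api/v1"; // would come from env config, not a hardcoded literal

// Pure request builder: centralizes URL and payload shape.
function buildScanRequest(code: string, language: string, ai?: AiConfig): ScanRequest {
  return {
    url: `${API_BASE}/scan`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        code,
        language,
        engines: ["sast", "secrets"],
        ai_config: ai
          ? { provider: ai.provider, api_key: ai.apiKey, model: ai.model, endpoint: ai.endpoint || undefined }
          : undefined,
      }),
    },
  };
}

// Thin typed wrapper with consistent error handling.
async function startScan(code: string, language: string, ai?: AiConfig): Promise<string> {
  const { url, init } = buildScanRequest(code, language, ai);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Scan request failed: ${res.status}`);
  const { scan_id } = (await res.json()) as { scan_id: string };
  return scan_id;
}
```

The page would then call `startScan(...)` and a similar wrapper for the SSE subscription, instead of issuing raw `fetch`/`EventSource` calls with hardcoded URLs.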

Comment on lines +44 to +55
const handleSave = () => {
  if (!selectedModel.trim()) { error = "Model name is required."; return; }
  if (!apiKey.trim()) { error = "API key is required."; return; }
  error = null;

  // Persist via the store — writes localStorage atomically
  aiConfig.save(provider, selectedModel.trim(), apiKey.trim(), endpoint.trim());

  saveSuccess = true;
  clearTimeout(successTimer);
  successTimer = setTimeout(() => { saveSuccess = false; }, 3500);
};

Action required

5. Forms lack zod validation 📘 Rule violation ⛨ Security

The new AI settings form performs only manual string checks and does not use zod schemas for input
validation. This violates the requirement for schema-based validation at form/API trust boundaries.
Agent Prompt
## Issue description
AI settings inputs are not validated with Zod; only ad-hoc checks are used.

## Issue Context
Zod schemas should validate provider/model/key/endpoint (including URL validation for custom endpoints) before saving.

## Fix Focus Areas
- apps/web/src/routes/settings/ai/+page.svelte[44-55]
- apps/web/src/lib/stores/aiConfig.svelte.ts[19-33]

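A sketch of the validation boundary the rule asks for. The project standard is zod, where the schema would look roughly like the commented form below; a dependency-free stand-in with the same checks follows so the shape is concrete (the provider list and field names are assumptions taken from the review context, not confirmed project values):

```typescript
// With zod, roughly:
//   const aiConfigSchema = z.object({
//     provider: z.enum(["anthropic", "openai", "custom"]),
//     model: z.string().min(1),
//     apiKey: z.string().min(1),
//     endpoint: z.string().url().optional().or(z.literal("")),
//   });
// Dependency-free stand-in performing the equivalent checks:
interface AiConfigInput { provider: string; model: string; apiKey: string; endpoint: string }

function validateAiConfig(input: AiConfigInput): { ok: true } | { ok: false; error: string } {
  if (!["anthropic", "openai", "custom"].includes(input.provider)) {
    return { ok: false, error: "Unknown provider." };
  }
  if (!input.model.trim()) return { ok: false, error: "Model name is required." };
  if (!input.apiKey.trim()) return { ok: false, error: "API key is required." };
  if (input.endpoint) {
    try {
      new URL(input.endpoint); // rejects non-URL custom endpoints
    } catch {
      return { ok: false, error: "Endpoint must be a valid URL." };
    }
  }
  return { ok: true };
}
```

`handleSave` would then call the schema (or this validator) and surface its error message, replacing the ad-hoc `trim()` checks.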

Comment on lines +176 to +199
while let Some(event) = scan_rx.recv().await {
    // Cache event for late subscribers
    all_events.push(event.clone());

    // Broadcast to any connected SSE subscribers
    let _ = tx.send(event.clone());

    // Process specific events for DB persistence
    match event {
        ScanEvent::Finding(mut finding) => {
            let sev_str = finding.severity.to_string().to_lowercase();
            *severity_counts.entry(sev_str).or_insert(0) += 1;

            // Enrich from local DB
            if let Some(cve_id) = &finding.cve_id {
                if let Ok(Some(row)) = sqlx::query("SELECT title, description FROM vulnerabilities WHERE cve_id = $1")
                    .bind(cve_id)
                    .fetch_optional(&state_task.db)
                    .await
                {
                    use sqlx::Row;
                    finding.title = row.get("title");
                }
            }

Action required

13. Broadcasts pre-enrichment findings 🐞 Bug ≡ Correctness

The server broadcasts ScanEvent::Finding to SSE subscribers before enriching the finding from the
vulnerabilities DB, so clients and cached replays never see the enriched title/description.
Agent Prompt
### Issue description
SSE consumers receive stale/un-enriched findings because the broadcast happens before DB enrichment.

### Issue Context
The server enriches the finding title (and selects description) from `vulnerabilities`, but doesn’t broadcast the updated value.

### Fix Focus Areas
- crates/server/src/main.rs[176-222]

### Implementation notes
- Move broadcasting to after enrichment for `ScanEvent::Finding`, e.g.:
  - On `Finding(f)`: enrich `f`, then send `ScanEvent::Finding(f.clone())`.
  - For non-finding events, keep current broadcast behavior.
- If you want cached replay to reflect enrichment, cache the enriched event, not the original.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/web/src/routes/scan/+page.svelte (1)

1-296: 🛠️ Refactor suggestion | 🟠 Major

Component exceeds 200 line limit.

Per coding guidelines, component files should be max 200 lines. This file is ~309 lines. Consider extracting the finding card into a separate FindingCard.svelte component.

As per coding guidelines: "Component files max 200 lines — split into smaller components if larger".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/scan/`+page.svelte around lines 1 - 296, This Svelte page
is over the 200-line limit; extract the repeated findings rendering into a new
FindingCard.svelte component: move the entire markup inside the {#each findings
as finding ...} block (the container showing severity badge, title, description,
technical analysis, vulnerable_code and fixed_code sections) into
FindingCard.svelte as a component that accepts a single prop (e.g. export let
finding), and emit the same classes and structure; in the page replace the
{#each findings as finding ...} block with {#each findings as
finding}<FindingCard {finding} />{/each} and import FindingCard at top; also
move or export the getSeverityColor helper (or accept a computed class prop) so
the badge styling still works (either import getSeverityColor from a small util
or compute class inside FindingCard) and ensure any references to finding.id,
finding.line_start, finding.cve_id, etc., remain unchanged so behavior is
preserved.
🟡 Minor comments (15)
ISSUE_DRAFT.md-9-9 (1)

9-9: ⚠️ Potential issue | 🟡 Minor

Fix heading level jump at Line 9

The document starts body headings at ###, which triggers MD001 and hurts structure consistency. Use ## here (and keep subsequent headings at the same level unless nesting intentionally).

Suggested diff
-### Which area does this relate to?
+## Which area does this relate to?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ISSUE_DRAFT.md` at line 9, The heading "Which area does this relate to?" uses
level ### causing a markdown heading level jump and MD001; change that heading
to level ## (i.e., replace "### Which area does this relate to?" with "## Which
area does this relate to?") and ensure subsequent section headings remain
consistent (keep other headings at the same level unless intentionally nested)
so the document structure passes MD001 checks.
ISSUE_DRAFT.md-1-7 (1)

1-7: ⚠️ Potential issue | 🟡 Minor

Clarify template activation intent

The filename ISSUE_DRAFT.md indicates this is a draft. If this is meant to become an active GitHub issue template, it should be renamed (remove "DRAFT") and moved to .github/ISSUE_TEMPLATE/. If it remains a draft, the current location is appropriate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ISSUE_DRAFT.md` around lines 1 - 7, The ISSUE_DRAFT.md file name signals a
draft; decide whether this is an active issue template or a draft and act
accordingly: if you intend it to be an active GitHub issue template, rename the
file to remove "DRAFT" (e.g., ISSUE.md or a descriptive template name) and move
it into .github/ISSUE_TEMPLATE/ so GitHub recognizes it as an issue template; if
it should remain a draft, keep the current filename/location and add a short
note in the file (or in its frontmatter) indicating it's a draft to avoid
confusion.
apps/web/src/routes/settings/ai/+page.svelte-15-15 (1)

15-15: ⚠️ Potential issue | 🟡 Minor

Timer not cleaned up on component unmount.

successTimer is set but never cleared when the component is destroyed, which could cause a memory leak or unexpected state updates.

🔧 Proposed fix using $effect for cleanup
 <script lang="ts">
   import { fetchAiModels } from "$lib/api";
   import { aiConfig } from "$lib/stores/aiConfig.svelte";
+  import { onDestroy } from "svelte";

   // ... other state ...

   let saveSuccess     = $state(false);
   let successTimer: ReturnType<typeof setTimeout>;

+  onDestroy(() => {
+    clearTimeout(successTimer);
+  });

Also applies to: 53-54

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/settings/ai/`+page.svelte at line 15, The declared
successTimer (let successTimer: ReturnType<typeof setTimeout>) is never cleared
on component unmount or before creating a new timer, which can leak or trigger
state updates after destroy; update the component to clear the timer by calling
clearTimeout(successTimer) in an onDestroy handler and also clear any existing
successTimer immediately before assigning a new setTimeout (e.g., in the success
path where successTimer is set) so you don't leave multiple active timers;
ensure the declaration of successTimer remains and use the Svelte onDestroy
lifecycle (or a $: reactive cleanup) to perform the clearTimeout.
apps/web/src/routes/settings/ai/+page.svelte-37-38 (1)

37-38: ⚠️ Potential issue | 🟡 Minor

Replace any with unknown for error handling.

As per coding guidelines: "TypeScript strict mode is ON — no any, no @ts-ignore".

🔧 Proposed fix
-    } catch (err: any) {
-      error = err.message || "Failed to fetch models. Check your API key and connection.";
+    } catch (err: unknown) {
+      error = err instanceof Error ? err.message : "Failed to fetch models. Check your API key and connection.";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/settings/ai/`+page.svelte around lines 37 - 38, Replace
the catch parameter type from any to unknown in the catch block and safely
extract the message: change "catch (err: any)" to "catch (err: unknown)" and
inside the block normalize the error (e.g., use "err instanceof Error ?
err.message : String(err)") before assigning to the local error variable; update
the catch in the +page.svelte scope where "error = err.message || 'Failed to
fetch models...'" is set so you never access err.message on an unknown type.
apps/web/src/routes/settings/ai/+page.svelte-156-166 (1)

156-166: ⚠️ Potential issue | 🟡 Minor

Add key to available models {#each} block.

Use the model name m as the key since model IDs are unique.

🔧 Proposed fix
-              {#each availableModels as m}
+              {#each availableModels as m (m)}
                 <button
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/settings/ai/`+page.svelte around lines 156 - 166, The
{#each availableModels as m} block is missing a key which can cause
rendering/rehydration bugs; update the Svelte each block to use the unique model
name as the key (use m) so it becomes {#each availableModels as m (m)} and keep
the rest of the rendering logic (button, selectedModel comparison, classes, and
inner conditional) unchanged.
apps/web/src/routes/history/+page.svelte-14-16 (1)

14-16: ⚠️ Potential issue | 🟡 Minor

Replace any with a specific error type.

Using any violates TypeScript strict mode. Use unknown and narrow the type safely.

As per coding guidelines: "TypeScript strict mode is ON — no any, no @ts-ignore".

🔧 Proposed fix
-    } catch (err: any) {
-      console.error("Failed to load history", err);
-      error = err.message || "An unexpected error occurred while loading your history.";
+    } catch (err: unknown) {
+      console.error("Failed to load history", err);
+      error = err instanceof Error ? err.message : "An unexpected error occurred while loading your history.";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/history/`+page.svelte around lines 14 - 16, The catch
clause currently types the caught value as `any` (catch (err: any)); change it
to `unknown` and then narrow it before using it: update the catch to `catch
(err: unknown)` and set the `error` variable to `err instanceof Error ?
err.message : String(err) || "An unexpected error occurred while loading your
history."`, and update the console.error call to log the narrowed value (or the
original err) so TypeScript strict mode is satisfied and runtime-safe checks are
performed in this +page.svelte error handling block.
apps/web/src/routes/history/+page.svelte-117-124 (1)

117-124: ⚠️ Potential issue | 🟡 Minor

Add key to severity counts {#each} block.

Use severity as the key since it's unique within each scan's severity counts.

🔧 Proposed fix
-                {#each Object.entries(scan.severity_counts) as [severity, count]}
+                {#each Object.entries(scan.severity_counts) as [severity, count] (severity)}
                  {#if count > 0}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/history/`+page.svelte around lines 117 - 124, The each
block iterating over Object.entries(scan.severity_counts) lacks a key which can
cause diffing/rendering issues; update the Svelte {#each} to provide a key using
the severity value (e.g., {#each Object.entries(scan.severity_counts) as
[severity, count] (severity)}) so each item in the loop is keyed, referencing
the scan.severity_counts iteration and getSeverityColor(severity) usage to
locate the block.
apps/web/src/routes/settings/ai/+page.svelte-89-97 (1)

89-97: ⚠️ Potential issue | 🟡 Minor

Add key to providers {#each} block.

Use p.id as the unique key for provider buttons.

🔧 Proposed fix
-              {#each providers as p}
+              {#each providers as p (p.id)}
                 <button
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/settings/ai/`+page.svelte around lines 89 - 97, The
{#each} block rendering provider buttons lacks a unique key which can cause
incorrect DOM updates; update the Svelte each block for providers to use p.id as
the key (i.e., change the iterator from "{#each providers as p}" to include the
key "(p.id)"), leaving the button contents and click handler (setting provider,
availableModels, selectedModel, error) unchanged so Svelte can track list items
reliably.
apps/web/src/routes/settings/cve/+page.svelte-17-19 (1)

17-19: ⚠️ Potential issue | 🟡 Minor

Replace any with unknown for error handling.

As per coding guidelines: "TypeScript strict mode is ON — no any, no @ts-ignore".

🔧 Proposed fix
-    } catch (error: any) {
-      lastSyncStatus = "error";
-      statusMessage = error.message || "Failed to synchronize vulnerability databases.";
+    } catch (error: unknown) {
+      lastSyncStatus = "error";
+      statusMessage = error instanceof Error ? error.message : "Failed to synchronize vulnerability databases.";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/settings/cve/`+page.svelte around lines 17 - 19, Change
the catch parameter from any to unknown in the try/catch around the sync logic
(the catch currently using "error: any"), then narrow the unknown before reading
.message: check if error is an instance of Error and set statusMessage =
error.message, otherwise set statusMessage = String(error) or a fallback "Failed
to synchronize vulnerability databases."; leave lastSyncStatus = "error" as-is.
This removes use of any while preserving safe error-message extraction for the
catch block that assigns lastSyncStatus and statusMessage.
crates/scanner/src/engine.rs-36-43 (1)

36-43: ⚠️ Potential issue | 🟡 Minor

Guard against division by zero when engines list is empty.

If config.engines is empty, total_engines will be 0, causing a division by zero at line 39 when calculating the progress percentage. While 0 / 0 in floating-point yields NaN (not a panic), it produces an invalid progress value.

🛡️ Proposed fix
 pub async fn run_stream(
     config: &ScanConfig,
     tx: UnboundedSender<ScanEvent>,
 ) -> anyhow::Result<Vec<RawFinding>> {
     let mut all_findings = Vec::new();
     let total_engines = config.engines.len();
+
+    if total_engines == 0 {
+        let _ = tx.send(ScanEvent::Progress {
+            percentage: 100,
+            message: "No engines configured.".to_string(),
+        });
+        return Ok(all_findings);
+    }

     for (i, engine) in config.engines.iter().enumerate() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/engine.rs` around lines 36 - 43, The code computes
progress using total_engines = config.engines.len() and divides by it, which
yields invalid progress when config.engines is empty; add a guard before the
loop to handle total_engines == 0 (or check inside the loop) and set a sensible
progress value (e.g., 100 or 0) and message so you never divide by zero: update
the logic around total_engines / the progress calculation used when building the
ScanEvent::Progress sent via tx.send so that when config.engines.is_empty() you
send a deterministic progress and message (e.g., percentage: 100, message: "No
engines to run") instead of performing the division.
apps/web/src/routes/history/+page.svelte-70-72 (1)

70-72: ⚠️ Potential issue | 🟡 Minor

Add key to {#each} block for skeleton items.

The linter flags missing keys. For skeleton placeholders, use the index as a key.

🔧 Proposed fix
-      {#each Array(5) as _}
+      {#each Array(5) as _, i (i)}
         <div class="h-24 glass rounded-2xl animate-pulse border-zinc-800/50"></div>
       {/each}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/history/`+page.svelte around lines 70 - 72, The {#each}
block rendering skeleton placeholders is missing a key, causing linter warnings;
update the block to capture the index (e.g., change "as _" to "as _, i") and
supply the index as the key expression (use the index in the each-key clause) so
each placeholder has a stable key for reactivity and linting.
extensions/vscode/src/extension.ts-102-142 (1)

102-142: ⚠️ Potential issue | 🟡 Minor

SSE parsing may fail on multi-event chunks or split chunks.

The SSE parsing splits on \n and looks for lines starting with data: . However:

  1. A single chunk may contain multiple complete SSE events
  2. An SSE event may be split across chunks (partial data: line)

The current implementation handles case 1 correctly (iterates all lines), but case 2 could cause JSON parse failures for partial data. Consider buffering incomplete lines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` around lines 102 - 142, The SSE reader
can receive partial events across chunks, so change the parsing loop in the
reader.read handling to maintain a persistent buffer string (e.g., let sseBuffer
= '') that you append decoder.decode(value, { stream: true }) to on each read,
then split the buffer by the SSE event delimiter (double newline '\n\n' or
'\r\n\r\n') to extract only complete events for JSON.parse and leave any
trailing partial event in sseBuffer for the next iteration; update the current
switch handling (cases 'progress','finding','complete','error') to consume only
fully parsed events and keep using findings, updateDiagnostics(document,
findings), and sidebarProvider.postMessage as before.
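The buffering described above can be sketched as a pure function, assuming the server separates events with a blank line and sends payloads on `data: ` lines (`consumeSseChunk` is an illustrative name, not an existing helper). Each decoded chunk is appended to the carry-over buffer; only complete events are emitted, and the trailing partial event is kept for the next read:

```typescript
// Returns the complete event payloads found so far, plus the leftover
// buffer to prepend to the next chunk.
function consumeSseChunk(buffer: string, chunk: string): { events: string[]; buffer: string } {
  let data = buffer + chunk;
  const events: string[] = [];
  let sep: number;
  // An SSE event ends at a blank line (\n\n); anything after the last
  // separator is incomplete and must be carried over.
  while ((sep = data.indexOf("\n\n")) !== -1) {
    const raw = data.slice(0, sep);
    data = data.slice(sep + 2);
    for (const line of raw.split("\n")) {
      if (line.startsWith("data: ")) events.push(line.slice(6));
    }
  }
  return { events, buffer: data };
}
```

In the extension's read loop, the `switch` on `'progress' | 'finding' | 'complete' | 'error'` would then run only on `JSON.parse` of these complete payloads, so a `data:` line split across chunks no longer causes parse failures.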
extensions/vscode/src/extension.ts-100-100 (1)

100-100: ⚠️ Potential issue | 🟡 Minor

Use const instead of let for the findings array.

The array reference is never reassigned; only its contents are modified via push. The linter correctly flags this.

-    let findings: Finding[] = [];
+    const findings: Finding[] = [];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` at line 100, Change the variable
declaration for the findings array from a mutable binding to an immutable
binding: replace the "let findings: Finding[] = [];" declaration with a const
binding ("const findings: Finding[] = [];") in extension.ts so the reference
cannot be reassigned while still allowing pushes to the array; update any code
that might reassign findings (if present) to instead mutate the array or use a
new variable name.
apps/web/src/routes/scan/+page.svelte-2-3 (1)

2-3: ⚠️ Potential issue | 🟡 Minor

Remove unused imports.

Both onMount and scan are imported but never used, as flagged by the linter. The scan functionality now uses direct fetch calls.

 <script lang="ts">
-  import { onMount } from "svelte";
-  import { scan, type Finding } from "$lib/api";
+  import { type Finding } from "$lib/api";
   import { aiConfig } from "$lib/stores/aiConfig.svelte";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/scan/`+page.svelte around lines 2 - 3, Remove the dead
imports to satisfy the linter: delete the unused imports onMount and scan from
the import line (currently importing onMount from "svelte" and scan from
"$lib/api"), leaving only the used symbol(s) such as type Finding if still
needed; update the import statement in +page.svelte to import only the actually
used exports (e.g., "import type { Finding } from '$lib/api'") or remove the
entire import line if nothing from those modules is referenced.
crates/server/src/main.rs-248-256 (1)

248-256: ⚠️ Potential issue | 🟡 Minor

Race condition: late subscribers may miss events if they connect between scan completion and cache population.

There's a brief window between when state_task.scans.remove(&scan_id) is called and state_task.results.insert(scan_id, all_events) where a subscriber connecting via subscribe_to_scan would get a 404 "Scan not found" error because the scan_id exists in neither map.

Suggested fix: insert to results before removing from scans
-        // Move results to cache so late SSE subscribers can replay them
-        state_task.scans.remove(&scan_id);
-        state_task.results.insert(scan_id, all_events);
+        // Move results to cache so late SSE subscribers can replay them
+        // Insert to results first to avoid race condition window
+        state_task.results.insert(scan_id, all_events);
+        state_task.scans.remove(&scan_id);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 248 - 256, The race occurs because
state_task.scans.remove(&scan_id) runs before state_task.results.insert(scan_id,
all_events), so a subscriber calling subscribe_to_scan can see neither map and
get a 404; to fix, reverse the operations: insert the completed events into
state_task.results with state_task.results.insert(scan_id, all_events) first,
then remove from state_task.scans with state_task.scans.remove(&scan_id),
ensuring subscribe_to_scan will find results for late subscribers.
🧹 Nitpick comments (23)
migrations/202604040001_create_vulnerabilities_table.sql (2)

19-20: Redundant index on cve_id.

The UNIQUE constraint on cve_id (line 7) automatically creates an index. The explicit idx_vulnerabilities_cve_id index is redundant and wastes storage.

♻️ Remove redundant index
 -- Index for fast lookup by CVE ID
-CREATE INDEX IF NOT EXISTS idx_vulnerabilities_cve_id ON vulnerabilities(cve_id);
+-- Note: cve_id already has an implicit index from the UNIQUE constraint
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@migrations/202604040001_create_vulnerabilities_table.sql` around lines 19 -
20, The CREATE INDEX statement for idx_vulnerabilities_cve_id is redundant
because the UNIQUE constraint on vulnerabilities.cve_id already creates an
index; remove the line "CREATE INDEX IF NOT EXISTS idx_vulnerabilities_cve_id ON
vulnerabilities(cve_id);" (or drop the redundant index) to avoid duplicate
indexes and wasted storage while keeping the UNIQUE constraint on cve_id.

12-13: Default timestamps may mask missing source data.

published_at and last_modified_at default to CURRENT_TIMESTAMP, but these fields should reflect dates from the NVD/OSV data source. Consider making them nullable without defaults, or using a sentinel value, so missing source data is explicit rather than silently defaulting to insert time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@migrations/202604040001_create_vulnerabilities_table.sql` around lines 12 -
13, The published_at and last_modified_at columns currently default to
CURRENT_TIMESTAMP which hides missing source dates; update the migration so
published_at and last_modified_at do not use DEFAULT CURRENT_TIMESTAMP and are
nullable (or use a clear sentinel) so that absence of NVD/OSV source dates is
explicit; locate the column definitions for published_at and last_modified_at in
the migration and remove the DEFAULT CURRENT_TIMESTAMP clauses and ensure the
columns allow NULL (or set a documented sentinel value) and adjust any insertion
code that relied on the defaults to explicitly supply timestamps when available.
apps/web/package.json (1)

16-17: Consider removing unused @sveltejs/adapter-auto.

Since svelte.config.js now imports @sveltejs/adapter-node, the @sveltejs/adapter-auto dependency on line 16 is no longer used and could be removed to reduce dependencies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/package.json` around lines 16 - 17, Remove the unused dependency
entry "@sveltejs/adapter-auto" from apps/web's package.json (it is redundant
because svelte.config.js imports "@sveltejs/adapter-node"); delete the
"@sveltejs/adapter-auto" line in package.json and then run your package
manager's install/lockfile update (npm install / npm prune or yarn install) to
update lockfiles and ensure the dependency is removed from the lockfile.
crates/cli/src/main.rs (2)

127-127: Move imports to the top of the file.

The serde import at line 127 should be grouped with other imports at the top of the file for better code organization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/cli/src/main.rs` at line 127, Move the stray import `use
serde::{Deserialize, Serialize};` from line 127 into the file's top import block
where other `use ...` statements are declared; cut the line and place it with
the other imports (keeping project import ordering/alphabetization conventions
and avoiding duplicate serde imports) so `Deserialize` and `Serialize` are
imported alongside the rest of the module-level `use` declarations.

157-160: HOME environment variable is not cross-platform.

HOME is Unix-specific and won't exist on Windows, causing fallback to "." which writes config to the current directory. Consider using the dirs crate for cross-platform config paths.

♻️ Example using dirs crate
     fn get_path() -> std::path::PathBuf {
-        let home = std::env::var("HOME").unwrap_or_else(|_| ".".to_string());
-        std::path::PathBuf::from(home).join(".config/zenvra/config.json")
+        dirs::config_dir()
+            .unwrap_or_else(|| std::path::PathBuf::from("."))
+            .join("zenvra/config.json")
     }

This would require adding dirs = "5" to Cargo.toml.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/cli/src/main.rs` around lines 157 - 160, get_path currently reads the
Unix-only HOME env var and falls back to "."; change get_path to use a
cross-platform config directory (e.g., dirs::config_dir()) to locate the user
config folder, fall back to current_dir() only if config_dir() is None, and then
join the application subpath (e.g., "zenvra/config.json") so the file is placed
under the OS-specific config location; add the dirs = "5" dependency to
Cargo.toml and update the PathBuf construction in get_path accordingly.
crates/scanner/src/ai/anthropic.rs (1)

64-65: Add doc comment explaining the fallback behavior (if retained).

If the fallback is intentional (perhaps Anthropic's /v1/models endpoint is unreliable), the doc comment should warn callers that the returned list may be a static fallback rather than live API data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/ai/anthropic.rs` around lines 64 - 65, Update the doc
comment for the async function list_models(api_key: &str, endpoint:
Option<&str>) to document the fallback behavior: state whether the function
queries Anthropic’s /v1/models endpoint and, if it uses or may return a static
fallback when the API is unavailable, explicitly warn callers that results may
not be live API data; include when the fallback is used and any implications for
caching or staleness so callers can handle this behavior appropriately.
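If the fallback is retained, the doc comment can spell the behavior out explicitly. A minimal sketch of the idea (the function name, signature, and model ID below are illustrative, not the crate's actual API):

```rust
/// Lists available Anthropic model IDs.
///
/// Attempts to query the `/v1/models` endpoint; if the request fails,
/// a static fallback list is returned. The result may therefore not
/// reflect live API data, and callers that cache it should account
/// for possible staleness.
fn list_models_or_fallback(live: Option<Vec<String>>) -> Vec<String> {
    // `live` stands in for a successful API response in this sketch.
    live.unwrap_or_else(|| vec!["claude-3-5-sonnet-latest".to_string()])
}
```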
apps/web/Dockerfile (2)

21-23: Consider copying only production dependencies to reduce image size.

Copying the entire node_modules includes dev dependencies. For a smaller production image, consider running pnpm install --prod in a separate step or using pnpm deploy to create a production-only bundle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/Dockerfile` around lines 21 - 23, The Dockerfile currently copies
the entire node_modules directory from the builder stage (COPY --from=builder
/app/node_modules ./node_modules), which pulls dev dependencies into the final
image; change the build pipeline to install or produce production-only
dependencies and copy those instead—e.g., in the builder stage run a production
install (pnpm install --prod or pnpm deploy to create a production bundle) and
then update the final stage to COPY the production node_modules or the deploy
output (replace COPY --from=builder /app/node_modules ./node_modules with
copying only the production install output or build artifact) so the final image
excludes devDependencies.

4-4: Pin pnpm version to match package.json.

The package.json specifies pnpm@10.24.0 but npm install -g pnpm installs the latest version, which could cause inconsistent builds.

♻️ Proposed fix
-RUN npm install -g pnpm
+RUN npm install -g pnpm@10.24.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/Dockerfile` at line 4, The Dockerfile currently runs "RUN npm
install -g pnpm" which installs the latest pnpm and can diverge from
package.json; pin pnpm to the package.json version by replacing that line with a
pinned install (e.g. use an ARG like "ARG PNPM_VERSION=10.24.0" and "RUN npm
install -g pnpm@$PNPM_VERSION") or hardcode "RUN npm install -g pnpm@10.24.0" so
the container uses the same pnpm version as package.json.
crates/scanner/src/ai/openai.rs (1)

73-101: LGTM — consider extracting duplicated endpoint normalization.

The list_models function correctly handles both /v1-suffixed and bare endpoints. However, the URL normalization logic (trim trailing slash, check for /v1 suffix) is duplicated between list_models (lines 76-82) and OpenAiProvider::call (lines 115-120).

♻️ Optional: Extract shared helper
fn normalize_endpoint(endpoint: &str, path: &str) -> String {
    let ep = endpoint.trim_end_matches('/');
    if ep.ends_with("/v1") {
        format!("{}/{}", ep, path)
    } else {
        format!("{}/v1/{}", ep, path)
    }
}

Then use:

-    let url = if ep.ends_with("/v1") {
-        format!("{}/models", ep)
-    } else {
-        format!("{}/v1/models", ep)
-    };
+    let url = normalize_endpoint(ep, "models");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/ai/openai.rs` around lines 73 - 101, The endpoint
normalization logic is duplicated between list_models and OpenAiProvider::call;
extract a small helper (e.g., normalize_endpoint or build_endpoint) that accepts
the raw endpoint &str and a path segment (like "models" or "chat/completions")
and returns the correctly normalized URL by trimming trailing slashes and
ensuring a single /v1 is present; then replace the inline logic in list_models
and OpenAiProvider::call to call this helper to build the final URL, keeping
existing error handling and header usage unchanged.
apps/web/src/routes/settings/ai/+page.svelte (1)

190-221: Component exceeds 200-line limit.

At 224 lines, this component slightly exceeds the guideline limit. The info panel (lines 190-221) could be extracted into a separate component.

As per coding guidelines: "Component files max 200 lines — split into smaller components if larger".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/settings/ai/`+page.svelte around lines 190 - 221, The
component is over the 200-line limit; extract the info panel and the active
policies panel into their own Svelte components (e.g., BYOKInfo.svelte for the
"Bring Your Own Key" block and ActivePolicies.svelte for the "Active Policies"
block), move the corresponding markup into those files, export them as default
Svelte components, then import and use <BYOKInfo/> and <ActivePolicies/> inside
+page.svelte to replace the original divs; there are no props required so just
replace the markup with the new component tags and ensure any local classes or
assets referenced remain available.
apps/web/src/routes/history/+page.svelte (1)

132-134: Navigation button has no action.

The arrow button suggests navigation to scan details but lacks an onclick handler or href. If this is intentional for a future feature, consider adding a TODO comment or disabling the button.

Do you want me to help implement navigation to a scan details page, or should this be tracked as a separate issue?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/history/`+page.svelte around lines 132 - 134, The
navigation arrow button in +page.svelte has no action; add a navigation handler
(or make it an anchor) so the button goes to the scan details route (e.g.,
`/history/{scan.id}`) when clicked: either wrap the <button> in an <a
href="..."> pointing to the scan detail path or add an on:click that calls
SvelteKit's goto from $app/navigation (or dispatches a route change) using the
scan's unique id (e.g., scan.id or item.id) referenced in the surrounding loop;
if navigation is intentionally deferred, disable the button or add a TODO
comment explaining it's pending.
crates/server/src/cve_sync/mod.rs (1)

90-124: Consider batching database inserts for better performance.

Executing individual INSERT statements in a loop for potentially thousands of CVEs is inefficient. Consider using a batch insert or wrapping in a transaction.

♻️ Suggested batch approach
// Collect all values first
let mut tx = pool.begin().await?;

for item in nvd_data.vulnerabilities {
    // ... same upsert logic but using &mut tx instead of pool ...
    sqlx::query(/* ... */)
        .execute(&mut *tx)
        .await?;
}

tx.commit().await?;

Or use sqlx's QueryBuilder for true batch inserts with UNNEST arrays.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/cve_sync/mod.rs` around lines 90 - 124, The current loop
over nvd_data.vulnerabilities performs one sqlx::query INSERT per CVE which is
slow; change to collect rows then perform batched upserts by starting a
transaction (let mut tx = pool.begin().await?) and executing the queries against
&mut tx, or better use sqlx::QueryBuilder to build a single bulk INSERT ... ON
CONFLICT upsert using arrays/UNNEST; update the code that references
nvd_data.vulnerabilities and the sqlx::query call to use either tx.execute(...)
and commit once (tx.commit().await?) or replace with QueryBuilder to bind all
(cve_id, title, description, severity, data_source) tuples and execute in one
statement for improved performance.
apps/web/src/routes/settings/cve/+page.svelte (1)

46-54: Hardcoded sync statistics.

The "Synced CVEs" count (100+) and progress bar are static placeholders. Consider fetching actual statistics from the backend for a more informative UI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/settings/cve/`+page.svelte around lines 46 - 54, The UI
currently shows hardcoded sync stats ("Synced CVEs" 100+ and progress bar
w-2/5); replace these placeholders by fetching real stats from the backend
(e.g., implement a +page.server.ts load() or client-side onMount() that calls
your CVE stats API endpoint) and bind the returned values to the displayed count
span and to a reactive width/percentage used instead of the static "w-2/5"
class; update the JSX in +page.svelte to render the dynamic count and compute
the progress bar width/style from the fetched percent so the UI reflects real
backend data.
crates/server/src/main.rs (3)

185-199: Finding enrichment overwrites title but doesn't use fetched description.

The query selects both title and description from the vulnerabilities table, but only title is applied to the finding; the fetched description is discarded even though it could enrich findings that lack one.

Suggested fix to use description
                     if let Ok(Some(row)) = sqlx::query("SELECT title, description FROM vulnerabilities WHERE cve_id = $1")
                         .bind(cve_id)
                         .fetch_optional(&state_task.db)
                         .await 
                     {
                         use sqlx::Row;
                         finding.title = row.get("title");
+                        if let Ok(desc) = row.try_get::<Option<String>, _>("description") {
+                            if finding.description.is_none() {
+                                finding.description = desc;
+                            }
+                        }
                     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 185 - 199, ScanEvent::Finding
enrichment currently assigns only finding.title from the DB row even though the
query selects description; update the enrichment block handling the sqlx::query
for vulnerabilities (inside ScanEvent::Finding where cve_id is bound) to also
assign finding.description = row.get("description") (and ensure you keep
finding.title = row.get("title") too), using the same sqlx::Row import to
retrieve both fields so the fetched description is applied to the finding.

99-102: CORS is overly permissive with allow_origin(Any).

Using Any for allowed origins in CORS exposes the API to cross-origin requests from any domain. For a security scanner API, consider restricting to known dashboard origins in production.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 99 - 102, The CORS configuration is
overly permissive: replace the use of allow_origin(Any) on the CorsLayer created
with CorsLayer::new() so production only permits trusted dashboard origins;
update the CorsLayer instantiation (the code using CorsLayer::new(),
.allow_origin(Any), .allow_methods(Any), .allow_headers(Any)) to use a
restricted Origin or Origin list (e.g., a list built from configuration/env)
instead of Any, and ensure the new origin list is used when building the
middleware so only the allowed dashboard origins are accepted while retaining
.allow_methods(Any) and .allow_headers(Any) as needed.
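One way to restrict origins is to build the allowlist from configuration. A minimal sketch of the parsing step, using std only (the comma-separated format and any env var name feeding it are assumptions; tower-http's AllowOrigin::list would consume the parsed values):

```rust
// Parse a comma-separated origin allowlist, e.g. from an
// ALLOWED_ORIGINS environment variable, instead of using Any.
fn parse_allowed_origins(raw: &str) -> Vec<String> {
    raw.split(',')
        .map(|s| s.trim().trim_end_matches('/').to_string())
        .filter(|s| !s.is_empty())
        .collect()
}
```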

270-295: SSE events are sent without an explicit event name.

All ScanEvents are serialized via .json_data() without calling .event() to set the SSE event type. This means all events arrive as the default "message" event type.

The Svelte page handles this correctly by using onmessage and parsing the JSON to check data.type. The VS Code extension also parses the JSON type field from data: lines. Both clients work correctly with this format.

However, for cleaner SSE semantics and easier client-side handling, consider extracting the event type:

Optional improvement
fn scan_event_to_sse(event: &ScanEvent) -> Event {
    let event_type = match event {
        ScanEvent::Progress { .. } => "progress",
        ScanEvent::Finding(_) => "finding",
        ScanEvent::Complete => "complete",
        ScanEvent::Error(_) => "error",
    };
    Event::default()
        .event(event_type)
        .json_data(event)
        .unwrap_or_else(|_| Event::default())
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 270 - 295, The SSEs are being sent
without explicit event names (only Event::json_data is used), so add an event
name based on ScanEvent variants and use it when building the Event; implement a
helper like scan_event_to_sse(&ScanEvent) -> Event that matches on ScanEvent
(e.g., Progress -> "progress", Finding -> "finding", Complete -> "complete",
Error -> "error") and returns
Event::default().event(event_type).json_data(event).unwrap_or_else(|_|
Event::default()), then replace the anonymous closures in both branches (the
cached branch mapping and the BroadcastStream mapping where .json_data(event) is
used) to call this helper so all SSEs include .event(...) as well as
.json_data(...).
extensions/vscode/src/sidebarProvider.ts (3)

46-50: Type the message parameter instead of using any.

The static analysis correctly flags the any type. Define a union type for the expected message shapes to improve type safety.

Suggested fix
+type SidebarMessage = 
+  | { type: 'progress'; data: { percentage: number; message: string } }
+  | { type: 'finding'; data: { severity: string; title: string } }
+  | { type: 'complete' };

-  public postMessage(message: any) {
+  public postMessage(message: SidebarMessage) {
     if (this._view) {
       this._view.webview.postMessage(message);
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/sidebarProvider.ts` around lines 46 - 50, The
postMessage method uses an untyped any for its message parameter; replace it
with a specific union type that models all expected message shapes (e.g., an
interface/union like SidebarMessage = | { type: 'init' } | { type: 'update',
payload: UpdatePayload } | ...), then update the method signature public
postMessage(message: SidebarMessage) and ensure callers produce values
conforming to SidebarMessage; keep the existing null-check on this._view and
continue to call this._view.webview.postMessage(message) so the runtime behavior
is unchanged.

52-52: Prefix unused parameter with underscore.

The webview parameter is unused. Prefix it with _ to satisfy the linter rule requiring unused args to match /^_/u.

Suggested fix
-  private _getHtmlForWebview(webview: vscode.Webview) {
+  private _getHtmlForWebview(_webview: vscode.Webview) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/sidebarProvider.ts` at line 52, The parameter `webview`
in the `_getHtmlForWebview` method is unused and must be prefixed with an
underscore to satisfy the linter; rename the parameter to `_webview` in the
`private _getHtmlForWebview(webview: vscode.Webview)` signature (i.e., `private
_getHtmlForWebview(_webview: vscode.Webview)`) and ensure there are no remaining
references to `webview` inside the method (update any occurrences if present).

53-218: Consider adding a Content Security Policy (CSP) to the webview HTML.

The webview enables scripts (enableScripts: true) but lacks a CSP meta tag. While localResourceRoots restricts resource loading, a CSP adds defense-in-depth against XSS within the webview.

Suggested CSP addition
 			<head>
 				<meta charset="UTF-8">
+				<meta http-equiv="Content-Security-Policy" content="default-src 'none'; style-src 'unsafe-inline'; script-src 'unsafe-inline';">
 				<meta name="viewport" content="width=device-width, initial-scale=1.0">
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/sidebarProvider.ts` around lines 53 - 218, The webview
HTML lacks a Content Security Policy which is risky since scripts are enabled
(enableScripts: true) and the UI uses acquireVsCodeApi() and inline scripts;
generate a cryptographic nonce in the sidebar provider (where the HTML is
produced), pass that nonce into the HTML template, add a meta CSP tag like
default-src 'none'; script-src 'nonce-<nonce-value>'; style-src 'unsafe-inline'
(or 'nonce-<nonce-value>' if you prefer nonces for styles); and ensure the
inline <script> tag and any injected resources use that same nonce so only your
trusted script can run (update the template-returning function that emits the
HTML and the place that calls enableScripts).
extensions/vscode/src/extension.ts (1)

98-98: Replace any types with proper types.

Line 98 uses as any for the ReadableStream reader, and line 143 uses any for the error. These can be properly typed.

Suggested fixes
-    const reader = (body as any).getReader();
+    const reader = body.getReader();

For the catch block, use unknown and narrow:

-  } catch (err: any) {
-    vscode.window.showErrorMessage(`Zenvra Scan Failed: ${err.message}`);
+  } catch (err: unknown) {
+    const message = err instanceof Error ? err.message : String(err);
+    vscode.window.showErrorMessage(`Zenvra Scan Failed: ${message}`);

Also applies to: 143-143

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` at line 98, Replace the ad-hoc any usages
by correctly typing the stream reader and caught errors: when creating the
reader from (body as any).getReader(), cast body to ReadableStream<Uint8Array>
(or appropriate chunk type) and declare reader with the type
ReadableStreamDefaultReader<Uint8Array> (refer to the reader variable assignment
and getReader() call); in the catch block replace error: any with error: unknown
and then narrow it (e.g., check instanceof Error or typeof message checks)
before logging or using error.message (refer to the catch where the error is
handled on line ~143) so all uses are type-safe and no any remains.
crates/scanner/src/lib.rs (1)

94-111: Spawned task errors may be silently lost.

If scan_stream returns an error after sending ScanEvent::Error, the error propagates via the channel. However, if the spawned task panics or if scan_stream returns Err before sending any event, the caller receives no events and exits cleanly with an empty Vec. Consider joining the task or ensuring errors are always sent to the channel.

Additionally, ScanEvent::Progress events are silently discarded in scan(). If this is intentional (since scan is batch mode), the code is correct but a comment would clarify intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/lib.rs` around lines 94 - 111, The scan function spawns
scan_stream into a detached task so panics or early Errs can be lost and
ScanEvent::Progress is ignored without explanation; update scan to keep the
JoinHandle (from tokio::spawn) and await/join it after draining rx so any task
panic or returned Err is observed and converted into an Err from scan (handle
tokio::task::JoinError and propagate underlying scan_stream error), and modify
the spawn block to ensure any early errors are sent into tx (or propagate them
via the JoinHandle) so they aren't dropped; also either handle or add a brief
comment explaining that ScanEvent::Progress is intentionally ignored in scan()
(batch mode) and reference the functions/variants scan, scan_stream,
ScanEvent::Error and ScanEvent::Progress so reviewers can find the changes.
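The join-after-drain pattern can be sketched with std threads and channels (the real code uses tokio::spawn and an async channel; the event variants and names here are illustrative): keep the JoinHandle, drain the receiver, then join so panics and early errors surface to the caller.

```rust
use std::sync::mpsc;
use std::thread;

enum ScanEvent {
    Progress(u8),
    Finding(String),
    Complete,
}

fn scan() -> Result<Vec<String>, String> {
    let (tx, rx) = mpsc::channel();
    // Keep the handle instead of detaching the producer task.
    let handle = thread::spawn(move || -> Result<(), String> {
        tx.send(ScanEvent::Progress(50)).ok();
        tx.send(ScanEvent::Finding("hardcoded-secret".to_string())).ok();
        tx.send(ScanEvent::Complete).ok();
        Ok(())
    });
    let mut findings = Vec::new();
    for event in rx {
        match event {
            ScanEvent::Finding(f) => findings.push(f),
            // Progress is intentionally ignored in batch mode.
            ScanEvent::Progress(_) => {}
            ScanEvent::Complete => break,
        }
    }
    // Joining surfaces panics (Err from join) and producer errors
    // that were returned before any event was sent.
    handle.join().map_err(|_| "scan task panicked".to_string())??;
    Ok(findings)
}
```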
apps/web/src/lib/api.ts (1)

63-67: Consider adding Zod schemas for API response validation.

Per coding guidelines, "use zod for all form and API input validation." The API responses are cast directly via res.json() without runtime validation. If the server returns malformed data, TypeScript won't catch it at runtime.

Example with Zod validation
import { z } from 'zod';

const ScanHistorySchema = z.object({
  id: z.string(),
  language: z.string(),
  target_name: z.string().optional(),
  findings_count: z.number(),
  severity_counts: z.record(z.string(), z.number()),
  created_at: z.string(),
});

export async function getHistory(): Promise<ScanHistory[]> {
  const res = await fetch(`${BASE_URL}/api/v1/history`);
  if (!res.ok) throw new Error('Failed to fetch scan history');
  const data = await res.json();
  return z.array(ScanHistorySchema).parse(data);
}

As per coding guidelines: "Use zod for all form and API input validation".

Also applies to: 72-76, 81-96

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/lib/api.ts` around lines 63 - 67, The getHistory function
returns res.json() without runtime validation; add a Zod schema (e.g.,
ScanHistorySchema using z.object(...) matching the ScanHistory type) and import
z from 'zod', then parse the response with
z.array(ScanHistorySchema).parse(data) before returning so malformed responses
throw a validation error; apply the same pattern to the other API functions in
this file that call res.json() (referenced by their function names such as
getHistory and the other fetch helpers) and include a clear error message when
the fetch fails or parsing throws.
extensions/vscode/package.json (1)

116-120: Consider marking aiApiKey as a secret setting.

The AI API key setting stores sensitive credentials. While VS Code settings are stored locally, consider using "scope": "machine" or documenting that users should prefer zenvra.setApiToken command which uses the secrets API.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/package.json` around lines 116 - 120, The setting
"zenvra.aiApiKey" stores sensitive credentials; update its contribution in
package.json to set "scope": "machine" to avoid per-workspace storage and adjust
the description to advise users to prefer the zenvra.setApiToken command (which
uses VS Code SecretStorage) for storing secrets; locate the "zenvra.aiApiKey"
entry in the contributes.configuration block and add the "scope": "machine"
property and a short note in "description" about using
zenvra.setApiToken/SecretStorage.
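A hedged sketch of what the contribution entry could look like (the description wording is illustrative; the substantive changes are "scope": "machine" and the pointer to SecretStorage):

```json
{
  "zenvra.aiApiKey": {
    "type": "string",
    "default": "",
    "scope": "machine",
    "description": "AI provider API key. Prefer the 'zenvra.setApiToken' command, which stores the token in VS Code SecretStorage."
  }
}
```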

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c7b73b40-eb52-4ef6-87b4-001f77beaf85

📥 Commits

Reviewing files that changed from the base of the PR and between 1813a61 and fe140bf.

⛔ Files ignored due to path filters (3)
  • Cargo.lock is excluded by !**/*.lock
  • apps/web/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
  • extensions/vscode/assets/icon.png is excluded by !**/*.png
📒 Files selected for processing (34)
  • .github/workflows/release.yml
  • .gitignore
  • Cargo.toml
  • ISSUE_DRAFT.md
  • apps/web/Dockerfile
  • apps/web/package.json
  • apps/web/src/lib/api.ts
  • apps/web/src/lib/stores/aiConfig.svelte.ts
  • apps/web/src/routes/+layout.svelte
  • apps/web/src/routes/history/+page.svelte
  • apps/web/src/routes/scan/+page.svelte
  • apps/web/src/routes/settings/ai/+page.svelte
  • apps/web/src/routes/settings/cve/+page.svelte
  • apps/web/svelte.config.js
  • crates/cli/src/main.rs
  • crates/scanner/src/ai/anthropic.rs
  • crates/scanner/src/ai/google.rs
  • crates/scanner/src/ai/mod.rs
  • crates/scanner/src/ai/openai.rs
  • crates/scanner/src/engine.rs
  • crates/scanner/src/finding.rs
  • crates/scanner/src/lib.rs
  • crates/server/Cargo.toml
  • crates/server/Dockerfile
  • crates/server/src/cve_sync/mod.rs
  • crates/server/src/main.rs
  • docker-compose.yml
  • extensions/vscode/package.json
  • extensions/vscode/src/extension.ts
  • extensions/vscode/src/sidebarProvider.ts
  • extensions/vscode/src/types.ts
  • extensions/vscode/zenvra-0.1.1.vsix
  • migrations/202604040001_create_vulnerabilities_table.sql
  • migrations/202604040002_create_scans_table.sql

Comment on lines +1 to +4
# Stage 1: Build
FROM node:20-slim AS builder
WORKDIR /app
RUN npm install -g pnpm

⚠️ Potential issue | 🟠 Major

Security: Container runs as root user.

The static analysis tool correctly flagged that no non-root user is specified. Running as root in production containers is a security risk. Add a non-root user in the runtime stage.

🔒 Proposed fix to add non-root user
 # Stage 2: Runtime
 FROM node:20-slim
 WORKDIR /app
+RUN addgroup --system --gid 1001 nodejs && \
+    adduser --system --uid 1001 --ingroup nodejs nodeuser
 COPY --from=builder /app/build ./build
 COPY --from=builder /app/package.json ./package.json
 COPY --from=builder /app/node_modules ./node_modules
+RUN chown -R nodeuser:nodejs /app
+USER nodeuser
 
 EXPOSE 3000
 ENV NODE_ENV=production
🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/Dockerfile` around lines 1 - 4, The Dockerfile runs as root; add a
non-root runtime user and switch to it in the final stage: create a dedicated
user/group (e.g., app or appuser with a stable UID), chown the app workdir and
any runtime-needed files to that user, and add a USER app (or chosen name)
before the image is finalized. Ensure these steps occur in the final/runtime
stage (not the builder) and reference the existing WORKDIR /app and any
artifacts produced by the builder so the non-root user owns /app at runtime.

Comment on lines +26 to +32
if (browser) {
localStorage.setItem('zenvra_ai_provider', p);
localStorage.setItem('zenvra_ai_model', m);
localStorage.setItem('zenvra_ai_api_key', key);
if (ep) localStorage.setItem('zenvra_ai_endpoint', ep);
else localStorage.removeItem('zenvra_ai_endpoint');
}

⚠️ Potential issue | 🟠 Major

API keys stored in cleartext localStorage.

The static analysis correctly flags that storing API keys in localStorage exposes them to XSS attacks. While this is a deliberate "Bring Your Own Key" design, the Settings UI (line 213 in +page.svelte) claims "Encrypted Local Storage" which is misleading—the keys are stored in plaintext.

Consider either:

  1. Removing the "Encrypted Local Storage" claim from the UI
  2. Actually encrypting the stored values (e.g., using Web Crypto API with a user-derived key)
🧰 Tools
🪛 ast-grep (0.42.0)

[warning] 28-28: Sensitive information detected in localStorage. Storing sensitive data like emails, usernames, or personal information in localStorage exposes it to malicious scripts and XSS attacks. Use secure storage alternatives or avoid storing sensitive data client-side.
Context: localStorage.setItem('zenvra_ai_api_key', key)
Note: [CWE-312] Cleartext Storage of Sensitive Information [REFERENCES]
- https://cwe.mitre.org/data/definitions/312.html
- https://owasp.org/www-community/vulnerabilities/HTML5_Local_Storage_Security

(local-storage-sensitive-information)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/lib/stores/aiConfig.svelte.ts` around lines 26 - 32, The code
stores API keys in cleartext via localStorage.setItem (variables p, m, key, ep
in aiConfig.svelte.ts), which contradicts the Settings UI claim "Encrypted Local
Storage"; either remove that UI claim or implement real encryption: add helper
functions (e.g., encryptValue/decryptValue or setEncryptedItem/getEncryptedItem)
that use the Web Crypto API with a user-derived key (PBKDF2 from a passphrase)
to encrypt values before calling localStorage.setItem and to decrypt on read,
update the write paths that call localStorage.setItem('zenvra_ai_api_key'...)
and localStorage.setItem('zenvra_ai_endpoint'...) to use the encrypted helpers,
and ensure reading paths decrypt and handle missing/legacy plaintext values for
backward compatibility.

Comment on lines 34 to 48
const response = await fetch("http://localhost:8080/api/v1/scan", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
code,
language: "python",
engines: ["sast", "secrets"],
ai_config: hasAiConfig ? {
provider: aiConfig.provider,
api_key: aiConfig.apiKey,
model: aiConfig.model,
endpoint: aiConfig.endpoint || undefined,
} : undefined
})
});

⚠️ Potential issue | 🟠 Major

Hardcoded API URL should use environment variable or api.ts.

The scan page uses hardcoded http://localhost:8080 for the API URL instead of using the BASE_URL from $lib/api.ts. This breaks production deployments and violates the coding guideline that "All fetch calls must go through typed API client functions in apps/web/src/lib/api.ts."

Suggested approach

Consider exporting BASE_URL from api.ts or creating a startScan helper that returns the scan_id, then construct the SSE URL from it:

// In api.ts
export const BASE_URL = (import.meta.env.PUBLIC_API_URL || 'http://localhost:8080').replace(/\/$/, '');

export async function startScan(req: ScanRequest): Promise<string> {
  const res = await fetch(`${BASE_URL}/api/v1/scan`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req)
  });
  if (!res.ok) throw new Error("Failed to start scan");
  const { scan_id } = await res.json();
  return scan_id;
}

As per coding guidelines: "All fetch calls must go through typed API client functions in apps/web/src/lib/api.ts".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/routes/scan/`+page.svelte around lines 34 - 48, The fetch to
"http://localhost:8080/api/v1/scan" is hardcoded; replace it by calling a typed
API client in apps/web/src/lib/api.ts (export BASE_URL and/or add a
startScan(req: ScanRequest): Promise<string> helper that posts the scan request
and returns scan_id). Update the +page.svelte code to call startScan with the
same payload (code, language, engines, ai_config) and use the returned scan_id
to construct any SSE URLs (using exported BASE_URL if needed); ensure startScan
throws on non-ok responses so the page can handle errors.

Comment on lines +17 to +21
# Stage 2: Runtime
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libssl3 ca-certificates curl && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/target/release/zenvra-server /usr/local/bin/zenvra-server

⚠️ Potential issue | 🟠 Major

Runtime container runs as root — add a non-root user.

The runtime stage lacks a USER directive, so the container runs as root. This is a security concern flagged by static analysis (DS-0002). Additionally, apt-get install should use --no-install-recommends to minimize image size and attack surface (DS-0029).

🔒 Proposed fix to run as non-root user and optimize apt-get
 # Stage 2: Runtime
 FROM debian:bookworm-slim
-RUN apt-get update && apt-get install -y libssl3 ca-certificates curl && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y --no-install-recommends libssl3 ca-certificates curl && rm -rf /var/lib/apt/lists/*
+RUN useradd --create-home --shell /bin/bash zenvra
 WORKDIR /app
 COPY --from=builder /app/target/release/zenvra-server /usr/local/bin/zenvra-server
+USER zenvra
 
 EXPOSE 8080
 CMD ["zenvra-server"]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Stage 2: Runtime
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libssl3 ca-certificates curl && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/target/release/zenvra-server /usr/local/bin/zenvra-server
# Stage 2: Runtime
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends libssl3 ca-certificates curl && rm -rf /var/lib/apt/lists/*
RUN useradd --create-home --shell /bin/bash zenvra
WORKDIR /app
COPY --from=builder /app/target/release/zenvra-server /usr/local/bin/zenvra-server
USER zenvra
🧰 Tools
🪛 Trivy (0.69.3)

[error] 19-19: 'apt-get' missing '--no-install-recommends'

'--no-install-recommends' flag is missed: 'apt-get update && apt-get install -y libssl3 ca-certificates curl && rm -rf /var/lib/apt/lists/*'

Rule: DS-0029

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/Dockerfile` around lines 17 - 21, Runtime stage currently runs
as root and uses apt-get without --no-install-recommends; create a non-root
user/group (e.g., zenvra or appuser), chown the application binary
(/usr/local/bin/zenvra-server) and any WORKDIR (/app) to that user, add a USER
directive to switch to that non-root user, and change the apt-get install
invocation in the runtime stage to use --no-install-recommends to reduce
packages; ensure these changes are applied in the Stage 2 block that defines
FROM debian:bookworm-slim and references /usr/local/bin/zenvra-server so the
runtime container no longer runs as root and has a minimal installed footprint.

Comment on lines +60 to +88
async fn sync_nvd(pool: &Pool<Postgres>, client: &Client) -> anyhow::Result<()> {
    let api_key = env::var("NVD_API_KEY").ok();
    if api_key.is_none() {
        info!("NVD_API_KEY not set. Running in rate-limited mode.");
    }

    let params = vec![("resultsPerPage", "100".to_string())];
    let url = reqwest::Url::parse_with_params("https://services.nvd.nist.gov/rest/json/cves/2.0", &params)?;

    info!("Calling NVD API: {}", url);

    let mut request = client
        .get(url)
        .header("User-Agent", "Zenvra-Scanner/0.1.0");

    if let Some(key) = api_key {
        request = request.header("apiKey", key);
    }

    let response: reqwest::Response = request.send().await?;

    if !response.status().is_success() {
        let status = response.status();
        let body = response.text().await.unwrap_or_else(|_| "Empty body".to_string());
        error!("NVD API error (Status: {}): {}", status, body);
        anyhow::bail!("NVD API returned error status: {}", status);
    }

    let nvd_data = response.json::<NvdResponse>().await?;

⚠️ Potential issue | 🟠 Major

NVD sync only fetches first 100 CVEs — pagination missing.

The NVD API contains 200k+ vulnerabilities, but this implementation only fetches the first page (resultsPerPage=100). The NVD API returns totalResults and supports startIndex for pagination.

🔧 Suggested approach for pagination
async fn sync_nvd(pool: &Pool<Postgres>, client: &Client) -> anyhow::Result<()> {
    let api_key = env::var("NVD_API_KEY").ok();
    let mut start_index = 0;
    let page_size = 2000; // NVD max is 2000
    
    loop {
        let params = vec![
            ("resultsPerPage", page_size.to_string()),
            ("startIndex", start_index.to_string()),
        ];
        let url = reqwest::Url::parse_with_params(
            "https://services.nvd.nist.gov/rest/json/cves/2.0", 
            &params
        )?;
        
        // ... fetch and process ...
        
        let total_results = nvd_data.total_results; // Add to NvdResponse struct
        start_index += page_size;
        if start_index >= total_results {
            break;
        }
        
        // Respect NVD rate limits (5 req/30s without a key, 50 req/30s with one)
        tokio::time::sleep(std::time::Duration::from_secs(if api_key.is_some() { 1 } else { 10 })).await;
    }
    Ok(())
}
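As a sanity check on the loop-termination condition in the sketch above, the start-index advance can be isolated into a pure helper (hypothetical function, not part of the codebase — shown only to make the off-by-one behavior easy to verify):

```rust
/// Returns the next startIndex, or None when all results have been fetched.
/// Mirrors the `start_index += page_size; if start_index >= total_results { break; }`
/// logic in the pagination sketch above.
fn next_start_index(start_index: u64, page_size: u64, total_results: u64) -> Option<u64> {
    let next = start_index + page_size;
    if next >= total_results { None } else { Some(next) }
}

fn main() {
    // 250 total results at 100 per page -> pages begin at 0, 100, 200, then stop.
    assert_eq!(next_start_index(0, 100, 250), Some(100));
    assert_eq!(next_start_index(100, 100, 250), Some(200));
    assert_eq!(next_start_index(200, 100, 250), None);
    // An exact multiple must not trigger an extra empty request.
    assert_eq!(next_start_index(100, 100, 200), None);
}
```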
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/cve_sync/mod.rs` around lines 60 - 88, sync_nvd currently
only fetches the first 100 results; change it to paginate using startIndex and
resultsPerPage in a loop: add a mutable start_index (0) and choose page_size (up
to 2000), build the URL with both ("resultsPerPage", page_size) and
("startIndex", start_index), send the request, parse into NvdResponse (extend
the NvdResponse struct to include totalResults/total_results), process each
page, then increment start_index by page_size and break once start_index >=
totalResults; also add a delay between requests based on whether NVD_API_KEY is
set to respect rate limits. Ensure you still include the existing User-Agent and
apiKey header logic and keep the same error handling for non-success responses.

Comment on lines +12 to +17
export interface ScanRequest {
  code: string;
  language: string;
  engines?: string[];
  aiConfig?: AiConfig;
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check ScanRequest struct definition in server
ast-grep --pattern 'struct ScanRequest {
  $$$
}'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 392


🏁 Script executed:

#!/bin/bash
# Read the context around the ScanRequest struct to see attributes
sed -n '35,50p' crates/server/src/main.rs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 496


🏁 Script executed:

#!/bin/bash
# Find AiConfig struct definition and its fields
ast-grep --pattern 'struct AiConfig {
  $$$
}'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 59


🏁 Script executed:

#!/bin/bash
# Search for serde attributes on ScanRequest or ai_config field
rg -A 5 'struct ScanRequest' crates/server/src/main.rs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 218


🏁 Script executed:

#!/bin/bash
# Find AiConfig struct in the scanner crate
find . -name "*.rs" -type f | xargs rg -l "struct AiConfig"

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 107


🏁 Script executed:

#!/bin/bash
# Get AiConfig struct definition and fields
rg -A 10 "struct AiConfig" crates/scanner/src/ai/

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 736


🏁 Script executed:

#!/bin/bash
# Get full AiConfig struct definition with derives
sed -n '1,50p' crates/scanner/src/ai/mod.rs | tail -30

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 994


🏁 Script executed:

#!/bin/bash
# Search for pub struct AiConfig with more context to see derives
rg -B 3 "pub struct AiConfig" crates/scanner/src/ai/mod.rs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 287


🏁 Script executed:

#!/bin/bash
# Also verify TypeScript interface - check if aiConfig field exists in types.ts
rg "aiConfig" extensions/vscode/src/types.ts

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 99


Critical: Field name mismatch causes AI config to be silently ignored.

The VS Code extension sends aiConfig and apiKey (camelCase), but the server's ScanRequest and AiConfig structs use ai_config and api_key (snake_case) without #[serde(rename_all = "camelCase")]. During deserialization, these fields will not match and Serde will silently ignore them, disabling AI-powered scanning.

Either rename the TypeScript fields to snake_case, or add #[serde(rename_all = "camelCase")] to the server structs.

🐛 Option 1: Fix TypeScript to use snake_case
 export interface AiConfig {
   provider: string;
-  apiKey: string;
+  api_key: string;
   model: string;
   endpoint?: string;
 }
 
 export interface ScanRequest {
   code: string;
   language: string;
   engines?: string[];
-  aiConfig?: AiConfig;
+  ai_config?: AiConfig;
 }
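Option 2, mentioned above, keeps the TypeScript fields camelCase and adds the rename attribute on the server structs instead. A minimal sketch, assuming the field names cited in this review (the exact struct bodies and derives in the current code may differ):

```rust
#[derive(serde::Deserialize)]
#[serde(rename_all = "camelCase")] // maps aiConfig -> ai_config
pub struct ScanRequest {
    pub code: String,
    pub language: String,
    pub engines: Option<Vec<String>>,
    pub ai_config: Option<AiConfig>,
}

#[derive(serde::Deserialize)]
#[serde(rename_all = "camelCase")] // maps apiKey -> api_key
pub struct AiConfig {
    pub provider: String,
    pub api_key: String,
    pub model: String,
    pub endpoint: Option<String>,
}
```

Whichever side is changed, it should be changed consistently for both structs, since `AiConfig` is nested inside `ScanRequest` and the rename attribute does not propagate to nested types.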
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/types.ts` around lines 12 - 17, The server-side structs
ScanRequest and AiConfig are using snake_case fields (ai_config, api_key) while
the client sends camelCase (aiConfig, apiKey), so add #[serde(rename_all =
"camelCase")] to the ScanRequest and AiConfig struct definitions (or
alternatively rename the TypeScript fields to snake_case) to ensure Serde
correctly maps aiConfig -> ai_config and apiKey -> api_key and prevent the AI
config from being ignored.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (1)
crates/server/src/cve_sync/mod.rs (1)

66-88: ⚠️ Potential issue | 🟠 Major

This still stops after a single NVD page.

resultsPerPage is fixed at 100 and the job never advances through additional pages, so a successful sync only ingests that first slice of CVEs. That leaves the table materially incomplete and misses most lookup hits until pagination is implemented.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/cve_sync/mod.rs` around lines 66 - 88, The code only
fetches a single page because params is fixed to resultsPerPage=100 and there is
no pagination loop; modify the logic around
params/url/request/response/NvdResponse to implement pagination: add a
startIndex (or equivalent offset) param and wrap the request/send/json steps in
a loop that updates startIndex each iteration, accumulating CVE items from each
NvdResponse; stop when you've fetched all results (use totalResults from
NvdResponse or when returned items length is zero or less than resultsPerPage)
and handle rate limits between requests as needed.
🧹 Nitpick comments (4)
apps/web/src/routes/scan/+page.svelte (4)

83-87: SSE error handler should update scanStatus for user feedback.

When the SSE connection fails, the user only sees the scan stop without explanation. Setting scanStatus to an error message would improve UX consistency with the case 'error' branch.

♻️ Proposed enhancement
       eventSource.onerror = () => {
         console.error("SSE connection failed");
+        scanStatus = "Connection lost — please retry";
         isScanning = false;
         eventSource.close();
       };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/web/src/routes/scan/+page.svelte` around lines 83 - 87, The SSE onerror
handler currently only logs, sets isScanning = false and closes eventSource;
update it to also set scanStatus to an error message (similar to the existing
case 'error' branch) so users see a descriptive failure state; modify the
eventSource.onerror callback to assign scanStatus = "SSE connection failed" (or
reuse the same error text used in the 'error' case) before setting isScanning =
false and closing eventSource.

36-56: Fetch calls should go through typed API client in $lib/api.ts.

The direct fetch() call and EventSource URL construction bypass the typed API client. While the URL is now correctly sourced from aiConfig.apiBaseUrl (addressing the hardcoded URL concern), the coding guideline requires all fetch calls to go through typed API client functions.

Consider adding startScan and exporting BASE_URL from api.ts:

♻️ Suggested api.ts additions
// In apps/web/src/lib/api.ts
import { env } from '$env/dynamic/public';
export const BASE_URL = (env.PUBLIC_API_URL || 'http://localhost:8080').replace(/\/$/, '');

export interface ScanRequest {
  code: string;
  language: string;
  engines: string[];
  ai_config?: {
    provider: string;
    api_key: string;
    model: string;
    endpoint?: string;
  };
}

export async function startScan(req: ScanRequest): Promise<string> {
  const res = await fetch(`${BASE_URL}/api/v1/scan`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req)
  });
  if (!res.ok) throw new Error("Failed to start scan");
  const { scan_id } = await res.json();
  return scan_id;
}

export function getScanEventsUrl(scanId: string): string {
  return `${BASE_URL}/api/v1/scan/${scanId}/events`;
}

As per coding guidelines: "All fetch calls must go through typed API client functions in apps/web/src/lib/api.ts".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/web/src/routes/scan/+page.svelte` around lines 36 - 56, Replace the
direct fetch and EventSource usage in +page.svelte with the typed API client
functions: add/export BASE_URL, the ScanRequest type, a startScan(req:
ScanRequest): Promise<string> that POSTs to `${BASE_URL}/api/v1/scan` and
returns scan_id, and a getScanEventsUrl(scanId: string): string that builds
`${BASE_URL}/api/v1/scan/${scanId}/events`; then in +page.svelte call
startScan(...) to obtain scan_id and use getScanEventsUrl(scan_id) to create the
EventSource, removing the inline fetch and URL construction and ensuring the
ai_config shape matches the ScanRequest type.

58-81: SSE event data should be validated with zod.

The JSON.parse(event.data) result is used directly without type validation. Per coding guidelines, API input should be validated with zod to ensure type safety and catch malformed server responses gracefully.

♻️ Suggested validation schema
// In api.ts or a shared schema file
import { z } from 'zod';

const ScanEventSchema = z.discriminatedUnion('type', [
  z.object({ type: z.literal('progress'), data: z.object({ percentage: z.number(), message: z.string() }) }),
  z.object({ type: z.literal('finding'), data: FindingSchema }), // assuming FindingSchema exists
  z.object({ type: z.literal('complete'), data: z.unknown() }),
  z.object({ type: z.literal('error'), data: z.string() }),
]);

// Usage in onmessage handler:
const parsed = ScanEventSchema.safeParse(JSON.parse(event.data));
if (!parsed.success) {
  console.error("Invalid event data", parsed.error);
  return;
}
const data = parsed.data;

As per coding guidelines: "Use zod for all form and API input validation".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/web/src/routes/scan/+page.svelte` around lines 58 - 81, Validate SSE
payloads before use: create a zod discriminated union (e.g., ScanEventSchema)
that covers the four event types (progress with
{percentage:number,message:string}, finding with data matching FindingSchema,
complete (unknown), and error (string)), export it from a shared schema file
(api.ts) and in the eventSource.onmessage handler call
ScanEventSchema.safeParse(JSON.parse(event.data)); if parsing fails, log the
parsed.error and return early, otherwise use parsed.data in the existing switch
logic (referencing eventSource.onmessage and FindingSchema to locate where to
wire the schema).

107-298: Component exceeds 200-line limit.

At ~300 lines, this file exceeds the coding guideline maximum of 200 lines. Consider extracting the progress panel (lines 183-217) and finding card (lines 227-291) into separate components.

As per coding guidelines: "Component files max 200 lines — split into smaller components if larger".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/web/src/routes/scan/+page.svelte` around lines 107 - 298, This file is
over the 200-line limit; extract the live progress panel (the block using
scanProgress and scanStatus with the circular and linear progress visuals) into
a new ProgressPanel component and extract the finding card (the repeated
`{#each findings as finding ...}` block that references finding, getSeverityColor, and
displays description/explanation/vulnerable_code/fixed_code) into a new
FindingCard component; update +page.svelte to import and render <ProgressPanel
{scanProgress} {scanStatus} {isScanning}/> where the original progress block was
and replace the finding card markup with <FindingCard {finding}
getSeverityColor={getSeverityColor}/> (or pass only needed props), ensure
FindingCard accepts props for finding and uses getSeverityColor or a mapped
color prop, move any internal logic/conditionals (e.g., description !==
explanation) into the new component, and keep all styling/classes intact so
behavior and appearance remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/ci.yml:
- Around line 156-173: The CI workflow's Docker steps "Build and push API
(latest)" and "Build and push Web (latest)" push :latest tags that conflict with
the release.yml docker-publish job; update these steps to avoid the race by
either (A) changing the tags from ghcr.io/${{ github.repository }}-api:latest
and -web:latest to a CI-specific tag like :main or :edge, or (B) adding a
condition to these jobs to skip when a Git tag event is present (e.g., check
github.ref for refs/tags), or (C) remove the push of the :latest tag here and
keep it only in release.yml—apply the chosen change to both the API and Web
docker/build-push-action steps so tags do not collide.

In `apps/web/src/routes/scan/+page.svelte`:
- Around line 2-3: Remove the unused imports causing the linter failure: delete
onMount and scan from the import statements (leave type Finding if it’s still
referenced), i.e., update the Svelte module import so it only imports symbols
actually used in the component (remove the unused onMount from "svelte" import
and remove scan from "$lib/api" import) to satisfy the linter and fix CI.

In `apps/web/src/routes/settings/cve/+page.svelte`:
- Around line 36-53: The UI is showing a hardcoded "Connected" badge, non-zero
"Synced CVEs" and progress bar even when no feed health data is loaded; update
the component to conditionally render real feed health metrics (or a clear
placeholder) instead of hardcoded values: read the NVD feed state provided by
the page load/store (e.g., a feedHealth or nvdStatus object) and only render the
"Connected" span, the synced-count span (currently showing "100+"), and the
progress bar div when valid data exists; otherwise show a neutral state such as
"Not connected" / "No data" or a "Placeholder" label and hide zero/positive
metrics. Apply the same conditional rendering pattern for the other card
instances (the block around the "Connected" span and the synced metrics/progress
bar, also present in the 58-72 section).

In `@crates/server/src/cve_sync/mod.rs`:
- Around line 130-167: The sync_osv function currently inserts placeholder rows
(sample_id like "OSV-...-SAMPLE-001") into the vulnerabilities table when an
ecosystem is empty, which prevents real OSV data from ever backfilling and
pollutes downstream logic; remove that behavior and instead only perform a
connectivity/status check or mark any non-production rows clearly so they don't
block real data. Specifically, in sync_osv replace the block that constructs and
INSERTs sample_id/OSV-*-SAMPLE-001 and the COUNT(*) guard with either (a) no
INSERT at all and just log the ecosystem connectivity status, or (b) if you must
keep example data for demos, write them to a separate demo table or set a
deterministic flag/column (e.g., is_demo=true or data_source="osv-demo") so they
are distinguishable and will not prevent real OSV syncs from
replacing/backfilling real advisories; update code references around sync_osv,
sample_id, and the SQL INSERT/COUNT logic accordingly.
- Around line 48-173: Add a #[cfg(test)] mod at the bottom that provides async
tests for sync_nvd, sync_osv and the parsing fallbacks: write tokio::test cases
that spin up a lightweight HTTP mock server (e.g., httpmock/httptest) to serve
NVD JSON with missing descriptions and missing cvss_v31 to verify the
description default and severity fallback logic in sync_nvd, and serve OSV-like
responses or call sync_osv to validate the initial-insert path; use a temporary
test database (in-memory or a test Postgres instance via sqlx) to assert rows
inserted/updated by sync_nvd/sync_osv (check cve_id, description, severity,
data_source, ecosystem), and include setup/teardown so tests are hermetic; place
the module at the file bottom inside #[cfg(test)] and reference sync_nvd,
sync_osv, and sync_all in the tests.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7f68bfa3-04d8-490b-973c-a57ad78aea1c

📥 Commits

Reviewing files that changed from the base of the PR and between fe140bf and 9f9ce16.

📒 Files selected for processing (9)
  • .github/workflows/ci.yml
  • apps/web/src/lib/stores/aiConfig.svelte.ts
  • apps/web/src/routes/scan/+page.svelte
  • apps/web/src/routes/settings/cve/+page.svelte
  • crates/scanner/src/finding.rs
  • crates/scanner/src/lib.rs
  • crates/server/src/cve_sync/mod.rs
  • crates/server/src/main.rs
  • migrations/202604060001_add_osv_fields.sql
✅ Files skipped from review due to trivial changes (1)
  • migrations/202604060001_add_osv_fields.sql
🚧 Files skipped from review as they are similar to previous changes (3)
  • crates/scanner/src/finding.rs
  • crates/scanner/src/lib.rs
  • crates/server/src/main.rs

Comment on lines +156 to +173
      - name: Build and push API (latest)
        uses: docker/build-push-action@v5
        with:
          context: .
          file: crates/server/Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}-api:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Build and push Web (latest)
        uses: docker/build-push-action@v5
        with:
          context: ./apps/web
          push: true
          tags: ghcr.io/${{ github.repository }}-web:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

⚠️ Potential issue | 🟡 Minor

Potential :latest tag conflict with release.yml.

Both this deploy job and the docker-publish job in release.yml push :latest tags to the same image names (ghcr.io/${{ github.repository }}-api:latest and -web:latest). When a release tag is pushed to main, both workflows may run concurrently, causing race conditions on the :latest tag.

Consider one of these approaches:

  1. Use a different tag here (e.g., :main or :edge) to distinguish CI builds from releases
  2. Add a condition to skip this job when a version tag is present
  3. Remove the :latest tag push from one of the workflows
Option 1: Use distinct tag for CI builds
      - name: Build and push API (latest)
        uses: docker/build-push-action@v5
        with:
          context: .
          file: crates/server/Dockerfile
          push: true
-          tags: ghcr.io/${{ github.repository }}-api:latest
+          tags: ghcr.io/${{ github.repository }}-api:edge
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Build and push Web (latest)
        uses: docker/build-push-action@v5
        with:
          context: ./apps/web
          push: true
-          tags: ghcr.io/${{ github.repository }}-web:latest
+          tags: ghcr.io/${{ github.repository }}-web:edge
          cache-from: type=gha
          cache-to: type=gha,mode=max
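Option 2 from the list above can be sketched as a step-level condition instead of a tag change (hedged: the surrounding job layout in ci.yml may differ, and the same `if:` would need to be added to both build steps):

```yaml
      - name: Build and push API (latest)
        if: ${{ !startsWith(github.ref, 'refs/tags/') }}
        uses: docker/build-push-action@v5
        with:
          context: .
          file: crates/server/Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}-api:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

With this guard, pushes triggered by version tags skip the CI image publish entirely and leave :latest to release.yml.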
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 156 - 173, The CI workflow's Docker
steps "Build and push API (latest)" and "Build and push Web (latest)" push
:latest tags that conflict with the release.yml docker-publish job; update these
steps to avoid the race by either (A) changing the tags from ghcr.io/${{
github.repository }}-api:latest and -web:latest to a CI-specific tag like :main
or :edge, or (B) adding a condition to these jobs to skip when a Git tag event
is present (e.g., check github.ref for refs/tags), or (C) remove the push of the
:latest tag here and keep it only in release.yml—apply the chosen change to both
the API and Web docker/build-push-action steps so tags do not collide.

Comment on lines +36 to +53
<div class="glass p-8 rounded-3xl border-zinc-800 relative overflow-hidden group">
  <div class="flex items-start justify-between mb-6">
    <div class="w-12 h-12 rounded-2xl bg-brand-primary/10 flex items-center justify-center text-brand-primary">
      <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M12 22s8-4 8-10V5l-8-3-8 3v7c0 6 8 10 8 10"/></svg>
    </div>
    <span class="text-[10px] font-bold bg-emerald-500/10 text-emerald-500 px-2 py-1 rounded tracking-widest uppercase">Connected</span>
  </div>
  <h3 class="text-xl font-bold mb-2">NVD Data Feed</h3>
  <p class="text-sm text-zinc-500 leading-relaxed max-w-[240px]">Synchronizing with the National Vulnerability Database via API v2.</p>

  <div class="mt-8 space-y-3">
    <div class="flex justify-between text-xs text-zinc-500">
      <span>Synced CVEs</span>
      <span class="text-zinc-300 font-bold tracking-widest">100+</span>
    </div>
    <div class="w-full bg-zinc-900 h-1.5 rounded-full overflow-hidden">
      <div class="bg-brand-primary w-2/5 h-full rounded-full"></div>
    </div>

⚠️ Potential issue | 🟠 Major

Don't present placeholder feed health as real status.

These cards always render "Connected" plus non-zero coverage even though this page never loads any actual feed health or sync metrics. On a security settings page, that creates false confidence after failed or never-run syncs; either back them with real API data or clearly label or hide them as placeholders for now.

Also applies to: 58-72

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/web/src/routes/settings/cve/+page.svelte` around lines 36 - 53, The UI
is showing a hardcoded "Connected" badge, non-zero "Synced CVEs" and progress
bar even when no feed health data is loaded; update the component to
conditionally render real feed health metrics (or a clear placeholder) instead
of hardcoded values: read the NVD feed state provided by the page load/store
(e.g., a feedHealth or nvdStatus object) and only render the "Connected" span,
the synced-count span (currently showing "100+"), and the progress bar div when
valid data exists; otherwise show a neutral state such as "Not connected" / "No
data" or a "Placeholder" label and hide zero/positive metrics. Apply the same
conditional rendering pattern for the other card instances (the block around the
"Connected" span and the synced metrics/progress bar, also present in the 58-72
section).

Comment on lines +48 to +173
/// Sync all vulnerability data sources.
pub async fn sync_all(pool: &Pool<Postgres>) -> anyhow::Result<()> {
    info!("Starting full CVE synchronization...");

    let client = Client::new();
    sync_nvd(pool, &client).await?;
    sync_osv(pool, &client).await?;

    info!("CVE synchronization completed successfully.");
    Ok(())
}

async fn sync_nvd(pool: &Pool<Postgres>, client: &Client) -> anyhow::Result<()> {
    let api_key = env::var("NVD_API_KEY").ok();
    if api_key.is_none() {
        info!("NVD_API_KEY not set. Running in rate-limited mode.");
    }

    let params = vec![("resultsPerPage", "100".to_string())];
    let url = reqwest::Url::parse_with_params("https://services.nvd.nist.gov/rest/json/cves/2.0", &params)?;

    info!("Calling NVD API: {}", url);

    let mut request = client
        .get(url)
        .header("User-Agent", "Zenvra-Scanner/0.1.0");

    if let Some(key) = api_key {
        request = request.header("apiKey", key);
    }

    let response: reqwest::Response = request.send().await?;

    if !response.status().is_success() {
        let status = response.status();
        let body = response.text().await.unwrap_or_else(|_| "Empty body".to_string());
        error!("NVD API error (Status: {}): {}", status, body);
        anyhow::bail!("NVD API returned error status: {}", status);
    }

    let nvd_data = response.json::<NvdResponse>().await?;

    for item in nvd_data.vulnerabilities {
        let cve = item.cve;
        let id = cve.id;
        let description = cve
            .descriptions
            .first()
            .map(|d| d.value.clone())
            .unwrap_or_default();
        let severity = cve
            .metrics
            .and_then(|m| m.cvss_v31)
            .and_then(|v: Vec<CvssMetricV31>| {
                v.first()
                    .map(|c| c.cvss_data.base_severity.to_lowercase())
            })
            .unwrap_or_else(|| "medium".to_string());

        sqlx::query(
            r#"
            INSERT INTO vulnerabilities (cve_id, title, description, severity, data_source)
            VALUES ($1, $2, $3, $4, $5)
            ON CONFLICT (cve_id) DO UPDATE SET
                description = EXCLUDED.description,
                severity = EXCLUDED.severity,
                updated_at = CURRENT_TIMESTAMP
            "#
        )
        .bind(&id)
        .bind(format!("Vulnerability {}", id))
        .bind(&description)
        .bind(&severity)
        .bind("nvd")
        .execute(pool)
        .await?;
    }

    info!("NVD sync completed.");
    Ok(())
}

async fn sync_osv(pool: &Pool<Postgres>, _client: &Client) -> anyhow::Result<()> {
    info!("Starting OSV synchronization for popular ecosystems...");

    let ecosystems = vec!["npm", "PyPI", "Go", "crates.io"];

    for ecosystem in ecosystems {
        info!("Fetching recent vulnerabilities for ecosystem: {}", ecosystem);

        // In a real implementation, we would fetch the list of affected packages or use the GS storage.
        // For this MVP, we fetch a few well-known recent vulnerability reports to demonstrate the platform's capability.
        // We simulate this by querying the OSV API with a common vulnerable package example if we had one.
        // Instead, we will implement a basic "Status: Online" for now by just checking connectivity,
        // and inserting a few sample records if the DB is empty for that ecosystem.

        let count: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM vulnerabilities WHERE data_source = 'osv' AND ecosystem = $1")
            .bind(ecosystem)
            .fetch_one(pool)
            .await?;

        if count.0 == 0 {
            info!("Populating initial OSV data for {}", ecosystem);
            let sample_id = format!("OSV-{}-SAMPLE-001", ecosystem.to_uppercase());
            sqlx::query(
                r#"
                INSERT INTO vulnerabilities (cve_id, title, description, severity, data_source, ecosystem, package_name)
                VALUES ($1, $2, $3, $4, $5, $6, $7)
                ON CONFLICT (cve_id) DO NOTHING
                "#
            )
            .bind(&sample_id)
            .bind(format!("Sample Vulnerability in {}", ecosystem))
            .bind(format!("Automatically monitored advisory for {} packages. More details will be fetched during deep scans.", ecosystem))
            .bind("medium")
            .bind("osv")
            .bind(ecosystem)
            .bind("sample-package")
            .execute(pool)
            .await?;
        }
    }

    info!("OSV synchronization completed.");
    Ok(())
}

🛠️ Refactor suggestion | 🟠 Major

Please add a #[cfg(test)] module for this sync logic.

This file now owns non-trivial parsing and persistence behavior but has no regression coverage. At minimum, cover the description/severity fallbacks and the NVD/OSV insert paths so upstream payload or schema changes fail fast.

As per coding guidelines, "crates/**/*.rs: Write tests in #[cfg(test)] modules at the bottom of each file".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/cve_sync/mod.rs` around lines 48 - 173, Add a #[cfg(test)]
mod at the bottom that provides async tests for sync_nvd, sync_osv and the
parsing fallbacks: write tokio::test cases that spin up a lightweight HTTP mock
server (e.g., httpmock/httptest) to serve NVD JSON with missing descriptions and
missing cvss_v31 to verify the description default and severity fallback logic
in sync_nvd, and serve OSV-like responses or call sync_osv to validate the
initial-insert path; use a temporary test database (in-memory or a test Postgres
instance via sqlx) to assert rows inserted/updated by sync_nvd/sync_osv (check
cve_id, description, severity, data_source, ecosystem), and include
setup/teardown so tests are hermetic; place the module at the file bottom inside
#[cfg(test)] and reference sync_nvd, sync_osv, and sync_all in the tests.
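As a starting point before wiring up HTTP mocks and a test database, the fallback logic can be pulled out of `sync_nvd` into pure helpers and covered directly. The snippet below is a minimal sketch under that assumption; `description_or_default` and `severity_or_default` are hypothetical extractions, not functions that exist in the module today.

```rust
// Sketch: extract sync_nvd's fallback logic into pure helpers so a
// #[cfg(test)] module can cover it without a database or network.

// Mirrors: cve.descriptions.first().map(|d| d.value.clone()).unwrap_or_default()
fn description_or_default(descriptions: &[String]) -> String {
    descriptions.first().cloned().unwrap_or_default()
}

// Mirrors the cvss_v31 path: first metric's base severity, lowercased,
// defaulting to "medium" when metrics are absent.
fn severity_or_default(base_severities: Option<&[String]>) -> String {
    base_severities
        .and_then(|v| v.first().map(|s| s.to_lowercase()))
        .unwrap_or_else(|| "medium".to_string())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn missing_description_falls_back_to_empty() {
        assert_eq!(description_or_default(&[]), "");
    }

    #[test]
    fn missing_metrics_fall_back_to_medium() {
        assert_eq!(severity_or_default(None), "medium");
        let v = vec!["CRITICAL".to_string()];
        assert_eq!(severity_or_default(Some(&v)), "critical");
    }
}

fn main() {
    // Smoke check mirroring the tests above.
    assert_eq!(severity_or_default(None), "medium");
    println!("fallbacks behave as expected");
}
```

The insert paths themselves would still need the mocked-HTTP plus test-Postgres setup the prompt describes.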

Comment on lines +130 to +167
async fn sync_osv(pool: &Pool<Postgres>, _client: &Client) -> anyhow::Result<()> {
    info!("Starting OSV synchronization for popular ecosystems...");

    let ecosystems = vec!["npm", "PyPI", "Go", "crates.io"];

    for ecosystem in ecosystems {
        info!("Fetching recent vulnerabilities for ecosystem: {}", ecosystem);

        // In a real implementation, we would fetch the list of affected packages or use the GS storage.
        // For this MVP, we fetch a few well-known recent vulnerability reports to demonstrate the platform's capability.
        // We simulate this by querying the OSV API with a common vulnerable package example if we had one.
        // Instead, we will implement a basic "Status: Online" for now by just checking connectivity,
        // and inserting a few sample records if the DB is empty for that ecosystem.

        let count: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM vulnerabilities WHERE data_source = 'osv' AND ecosystem = $1")
            .bind(ecosystem)
            .fetch_one(pool)
            .await?;

        if count.0 == 0 {
            info!("Populating initial OSV data for {}", ecosystem);
            let sample_id = format!("OSV-{}-SAMPLE-001", ecosystem.to_uppercase());
            sqlx::query(
                r#"
                INSERT INTO vulnerabilities (cve_id, title, description, severity, data_source, ecosystem, package_name)
                VALUES ($1, $2, $3, $4, $5, $6, $7)
                ON CONFLICT (cve_id) DO NOTHING
                "#
            )
            .bind(&sample_id)
            .bind(format!("Sample Vulnerability in {}", ecosystem))
            .bind(format!("Automatically monitored advisory for {} packages. More details will be fetched during deep scans.", ecosystem))
            .bind("medium")
            .bind("osv")
            .bind(ecosystem)
            .bind("sample-package")
            .execute(pool)
            .await?;

⚠️ Potential issue | 🟠 Major

Don't store placeholder rows as synced OSV data.

This function never calls OSV; when an ecosystem is empty it inserts OSV-*-SAMPLE-001 rows into vulnerabilities instead. Those records are indistinguishable from real advisories to downstream lookup/history code, and the count == 0 guard means a later real sync will never backfill that ecosystem once the sample row exists.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/cve_sync/mod.rs` around lines 130 - 167, The sync_osv
function currently inserts placeholder rows (sample_id like
"OSV-...-SAMPLE-001") into the vulnerabilities table when an ecosystem is empty,
which prevents real OSV data from ever backfilling and pollutes downstream
logic; remove that behavior and instead only perform a connectivity/status check
or mark any non-production rows clearly so they don't block real data.
Specifically, in sync_osv replace the block that constructs and INSERTs
sample_id/OSV-*-SAMPLE-001 and the COUNT(*) guard with either (a) no INSERT at
all and just log the ecosystem connectivity status, or (b) if you must keep
example data for demos, write them to a separate demo table or set a
deterministic flag/column (e.g., is_demo=true or data_source="osv-demo") so they
are distinguishable and will not prevent real OSV syncs from
replacing/backfilling real advisories; update code references around sync_osv,
sample_id, and the SQL INSERT/COUNT logic accordingly.
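Option (b) from the comment can be sketched as follows: keep demo rows but tag them with a distinct `data_source` so the emptiness check counts only real advisories and a later real sync can still backfill. The `VulnRow` shape here is a hypothetical stand-in for the `vulnerabilities` table columns, not the project's actual types.

```rust
// Hypothetical row shape standing in for the vulnerabilities table.
#[derive(Debug)]
struct VulnRow {
    cve_id: String,
    data_source: String,
}

// Demo rows get a distinct data_source so they are never mistaken for
// real OSV advisories by downstream lookup/history code.
const DEMO_SOURCE: &str = "osv-demo";

fn demo_row(ecosystem: &str) -> VulnRow {
    VulnRow {
        cve_id: format!("OSV-{}-SAMPLE-001", ecosystem.to_uppercase()),
        data_source: DEMO_SOURCE.to_string(),
    }
}

// The backfill guard must count only genuine advisories, so demo rows
// cannot permanently block a real sync for that ecosystem.
fn has_real_osv_data(rows: &[VulnRow]) -> bool {
    rows.iter().any(|r| r.data_source == "osv")
}

fn main() {
    let rows = vec![demo_row("npm")];
    // Demo data alone must not block a real backfill.
    assert!(!has_real_osv_data(&rows));
    println!("demo rows do not block backfill");
}
```

In SQL terms this corresponds to `WHERE data_source = 'osv'` (excluding `'osv-demo'`) in the `COUNT(*)` guard.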

@chojuninengu chojuninengu self-assigned this Apr 6, 2026
@chojuninengu chojuninengu added enhancement New feature or request good first issue Good for newcomers labels Apr 6, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/cli/src/main.rs (1)

46-48: ⚠️ Potential issue | 🟡 Minor

Inconsistent default model name in help text vs. code.

The --ai_model help text on line 46 references claude-sonnet-4-20250514, but the actual default for Anthropic on line 353 is claude-3-5-sonnet-20240620. Update one to match the other for consistency.

Also applies to: 352-354

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/cli/src/main.rs` around lines 46 - 48, The help text for the CLI
option ai_model (the ai_model field annotated with #[arg(long)]) is inconsistent
with the actual Anthropic default model used elsewhere (the default string set
for Anthropic models, e.g., "claude-3-5-sonnet-20240620"); update either the
ai_model help text to mention the real default ("claude-3-5-sonnet-20240620") or
change the Anthropic default constant/value to match the help text
("claude-sonnet-4-20250514") so both the #[arg(long)] ai_model description and
the Anthropic default value reference the same model name. Ensure the same
change is applied wherever the Anthropic default model string is defined/used
(the default assignment for Anthropic model).
🧹 Nitpick comments (5)
extensions/vscode/src/sidebarProvider.ts (1)

46-50: Consider typing the message parameter for better type safety.

The postMessage method accepts unknown, but based on the extension's usage, the messages have specific shapes ({ type: 'progress' | 'finding' | 'complete', data?: unknown }).

💡 Optional: Add message type
+type SidebarMessage =
+  | { type: 'progress'; data: { percentage: number; message: string } }
+  | { type: 'finding'; data: unknown }
+  | { type: 'complete' };

-  public postMessage(message: unknown) {
+  public postMessage(message: SidebarMessage) {
     if (this._view) {
       this._view.webview.postMessage(message);
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/sidebarProvider.ts` around lines 46 - 50, The
postMessage method currently accepts unknown which reduces type safety; update
the signature of postMessage to accept a discriminated union matching the
extension's message shapes (e.g., an interface/union for { type: 'progress' |
'finding' | 'complete', data?: unknown }) and use that type for the message
parameter, then pass it through to this._view.webview.postMessage(message)
unchanged; modify any callers to conform if necessary and add the new Message
type (or a namespaced alias) near the SidebarProvider class so references like
postMessage and this._view.webview.postMessage are clearly typed.
crates/server/src/main.rs (2)

102-105: Permissive CORS configuration may expose API to unintended origins.

CorsLayer::allow_origin(Any) allows requests from any origin. While acceptable for local development, this should be restricted in production to prevent unauthorized cross-origin access to the scan API.

💡 Suggested improvement for production
+    let allowed_origins = std::env::var("CORS_ORIGINS")
+        .map(|origins| origins.split(',').map(|s| s.trim().to_string()).collect::<Vec<_>>())
+        .ok();
+
     let cors = CorsLayer::new()
-        .allow_origin(Any)
+        .allow_origin(match allowed_origins {
+            Some(origins) => origins.iter().filter_map(|o| o.parse().ok()).collect::<Vec<_>>().into(),
+            None => Any.into(),
+        })
         .allow_methods(Any)
         .allow_headers(Any);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 102 - 105, The CORS setup uses
CorsLayer::new().allow_origin(Any) which is too permissive for production;
update the cors construction (variable cors) to read allowed origins from
configuration or an environment variable and pass a restricted origin list
instead of Any (e.g., build a HeaderValue/Origin list or use Any only in dev),
and apply allow_methods/allow_headers as needed; locate the CorsLayer::new()
call and replace allow_origin(Any) with logic that parses a configured
ALLOWED_ORIGINS and constructs the appropriate allow_origin value for
production.

200-212: CVE enrichment overwrites scanner-generated title unconditionally.

When a CVE match is found, the original finding.title from the scanner is replaced with the CVE database title. If the scanner's title contains more context (e.g., code-specific details), this information is lost.

Consider appending or preserving both:

💡 Optional: Preserve both titles
                     if let Ok(Some(row)) = sqlx::query(
-                        "SELECT title, description FROM vulnerabilities WHERE cve_id = $1",
+                        "SELECT title FROM vulnerabilities WHERE cve_id = $1",
                     )
                     .bind(cve_id)
                     .fetch_optional(&state_task.db)
                     .await
                     {
                         use sqlx::Row;
-                        finding.title = row.get("title");
+                        let cve_title: String = row.get("title");
+                        if !cve_title.is_empty() && cve_title != finding.title {
+                            finding.title = format!("{} ({})", finding.title, cve_title);
+                        }
                     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 200 - 212, The enrichment code
unconditionally replaces the scanner-generated finding.title when a CVE row is
found; change the logic in the CVE enrichment block (the section that checks
finding.cve_id, runs sqlx::query and uses row.get("title")) so it preserves the
original finding.title and merges or appends the DB title instead of overwriting
it (for example, set finding.title to a combination like "{scanner_title} —
{cve_title}" or only append if scanner_title is non-empty); ensure you still
read row.get("title") and only modify finding.title when appropriate, keeping
references to finding, finding.title, cve_id, row.get("title"), and
state_task.db to locate the code.
extensions/vscode/src/extension.ts (2)

95-99: Avoid any cast for ReadableStream reader.

The (body as any).getReader() cast triggers a lint warning and bypasses type checking. Consider using a more specific type assertion:

💡 Proposed fix
     const body = sseResponse.body;
     if (!body) throw new Error('Failed to connect to event stream');

-    const reader = (body as any).getReader();
+    const reader = (body as ReadableStream<Uint8Array>).getReader();
     const decoder = new TextDecoder();

This preserves type safety while acknowledging the web streams API.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` around lines 95 - 99, The code casts
sseResponse.body to any to call getReader(), which silences type checks; instead
assert a concrete Web Streams type and call getReader() with proper typing.
Replace the any cast by treating sseResponse.body as a
ReadableStream<Uint8Array> (or ReadableStream<Uint8Array> | null-guard) before
calling getReader() so the reader variable is typed correctly; update the null
check on body and use that typed stream for (body as
ReadableStream<Uint8Array>).getReader() (or narrow body first) so the Web
Streams API is used with type safety in extension.ts around the reader/decoder
logic.

153-156: Diagnostic range doesn't highlight the vulnerable code.

The range (line, 0, line, 0) creates a zero-width marker that won't visually highlight the problematic code. Consider using line_end from the finding to create a proper range:

💡 Suggested improvement
     const line = Math.max(0, f.line_start - 1);
-    const range = new vscode.Range(line, 0, line, 0); // TODO: improve range mapping
+    const endLine = Math.max(line, f.line_end - 1);
+    const lineText = document.lineAt(line).text;
+    const range = new vscode.Range(line, 0, endLine, lineText.length);

Would you like me to help implement a more accurate range mapping that considers the actual code span?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` around lines 153 - 156, The current range
uses a zero-width marker (const range = new vscode.Range(line, 0, line, 0));
update the mapping to use the finding's actual span: derive startLine =
Math.max(0, f.line_start - 1) and endLine = Math.max(startLine, f.line_end - 1),
and use column offsets from f.col_start/f.col_end (falling back to 0 and
Number.MAX_SAFE_INTEGER or the document line length if absent) to construct new
vscode.Range(startLine, startCol, endLine, endCol) so the diagnostic highlights
the full vulnerable span (replace usage of line, range with this new range
calculation).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/cli/src/main.rs`:
- Around line 185-195: The Show branch (ConfigAction::Show) currently serializes
and prints the full config (variable config) via serde_json::to_string_pretty,
leaking secrets like ai_api_key; before printing, create a redacted copy or
mapping of the config where sensitive fields (e.g., ai_api_key) are replaced
with a masked value such as "<redacted>" or a partial mask, then serialize and
print that redacted structure (reference symbols: ConfigAction::Show, config,
ZenvraConfig::get_path, serde_json::to_string_pretty) so the path still shows
but secret contents are not exposed.

In `@crates/scanner/src/lib.rs`:
- Around line 94-113: The spawned background task calling scan_stream may exit
without sending ScanEvent::Complete or ScanEvent::Error, causing the receiver
loop in scan to silently stop without signaling failure; update the tokio::spawn
block that calls scan_stream(config_clone, tx) so it captures the Result
returned by scan_stream and always sends an explicit final event over the tx
before exiting: if scan_stream returns Err send ScanEvent::Error with the error
(using the same error message type currently used by ScanEvent::Error), and if
it returns Ok ensure ScanEvent::Complete is sent (only if not already sent by
scan_stream), so the receiver in scan (which matches on
ScanEvent::{Finding,Complete,Error}) always receives a terminal event instead of
silently losing the sender.

In `@extensions/vscode/src/sidebarProvider.ts`:
- Around line 197-206: The code in the 'finding' case uses item.innerHTML to
inject message.data.title and message.data.severity into the DOM (creating
sevClass from severity), which allows XSS; instead, build DOM nodes and set
textContent for user-supplied values and only use a safe/sanitized class name
for severity. Replace the innerHTML usage in the 'finding' case by creating span
elements (e.g., severitySpan and titleStrong) and append them to item using
severitySpan.textContent = message.data.severity and titleStrong.textContent =
message.data.title; compute sevClass by sanitizing message.data.severity (e.g.,
strip/replace non-alphanumeric characters or map known severities) before
assigning item.className or severitySpan.className, and then append item to
findingsList.

---

Outside diff comments:
In `@crates/cli/src/main.rs`:
- Around line 46-48: The help text for the CLI option ai_model (the ai_model
field annotated with #[arg(long)]) is inconsistent with the actual Anthropic
default model used elsewhere (the default string set for Anthropic models, e.g.,
"claude-3-5-sonnet-20240620"); update either the ai_model help text to mention
the real default ("claude-3-5-sonnet-20240620") or change the Anthropic default
constant/value to match the help text ("claude-sonnet-4-20250514") so both the
#[arg(long)] ai_model description and the Anthropic default value reference the
same model name. Ensure the same change is applied wherever the Anthropic
default model string is defined/used (the default assignment for Anthropic
model).

---

Nitpick comments:
In `@crates/server/src/main.rs`:
- Around line 102-105: The CORS setup uses CorsLayer::new().allow_origin(Any)
which is too permissive for production; update the cors construction (variable
cors) to read allowed origins from configuration or an environment variable and
pass a restricted origin list instead of Any (e.g., build a HeaderValue/Origin
list or use Any only in dev), and apply allow_methods/allow_headers as needed;
locate the CorsLayer::new() call and replace allow_origin(Any) with logic that
parses a configured ALLOWED_ORIGINS and constructs the appropriate allow_origin
value for production.
- Around line 200-212: The enrichment code unconditionally replaces the
scanner-generated finding.title when a CVE row is found; change the logic in the
CVE enrichment block (the section that checks finding.cve_id, runs sqlx::query
and uses row.get("title")) so it preserves the original finding.title and merges
or appends the DB title instead of overwriting it (for example, set
finding.title to a combination like "{scanner_title} — {cve_title}" or only
append if scanner_title is non-empty); ensure you still read row.get("title")
and only modify finding.title when appropriate, keeping references to finding,
finding.title, cve_id, row.get("title"), and state_task.db to locate the code.

In `@extensions/vscode/src/extension.ts`:
- Around line 95-99: The code casts sseResponse.body to any to call getReader(),
which silences type checks; instead assert a concrete Web Streams type and call
getReader() with proper typing. Replace the any cast by treating
sseResponse.body as a ReadableStream<Uint8Array> (or ReadableStream<Uint8Array>
| null-guard) before calling getReader() so the reader variable is typed
correctly; update the null check on body and use that typed stream for (body as
ReadableStream<Uint8Array>).getReader() (or narrow body first) so the Web
Streams API is used with type safety in extension.ts around the reader/decoder
logic.
- Around line 153-156: The current range uses a zero-width marker (const range =
new vscode.Range(line, 0, line, 0)); update the mapping to use the finding's
actual span: derive startLine = Math.max(0, f.line_start - 1) and endLine =
Math.max(startLine, f.line_end - 1), and use column offsets from
f.col_start/f.col_end (falling back to 0 and Number.MAX_SAFE_INTEGER or the
document line length if absent) to construct new vscode.Range(startLine,
startCol, endLine, endCol) so the diagnostic highlights the full vulnerable span
(replace usage of line, range with this new range calculation).

In `@extensions/vscode/src/sidebarProvider.ts`:
- Around line 46-50: The postMessage method currently accepts unknown which
reduces type safety; update the signature of postMessage to accept a
discriminated union matching the extension's message shapes (e.g., an
interface/union for { type: 'progress' | 'finding' | 'complete', data?: unknown
}) and use that type for the message parameter, then pass it through to
this._view.webview.postMessage(message) unchanged; modify any callers to conform
if necessary and add the new Message type (or a namespaced alias) near the
SidebarProvider class so references like postMessage and
this._view.webview.postMessage are clearly typed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5ae14d28-59df-42a4-8461-c5881ac6406f

📥 Commits

Reviewing files that changed from the base of the PR and between 9f9ce16 and c9ecbb2.

📒 Files selected for processing (16)
  • apps/web/eslint.config.js
  • apps/web/src/routes/history/+page.svelte
  • apps/web/src/routes/scan/+page.svelte
  • apps/web/src/routes/settings/ai/+page.svelte
  • apps/web/src/routes/settings/cve/+page.svelte
  • crates/cli/src/main.rs
  • crates/scanner/src/ai/mod.rs
  • crates/scanner/src/ai/openai.rs
  • crates/scanner/src/engine.rs
  • crates/scanner/src/lib.rs
  • crates/server/src/cve_sync/mod.rs
  • crates/server/src/main.rs
  • extensions/vscode/package.json
  • extensions/vscode/src/extension.ts
  • extensions/vscode/src/sidebarProvider.ts
  • migrations/202604040001_create_vulnerabilities_table.sql
🚧 Files skipped from review as they are similar to previous changes (8)
  • crates/scanner/src/ai/openai.rs
  • crates/scanner/src/ai/mod.rs
  • apps/web/src/routes/settings/ai/+page.svelte
  • migrations/202604040001_create_vulnerabilities_table.sql
  • crates/server/src/cve_sync/mod.rs
  • crates/scanner/src/engine.rs
  • apps/web/src/routes/scan/+page.svelte
  • extensions/vscode/package.json

Comment on lines +185 to +195
        ConfigAction::Show => {
            use colored::Colorize;
            println!("{}", "Zenvra CLI Configuration:".bold());
            println!(
                "   Path: {}",
                ZenvraConfig::get_path().display().to_string().dimmed()
            );
            println!();
            let json = serde_json::to_string_pretty(&config)?;
            println!("{}", json);
        }

⚠️ Potential issue | 🟠 Major

config show leaks API keys in plaintext.

The Show action serializes the entire config including ai_api_key and prints it to the terminal. This exposes secrets in terminal history and logs, similar to the issue previously flagged for the Set action.

🔒 Proposed fix to redact sensitive fields when displaying config
         ConfigAction::Show => {
             use colored::Colorize;
             println!("{}", "Zenvra CLI Configuration:".bold());
             println!(
                 "   Path: {}",
                 ZenvraConfig::get_path().display().to_string().dimmed()
             );
             println!();
-            let json = serde_json::to_string_pretty(&config)?;
-            println!("{}", json);
+            // Create a redacted copy for display
+            let display_config = serde_json::json!({
+                "ai_provider": config.ai_provider,
+                "ai_api_key": config.ai_api_key.as_ref().map(|_| "********"),
+                "ai_model": config.ai_model,
+                "ai_endpoint": config.ai_endpoint,
+            });
+            let json = serde_json::to_string_pretty(&display_config)?;
+            println!("{}", json);
         }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        ConfigAction::Show => {
            use colored::Colorize;
            println!("{}", "Zenvra CLI Configuration:".bold());
            println!(
                "   Path: {}",
                ZenvraConfig::get_path().display().to_string().dimmed()
            );
            println!();
            let json = serde_json::to_string_pretty(&config)?;
            println!("{}", json);
        }
        ConfigAction::Show => {
            use colored::Colorize;
            println!("{}", "Zenvra CLI Configuration:".bold());
            println!(
                "   Path: {}",
                ZenvraConfig::get_path().display().to_string().dimmed()
            );
            println!();
            // Create a redacted copy for display
            let display_config = serde_json::json!({
                "ai_provider": config.ai_provider,
                "ai_api_key": config.ai_api_key.as_ref().map(|_| "********"),
                "ai_model": config.ai_model,
                "ai_endpoint": config.ai_endpoint,
            });
            let json = serde_json::to_string_pretty(&display_config)?;
            println!("{}", json);
        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/cli/src/main.rs` around lines 185 - 195, The Show branch
(ConfigAction::Show) currently serializes and prints the full config (variable
config) via serde_json::to_string_pretty, leaking secrets like ai_api_key;
before printing, create a redacted copy or mapping of the config where sensitive
fields (e.g., ai_api_key) are replaced with a masked value such as "<redacted>"
or a partial mask, then serialize and print that redacted structure (reference
symbols: ConfigAction::Show, config, ZenvraConfig::get_path,
serde_json::to_string_pretty) so the path still shows but secret contents are
not exposed.

Comment on lines +94 to 113
pub async fn scan(config: &ScanConfig) -> anyhow::Result<Vec<Finding>> {
    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();
    let config_clone = config.clone();

    // Run scan in background and collect findings
    tokio::spawn(async move {
        let _ = scan_stream(config_clone, tx).await;
    });

    let mut findings = Vec::new();
    while let Some(event) = rx.recv().await {
        match event {
            ScanEvent::Finding(f) => findings.push(*f),
            ScanEvent::Complete => break,
            ScanEvent::Error(e) => return Err(anyhow::anyhow!(e)),
            _ => {}
        }
    }

    Ok(findings)

⚠️ Potential issue | 🟡 Minor

Potential hang if scan_stream fails before sending Complete or Error.

If scan_stream returns early due to an error before sending ScanEvent::Complete or ScanEvent::Error, the spawned task drops the sender tx. The while let Some(event) = rx.recv().await loop will then return None and exit cleanly, but only because the channel closes. However, if an error occurs after sending some findings but before Complete, the caller won't know the scan terminated abnormally.

Consider ensuring that scan errors are always propagated via the channel:

💡 Suggested improvement
     tokio::spawn(async move {
-        let _ = scan_stream(config_clone, tx).await;
+        if let Err(e) = scan_stream(config_clone, tx.clone()).await {
+            let _ = tx.send(ScanEvent::Error(e.to_string()));
+        }
     });

Note: scan_stream already sends ScanEvent::Error on engine::run_stream failure (line 50), but errors from AI provider enrichment (lines 59-72) don't send an error event if they cause early termination.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/lib.rs` around lines 94 - 113, The spawned background task
calling scan_stream may exit without sending ScanEvent::Complete or
ScanEvent::Error, causing the receiver loop in scan to silently stop without
signaling failure; update the tokio::spawn block that calls
scan_stream(config_clone, tx) so it captures the Result returned by scan_stream
and always sends an explicit final event over the tx before exiting: if
scan_stream returns Err send ScanEvent::Error with the error (using the same
error message type currently used by ScanEvent::Error), and if it returns Ok
ensure ScanEvent::Complete is sent (only if not already sent by scan_stream), so
the receiver in scan (which matches on ScanEvent::{Finding,Complete,Error})
always receives a terminal event instead of silently losing the sender.

Comment on lines +197 to +206
case 'finding':
    const item = document.createElement('div');
    item.className = 'finding-item';
    const sevClass = 'severity-' + message.data.severity.toLowerCase();
    item.innerHTML = \`
        <span class="finding-severity \${sevClass}">\${message.data.severity}</span>
        <strong>\${message.data.title}</strong>
    \`;
    findingsList.appendChild(item);
    break;

⚠️ Potential issue | 🟡 Minor

Potential XSS via innerHTML with finding data.

Both message.data.title and message.data.severity are inserted into the DOM via innerHTML without escaping. If a finding title contains HTML or script content, it could execute in the webview context.

🔒 Proposed fix using textContent
                             case 'finding':
-                                const item = document.createElement('div');
-                                item.className = 'finding-item';
-                                const sevClass = 'severity-' + message.data.severity.toLowerCase();
-                                item.innerHTML = \`
-                                    <span class="finding-severity \${sevClass}">\${message.data.severity}</span>
-                                    <strong>\${message.data.title}</strong>
-                                \`;
+                                const item = document.createElement('div');
+                                item.className = 'finding-item';
+                                const sevClass = 'severity-' + message.data.severity.toLowerCase();
+                                const sevSpan = document.createElement('span');
+                                sevSpan.className = 'finding-severity ' + sevClass;
+                                sevSpan.textContent = message.data.severity;
+                                const titleEl = document.createElement('strong');
+                                titleEl.textContent = message.data.title;
+                                item.appendChild(sevSpan);
+                                item.appendChild(titleEl);
                                 findingsList.appendChild(item);
                                 break;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/sidebarProvider.ts` around lines 197 - 206, The code in
the 'finding' case uses item.innerHTML to inject message.data.title and
message.data.severity into the DOM (creating sevClass from severity), which
allows XSS; instead, build DOM nodes and set textContent for user-supplied
values and only use a safe/sanitized class name for severity. Replace the
innerHTML usage in the 'finding' case by creating span elements (e.g.,
severitySpan and titleStrong) and append them to item using
severitySpan.textContent = message.data.severity and titleStrong.textContent =
message.data.title; compute sevClass by sanitizing message.data.severity (e.g.,
strip/replace non-alphanumeric characters or map known severities) before
assigning item.className or severitySpan.className, and then append item to
findingsList.
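Using textContent sidesteps escaping entirely, which is the right fix here. Where interpolation into an HTML string genuinely cannot be avoided (e.g. server-rendered reports), the fallback is explicit escaping of user-supplied values. A minimal dependency-free sketch of that principle in Rust — an illustration, not code from this PR:

```rust
// Escape the five characters that are significant in HTML text and
// attribute contexts. A sketch; real code might use a vetted crate.
fn escape_html(s: &str) -> String {
    s.chars()
        .map(|c| match c {
            '&' => "&amp;".to_string(),
            '<' => "&lt;".to_string(),
            '>' => "&gt;".to_string(),
            '"' => "&quot;".to_string(),
            '\'' => "&#39;".to_string(),
            c => c.to_string(),
        })
        .collect()
}

fn main() {
    // A hostile finding title is neutralized into inert text.
    let title = "<img src=x onerror=alert(1)>";
    assert_eq!(escape_html(title), "&lt;img src=x onerror=alert(1)&gt;");
    println!("{}", escape_html(title));
}
```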

@qodo-code-review

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Security audit

Failed stage: Rust audit [❌]

Failed test name: ""

Failure summary:

The action failed during the cargo audit step:

- cargo audit --json reported 1 security vulnerability: advisory RUSTSEC-2023-0071 affecting crate rsa version 0.9.10 (no patched version available), which caused the job to be marked as failed (##[error]Critical vulnerabilities were found, marking check as failed).
- Additionally, the workflow attempted to publish the audit results/check, but this failed with HttpError: Resource not accessible by integration, because the Action is running from a forked repository and does not have permission to write checks/statuses (##[warning]It seems that this Action is executed from the forked repository.).

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

129:  Downloaded atomic-waker v1.1.2
130:  Downloaded auditable-extract v0.3.5
131:  Downloaded clap v4.6.0
132:  Downloaded abscissa_derive v0.9.0
133:  Downloaded auditable-serde v0.9.0
134:  Downloaded heck v0.5.0
135:  Downloaded gix-validate v0.11.0
136:  Downloaded quote v1.0.45
137:  Downloaded matchers v0.2.0
138:  Downloaded compression-core v0.4.31
139:  Downloaded hash32 v0.3.1
140:  Downloaded num-conv v0.2.1
141:  Downloaded rand_chacha v0.9.0
142:  Downloaded sharded-slab v0.1.7
143:  Downloaded heapless v0.8.0
144:  Downloaded thiserror v2.0.18
145:  Downloaded tinystr v0.8.3
...

241:  Downloaded rustc-hash v2.1.2
242:  Downloaded regex v1.12.3
243:  Downloaded rayon-core v1.13.0
244:  Downloaded rayon v1.11.0
245:  Downloaded iri-string v0.7.12
246:  Downloaded icu_properties v2.2.0
247:  Downloaded hyper v1.9.0
248:  Downloaded gix-transport v0.53.0
249:  Downloaded gix v0.78.0
250:  Downloaded gimli v0.32.3
251:  Downloaded zerovec-derive v0.11.3
252:  Downloaded zeroize v1.8.2
253:  Downloaded writeable v0.6.3
254:  Downloaded untrusted v0.9.0
255:  Downloaded toml_datetime v0.7.5+spec-1.1.0
256:  Downloaded thiserror-impl v2.0.18
257:  Downloaded smol_str v0.3.6
...

333:  Downloaded mio v1.2.0
334:  Downloaded mime v0.3.17
335:  Downloaded hashbrown v0.17.0
336:  Downloaded gix-traverse v0.52.0
337:  Downloaded gix-packetline v0.21.2
338:  Downloaded gix-date v0.13.0
339:  Downloaded gix-bitmap v0.2.16
340:  Downloaded fs_extra v1.3.0
341:  Downloaded foldhash v0.1.5
342:  Downloaded crossbeam-utils v0.8.21
343:  Downloaded anstyle-parse v1.0.0
344:  Downloaded gix-hashtable v0.12.0
345:  Downloaded aws-lc-rs v1.16.2
346:  Downloaded adler2 v2.0.1
347:  Downloaded static_assertions v1.1.0
348:  Downloaded gix-error v0.0.0
349:  Downloaded gix-command v0.7.1
...

392:  Downloaded gix-revwalk v0.26.0
393:  Downloaded gix-filter v0.25.0
394:  Downloaded faster-hex v0.10.0
395:  Downloaded crossbeam-channel v0.5.15
396:  Downloaded cfg_aliases v0.2.1
397:  Downloaded miniz_oxide v0.8.9
398:  Downloaded indexmap v2.14.0
399:  Downloaded gix-worktree-state v0.25.0
400:  Downloaded gix-worktree v0.47.0
401:  Downloaded gix-submodule v0.25.0
402:  Downloaded gix-pathspec v0.15.1
403:  Downloaded gix-object v0.55.0
404:  Downloaded fnv v1.0.7
405:  Downloaded flate2 v1.1.9
406:  Downloaded dunce v1.0.5
407:  Downloaded display-error-chain v0.2.2
408:  Downloaded cmake v0.1.58
...

442:  Downloaded aws-lc-sys v0.39.1
443:  Compiling proc-macro2 v1.0.106
444:  Compiling quote v1.0.45
445:  Compiling unicode-ident v1.0.24
446:  Compiling libc v0.2.184
447:  Compiling memchr v2.8.0
448:  Compiling cfg-if v1.0.4
449:  Compiling aho-corasick v1.1.4
450:  Compiling regex-syntax v0.8.10
451:  Compiling syn v2.0.117
452:  Compiling smallvec v1.15.1
453:  Compiling once_cell v1.21.4
454:  Compiling bytes v1.11.1
455:  Compiling regex-automata v0.4.14
456:  Compiling bstr v1.12.1
457:  Compiling thiserror v2.0.18
458:  Compiling thiserror-impl v2.0.18
459:  Compiling stable_deref_trait v1.2.1
...

476:  Compiling crossbeam-channel v0.5.15
477:  Compiling same-file v1.0.6
478:  Compiling walkdir v2.5.0
479:  Compiling prodash v31.0.0
480:  Compiling zlib-rs v0.6.3
481:  Compiling equivalent v1.0.2
482:  Compiling typenum v1.19.0
483:  Compiling version_check v0.9.5
484:  Compiling generic-array v0.14.7
485:  Compiling heapless v0.8.0
486:  Compiling byteorder v1.5.0
487:  Compiling hash32 v0.3.1
488:  Compiling crypto-common v0.1.7
489:  Compiling block-buffer v0.10.4
490:  Compiling digest v0.10.7
491:  Compiling gix-error v0.0.0
492:  Compiling faster-hex v0.10.0
...

712:  Compiling num-conv v0.2.1
713:  Compiling time v0.3.47
714:  Compiling abscissa_derive v0.9.0
715:  Compiling tracing-subscriber v0.3.23
716:  Compiling color-eyre v0.6.5
717:  Compiling cargo-lock v11.0.1
718:  Compiling auditable-info v0.10.0
719:  Compiling clap v4.6.0
720:  Compiling quitters v0.1.0
721:  Compiling platforms v3.10.0
722:  Compiling cvss v2.2.0
723:  Compiling secrecy v0.10.3
724:  Compiling wait-timeout v0.2.1
725:  Compiling canonical-path v2.0.2
726:  Compiling termcolor v1.4.1
727:  Compiling display-error-chain v0.2.2
728:  Compiling abscissa_core v0.9.0
...

737:  Compiling gix v0.78.0
738:  Compiling rustsec v0.32.0
739:  Compiling cargo-audit v0.22.1
740:  Finished `release` profile [optimized] target(s) in 2m 51s
741:  Installing /home/runner/.cargo/bin/cargo-audit
742:  Installed package `cargo-audit v0.22.1` (executable `cargo-audit`)
743:  ##[endgroup]
744:  [command]/home/runner/.cargo/bin/cargo generate-lockfile
745:  Updating crates.io index
746:  Locking 320 packages to latest compatible versions
747:  Adding axum v0.7.9 (available: v0.8.8)
748:  Adding colored v2.2.0 (available: v3.1.1)
749:  Adding generic-array v0.14.7 (available: v0.14.9)
750:  Adding indicatif v0.17.11 (available: v0.18.4)
751:  Adding reqwest v0.12.28 (available: v0.13.2)
752:  Adding thiserror v1.0.69 (available: v2.0.18)
753:  Adding tower-http v0.5.2 (available: v0.6.8)
754:  ##[group]Calling cargo-audit (JSON output)
755:  [command]/home/runner/.cargo/bin/cargo audit --json
756:  {"database":{"advisory-count":1043,"last-commit":"d99f7b9eb81731bddebf80a355f8be7b2f8b1b28","last-updated":"2026-04-11T13:38:57+02:00"},"lockfile":{"dependency-count":323},"settings":{"target_arch":[],"target_os":[],"severity":null,"ignore":[],"informational_warnings":["unmaintained","unsound","notice"]},"vulnerabilities":{"found":true,"count":1,"list":[{"advisory":{"id":"RUSTSEC-2023-0071","package":"rsa","title":"Marvin Attack: potential key recovery through timing sidechannels","description":"### Impact\nDue to a non-constant-time implementation, information about the private key is leaked through timing information which is observable over the network. An attacker may be able to use that information to recover the key.\n\n### Patches\nNo patch is yet available, however work is underway to migrate to a fully constant-time implementation.\n\n### Workarounds\nThe only currently available workaround is to avoid using the `rsa` crate in settings where attackers are able to observe timing information, e.g. 
local use on a non-compromised computer is fine.\n\n### References\nThis vulnerability was discovered as part of the \"[Marvin Attack]\", which revealed several implementations of RSA including OpenSSL had not properly mitigated timing sidechannel attacks.\n\n[Marvin Attack]: https://people.redhat.com/~hkario/marvin/","date":"2023-11-22","aliases":["CVE-2023-49092","GHSA-c38w-74pg-36hr","GHSA-4grx-2x9w-596c"],"related":[],"collection":"crates","categories":["crypto-failure"],"keywords":["cryptography"],"cvss":"CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N","informational":null,"references":["https://people.redhat.com/~hkario/marvin/"],"source":null,"url":"https://github.com/RustCrypto/RSA/issues/19#issuecomment-1822995643","withdrawn":null,"license":"CC0-1.0","expect-deleted":false},"versions":{"patched":[],"unaffected":[]},"affected":null,"package":{"name":"rsa","version":"0.9.10","source":"registry+https://github.com/rust-lang/crates.io-index","checksum":"b8573f03f5883dcaebdfcf4725caa1ecb9c15b2ef50c43a07b816e06799bb12d","dependencies":[{"name":"const-oid","version":"0.9.6","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"digest","version":"0.10.7","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"num-bigint-dig","version":"0.8.6","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"num-integer","version":"0.1.46","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"num-traits","version":"0.2.19","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"pkcs1","version":"0.7.5","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"pkcs8","version":"0.10.2","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"rand_core","version":"0.6.4","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"signature","version":"2.2.0","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"s
pki","version":"0.7.3","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"subtle","version":"2.6.1","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"zeroize","version":"1.8.2","source":"registry+https://github.com/rust-lang/crates.io-index"}],"replace":null}}]},"warnings":{"unmaintained":[{"kind":"unmaintained","package":{"name":"number_prefix","version":"0.4.0","source":"registry+https://github.com/rust-lang/crates.io-index","checksum":"830b246a0e5f20af87141b25c173cd1b609bd7779a4617d6ec582abaf90870f3","replace":null},"advisory":{"id":"RUSTSEC-2025-0119","package":"number_prefix","title":"number_prefix crate is unmaintained","description":"The `number_prefix` crate is no longer actively maintained.  If you rely on this crate, consider switching to a recommended alternative.\n\n## Recommended alternatives\n\n- [`unit-prefix`](https://crates.io/crates/unit-prefix)","date":"2025-11-17","aliases":[],"related":[],"collection":"crates","categories":[],"keywords":[],"cvss":null,"informational":"unmaintained","references":[],"source":null,"url":"https://github.com/ogham/rust-number-prefix/pull/8","withdrawn":null,"license":"CC0-1.0","expect-deleted":false},"affected":null,"versions":{"patched":[],"unaffected":[]}}],"unsound":[{"kind":"unsound","package":{"name":"rand","version":"0.8.5","source":"registry+https://github.com/rust-lang/crates.io-index","checksum":"34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404","dependencies":[{"name":"libc","version":"0.2.184","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"rand_chacha","version":"0.3.1","source":"registry+https://github.com/rust-lang/crates.io-index"},{"name":"rand_core","version":"0.6.4","source":"registry+https://github.com/rust-lang/crates.io-index"}],"replace":null},"advisory":{"id":"RUSTSEC-2026-0097","package":"rand","title":"Rand is unsound with a custom logger using `rand::rng()`","description":"It has been reported (by 
@lopopolo) that the `rand` library is [unsound](https://rust-lang.github.io/unsafe-code-guidelines/glossary.html#soundness-of-code--of-a-library) (i.e. that safe code using the public API can cause Undefined Behaviour) when all the following conditions are met:\n\n- The `log` and `thread_rng` features are enabled\n- A [custom logger](https://docs.rs/log/latest/log/#implementing-a-logger) is defined\n- The custom logger accesses `rand::rng()` (previously `rand::thread_rng()`) and calls any `TryRng` (previously `RngCore`) methods on `ThreadRng`\n- The `ThreadRng` (attempts to) reseed while called from the custom logger (this happens every 64 kB of generated data)\n- Trace-level logging is enabled or warn-level logging is enabled and the random source (the `getrandom` crate) is unable to provide a new seed\n\n`TryRng` (previously `RngCore`) methods for `ThreadRng` use `unsafe` code to cast `*mut BlockRng<ReseedingCore>` to `&mut BlockRng<ReseedingCore>`. When all the above conditions are met this results in an aliased mutable reference, violating the Stacked Borrows rules. Miri is able to detect this violation in sample code. Since construction of [aliased mutable references is Undefined Behaviour](https://doc.rust-lang.org/stable/nomicon/references.html), the behaviour of optimized builds is hard to predict.\n\nAffected versions of `rand` are `>= 0.7, < 0.9.3` and `0.10.0`.","date":"2026-04-09","aliases":[],"related":[],"collection":"crates","categories":[],"keywords":[],"cvss":null,"informational":"unsound","references":[],"source":null,"url":"https://github.com/rust-random/rand/pull/1763","withdrawn":null,"license":"CC0-1.0","expect-deleted":false},"affected":{"arch":[],"os":[],"functions":{"rand::rng":[">=0.9.0"],"rand::thread_rng":["<0.10.0, >=0.7.0"]}},"versions":{"patched":[">=0.10.1","<0.10.0, >=0.9.3"],"unaffected":["<0.7.0"]}}]}}
757:  ##[endgroup]
758:  ##[warning]1 vulnerabilities found!
759:  ##[warning]2 warnings found!
760:  Found 1 advisory(ies), 1 unmaintained, 1 other
761:  ##[error]Unable to publish audit check! Reason: HttpError: Resource not accessible by integration
762:  ##[warning]It seems that this Action is executed from the forked repository.
...

782:  This vulnerability was discovered as part of the &quot;[Marvin Attack]&quot;, which revealed several implementations of RSA including OpenSSL had not properly mitigated timing sidechannel attacks.
783:  [Marvin Attack]: https://people.redhat.com/~hkario/marvin/
784:  ## Warnings
785:  ### [RUSTSEC-2025-0119](https://rustsec.org/advisories/RUSTSEC-2025-0119.html)
786:  > number_prefix crate is unmaintained
787:  | Details             |                                                |
788:  | ------------------- | ---------------------------------------------- |
789:  | Status              | unmaintained                |
790:  | Package             | `number_prefix`                      |
791:  | Version             | `0.4.0`                   |
792:  | URL                 | [https://github.com/ogham/rust-number-prefix/pull/8](https://github.com/ogham/rust-number-prefix/pull/8) |
793:  | Date                | 2025-11-17                         |
794:  The `number_prefix` crate is no longer actively maintained.  If you rely on this crate, consider switching to a recommended alternative.
795:  ## Recommended alternatives
796:  - [`unit-prefix`](https://crates.io/crates/unit-prefix)
797:  ##[error]Critical vulnerabilities were found, marking check as failed
798:  Post job cleanup.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

♻️ Duplicate comments (2)
crates/scanner/src/lib.rs (1)

183-185: ⚠️ Potential issue | 🟠 Major

scan() can still report success after an internal scan failure.

This spawned task drops the scan_stream result, so failures before a terminal ScanEvent is sent — for example ai::create_provider(...) returning Err — just close the channel and let scan() fall through to Ok(findings). Please forward the error onto tx before the task exits.

Suggested fix
     tokio::spawn(async move {
-        let _ = scan_stream(config_clone, tx).await;
+        if let Err(e) = scan_stream(config_clone, tx.clone()).await {
+            let _ = tx.send(ScanEvent::Error(e.to_string()));
+        }
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/lib.rs` around lines 183 - 185, The spawned task currently
ignores the Result from scan_stream so scan() can return Ok(...) even if
scan_stream failed; change the task body in the tokio::spawn block that calls
scan_stream(config_clone, tx) so that it matches on the Result and sends an
error-bearing ScanEvent (or otherwise forwards the error) onto tx before exiting
(e.g., convert Err from ai::create_provider(...) or other internal errors into a
terminal ScanEvent and tx.send it), ensuring scan_stream and scan() observe
failures rather than silently closing the channel.
crates/cli/src/main.rs (1)

186-195: ⚠️ Potential issue | 🟠 Major

config show still prints ai_api_key in plaintext.

This branch serializes the full config, so the saved API key ends up in terminal history, logs, and screenshots. The previous review already flagged this and the current code still exposes it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/cli/src/main.rs` around lines 186 - 195, The ConfigAction::Show branch
currently serializes and prints the full `config` (via
serde_json::to_string_pretty), exposing `ai_api_key`; instead, create a redacted
copy or transform of the config before serializing (e.g., clone `config`, set
`ai_api_key` to a masked value like "****" or an empty string) and then call
serde_json::to_string_pretty on that redacted object; update the println block
that shows the path (ZenvraConfig::get_path) and the JSON print to use the
redacted config to ensure secrets are never printed to stdout or saved in shell
history.
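A minimal sketch of that redaction step, with a simplified struct standing in for the real ZenvraConfig (field names are assumptions, and Debug printing stands in for serde_json::to_string_pretty to keep the example dependency-free):

```rust
// Simplified stand-in for the CLI's config type.
#[derive(Clone, Debug)]
struct ZenvraConfig {
    ai_provider: String,
    ai_api_key: Option<String>,
}

impl ZenvraConfig {
    // Return a copy safe to print: mask the key, keeping a short
    // prefix so users can still tell which key is configured.
    fn redacted(&self) -> ZenvraConfig {
        let mut c = self.clone();
        c.ai_api_key = c.ai_api_key.map(|k| {
            let prefix: String = k.chars().take(4).collect();
            format!("{prefix}****")
        });
        c
    }
}

fn main() {
    let cfg = ZenvraConfig {
        ai_provider: "openai".into(),
        ai_api_key: Some("sk-test-1234567890".into()),
    };
    let shown = cfg.redacted();
    assert_eq!(shown.ai_api_key.as_deref(), Some("sk-t****"));
    // Only the redacted copy ever reaches stdout.
    println!("{shown:?}");
}
```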
🧹 Nitpick comments (1)
extensions/vscode/build-vsix.cjs (1)

18-18: Avoid hardcoding the VSIX version in the output filename.

Line 18 duplicates version data already in extensions/vscode/package.json; this will drift on the next version bump.

Proposed fix
+const pkg = require('./package.json');
@@
-    vsce.main(['package', '--no-git-check', '-o', 'zenvra-0.1.1-rc.2.vsix']).then(() => {
+    vsce.main(['package', '--no-git-check', '-o', `zenvra-${pkg.version}.vsix`]).then(() => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/build-vsix.cjs` at line 18, The build script hardcodes the
VSIX filename in the vsce.main(['package', ...]) call causing version drift;
update the script to read the extension version from
extensions/vscode/package.json (parse JSON and extract the "version" field) and
construct the output name dynamically (e.g. `zenvra-${version}.vsix`) before
calling vsce.main (the existing vsce.main invocation should use that computed
filename), using path/fs to resolve the package.json location and failing loudly
if version is missing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/cli/src/main.rs`:
- Around line 137-145: ZenvraConfig::load currently swallows malformed JSON by
using unwrap_or_default; change load to return a Result<Self, Error> (or
Result<Self, anyhow::Error>) and propagate errors from std::fs::read_to_string
and serde_json::from_str instead of falling back to Self::default(); update
callers to handle the Result. Specifically, modify the ZenvraConfig::load
signature to return Result<Self, _>, use Self::get_path() then map
read_to_string errors and pass the file content into serde_json::from_str
returning its Err on parse failure (only use Self::default() explicitly where a
true missing-file case is desired and handled), and update code that calls
ZenvraConfig::load to handle the error (e.g., surface to the user).

In `@crates/scanner/src/ai/mod.rs`:
- Around line 42-49: Incoming JSON uses "provider": "openai" but ProviderKind's
current serde rename expects "open_ai", causing deserialization failures when
ScanRequest contains ai_config: Option<AiConfig>. Fix by updating the
ProviderKind enum to accept the extension's token (e.g., add a serde
rename/alias for the OpenAi variant such as #[serde(rename = "openai")] or
#[serde(alias = "openai")]) so deserialization of AiConfig::provider succeeds;
keep references to AiConfig and ScanRequest intact.

In `@crates/scanner/src/lib.rs`:
- Around line 119-125: The error logging in the engine::run_stream match arm
uses file_config.file_path.as_ref().unwrap(), which can panic; change the log to
avoid unwrap by using file_config.file_path.as_deref().unwrap_or("<unknown>")
(or capture the original file path earlier into a local Option/str and use
unwrap_or("<unknown>")) so tracing::error prints a safe path string without
risking a panic in the error path of run_stream.

In `@crates/server/src/main.rs`:
- Around line 85-89: The connection error message currently includes the full
DATABASE_URL (db_url) which may leak credentials; update the error handling
around PgPoolOptions::new().connect(&db_url).await (the pool creation) to avoid
logging db_url directly — either log a generic message like "Failed to connect
to PostgreSQL" or extract and log only a redacted host/database identifier (no
user/password) before creating the anyhow::anyhow! error; ensure the change
touches the closure passed to .map_err so the secret never appears in the error
string.
- Around line 337-397: The workspace scan loop currently never moves the
completed scan into in-memory state or removes its live sender; replicate the
single-file completion behavior by, when handling ScanEvent::Complete in the
spawned task, inserting the collected findings (or a ScanResult-like value) into
state_task.results under scan_id and removing the live sender from
state_task.scans (and drop/close the sender if required) before breaking; locate
this logic around the ScanEvent::Complete match branch in the tokio::spawn block
and perform the state_task.results.insert(scan_id, findings) and
state_task.scans.remove(&scan_id) operations (using the same types/structures as
single-file scan finalization).
- Around line 229-246: The current SQL insert in sqlx::query persists full
source for each finding (columns vulnerable_code and fixed_code bound from
finding.vulnerable_code and finding.fixed_code); change it to avoid storing raw
code by removing those columns and .bind calls, instead persist only metadata
and a short snippet or hash (e.g., compute a sha256 or store a truncated
snippet) derived from finding.vulnerable_code/fixed_code and bind that value;
update the INSERT column list and corresponding .bind(...) calls in the block
that executes against state_task.db (and the scan_results schema if needed) so
only scan_id, engine, cve_id, cwe_id, severity, title, description, file_path,
line_start/line_end and the new snippet_hash/snippet column are stored.

In `@extensions/vscode/build-vsix.cjs`:
- Line 15: The code imports the internal build output path via "const vsce =
require('@vscode/vsce/out/main')" which is unsupported; replace that require
with the public entry: use "const vsce = require('@vscode/vsce')" and update
downstream calls to use the public API (e.g. call vsce.createVSIX(...) for
packaging or vsce.publish/listFiles as appropriate), ensuring any options/return
handling matches the public function signatures used by createVSIX/publish.

In `@extensions/vscode/package-vsix.cjs`:
- Line 21: The current env assignment ({ ...process.env, NODE_OPTIONS:
'--no-warnings' }) overwrites any existing NODE_OPTIONS; change it to merge the
existing process.env.NODE_OPTIONS with the '--no-warnings' flag instead of
replacing it: read process.env.NODE_OPTIONS, append '--no-warnings' when it's
not already present, and set that combined string as NODE_OPTIONS in the env
spread so existing CI/runtime flags are preserved; update the env construction
that sets NODE_OPTIONS accordingly.
- Around line 3-11: The polyfill for global.File only affects the parent process
but vsce is invoked via child processes using execSync (symbol: execSync) so
undici in Node 18+ fails; replace the execSync-based invocations of "vsce" with
an in-process invocation using the programmatic API (as implemented in
build-vsix.cjs) so the polyfilled global.File is present; locate the execSync
calls in package-vsix.cjs and refactor them to require/import and call the same
programmatic functions used by build-vsix.cjs (or directly use the vsce package
API) rather than spawning a subprocess.

In `@extensions/vscode/src/extension.ts`:
- Around line 77-104: The workspace discovery misses some scanner-supported
types, doesn't match Dockerfile, and incorrectly defaults extensionless files to
'js'; update the supportedExtensions array to include all CLI-scanned extensions
(add missing ones like 'dockerfile' as both lowercase and an explicit filename
match if needed), change globPattern to also account for filename-only matches
(e.g., include '**/Dockerfile' or adjust pattern to support names without
extensions), and in the files -> workspaceFiles mapping replace the naive
fallback ext = relativePath.split('.').pop() || 'js' with logic that detects
extensionless filenames (like 'Dockerfile') or returns an explicit language
mapping (use the file basename check and a lookup map) so extensionless files
are classified correctly (refer to supportedExtensions, globPattern,
excludePattern, files, and workspaceFiles).
- Around line 156-193: The inner try/catch is swallowing server 'error' events
because throw new Error(event.data) is caught immediately; change the flow so
'error' events are propagated to the outer handler instead of being thrown
inside the parse block — e.g., in the switch for event.type === 'error' set a
local variable (e.g., serverError = event.data), post a message to
sidebarProvider ({ type: 'error', data: event.data }) and after the try block,
if serverError is set call the outer error handling path (throw new
Error(serverError) or invoke the same handler used elsewhere) so
updateDiagnosticsForUri, status messages and the outer error handler see the
failure; avoid removing JSON.parse error handling — only move the
throw/propagation outside the catch scope.
- Around line 147-156: The SSE parsing in scanWorkspace and scanDocument
discards fragments when a chunk splits an event; fix both by maintaining a
persistent string buffer outside the reader loop, append decoder.decode(value,
{stream: true}) to that buffer each iteration, then split buffer on '\n' and
process every complete line except the final partial fragment (keep that
trailing piece as the new buffer for the next loop); after reader.done is true,
process any remaining complete lines and handle a final partial line if it forms
a full SSE event. Update the parsing blocks in scanWorkspace and scanDocument to
use this buffered-accumulation approach so "data: " JSON is never lost across
chunk boundaries.
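The buffered-accumulation approach described above for extension.ts is language-neutral; the sketch below shows it in Rust for a runnable illustration (the extension itself is TypeScript), with process_line as a hypothetical event sink:

```rust
// Hypothetical event sink; the real extension posts parsed events
// to its webview instead of collecting them.
fn process_line(line: &str, events: &mut Vec<String>) {
    if let Some(json) = line.strip_prefix("data: ") {
        events.push(json.to_string());
    }
}

// Accumulate chunks in a persistent buffer; only complete lines are
// processed, and the trailing fragment carries over to the next chunk.
fn parse_chunks(chunks: &[&str]) -> Vec<String> {
    let mut buffer = String::new();
    let mut events = Vec::new();
    for chunk in chunks {
        buffer.push_str(chunk);
        while let Some(pos) = buffer.find('\n') {
            let line: String = buffer.drain(..=pos).collect();
            process_line(line.trim_end(), &mut events);
        }
    }
    // Stream ended: flush a final line that had no trailing newline.
    if !buffer.is_empty() {
        process_line(buffer.trim_end(), &mut events);
    }
    events
}

fn main() {
    // A `data: {...}` event split mid-JSON across two network chunks.
    let chunks = ["data: {\"type\":\"find", "ing\"}\ndata: {\"type\":\"complete\"}\n"];
    let events = parse_chunks(&chunks);
    assert_eq!(events, ["{\"type\":\"finding\"}", "{\"type\":\"complete\"}"]);
    println!("{events:?}");
}
```

Splitting each chunk on '\n' in isolation, as the current code does, would hand `ing"}` from the example above to JSON.parse and lose the finding.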

---

Duplicate comments:
In `@crates/cli/src/main.rs`:
- Around line 186-195: The ConfigAction::Show branch currently serializes and
prints the full `config` (via serde_json::to_string_pretty), exposing
`ai_api_key`; instead, create a redacted copy or transform of the config before
serializing (e.g., clone `config`, set `ai_api_key` to a masked value like
"****" or an empty string) and then call serde_json::to_string_pretty on that
redacted object; update the println block that shows the path
(ZenvraConfig::get_path) and the JSON print to use the redacted config to ensure
secrets are never printed to stdout or saved in shell history.

In `@crates/scanner/src/lib.rs`:
- Around line 183-185: The spawned task currently ignores the Result from
scan_stream so scan() can return Ok(...) even if scan_stream failed; change the
task body in the tokio::spawn block that calls scan_stream(config_clone, tx) so
that it matches on the Result and sends an error-bearing ScanEvent (or otherwise
forwards the error) onto tx before exiting (e.g., convert Err from
ai::create_provider(...) or other internal errors into a terminal ScanEvent and
tx.send it), ensuring scan_stream and scan() observe failures rather than
silently closing the channel.

---

Nitpick comments:
In `@extensions/vscode/build-vsix.cjs`:
- Line 18: The build script hardcodes the VSIX filename in the
vsce.main(['package', ...]) call causing version drift; update the script to
read the extension version from extensions/vscode/package.json (parse JSON and
extract the "version" field) and construct the output name dynamically (e.g.
`zenvra-${version}.vsix`) before calling vsce.main (the existing vsce.main
invocation should use that computed filename), using path/fs to resolve the
package.json location and failing loudly if version is missing.
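For the DATABASE_URL logging issue flagged above in crates/server/src/main.rs, the redaction can be as simple as stripping the userinfo portion before the URL reaches any error string. A dependency-free sketch (a production version might parse with the url crate instead):

```rust
// Strip the userinfo (user:password) from a connection URL before
// logging. Assumes the first '@' after "://" delimits the credentials,
// which holds for typical postgres:// URLs.
fn redact_db_url(url: &str) -> String {
    match (url.find("://"), url.find('@')) {
        (Some(s), Some(a)) if a > s => {
            format!("{}://***@{}", &url[..s], &url[a + 1..])
        }
        // No credentials present: safe to return as-is.
        _ => url.to_string(),
    }
}

fn main() {
    let url = "postgres://zenvra:s3cret@localhost:5432/zenvra";
    assert_eq!(redact_db_url(url), "postgres://***@localhost:5432/zenvra");
    println!("{}", redact_db_url(url));
}
```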
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0b883fff-6097-4c83-a084-fa31a9a0130a

📥 Commits

Reviewing files that changed from the base of the PR and between c9ecbb2 and 2f34dee.

📒 Files selected for processing (13)
  • crates/cli/src/main.rs
  • crates/scanner/src/ai/mod.rs
  • crates/scanner/src/lib.rs
  • crates/server/src/main.rs
  • extensions/vscode/LICENSE
  • extensions/vscode/build-vsix.cjs
  • extensions/vscode/package-vsix.cjs
  • extensions/vscode/package.json
  • extensions/vscode/polyfill-build.cjs
  • extensions/vscode/src/extension.ts
  • extensions/vscode/src/sidebarProvider.ts
  • extensions/vscode/src/types.ts
  • extensions/vscode/zenvra-0.1.1-rc.2.vsix
✅ Files skipped from review due to trivial changes (2)
  • extensions/vscode/LICENSE
  • extensions/vscode/polyfill-build.cjs
🚧 Files skipped from review as they are similar to previous changes (3)
  • extensions/vscode/src/sidebarProvider.ts
  • extensions/vscode/package.json
  • extensions/vscode/src/types.ts

Comment on lines +137 to 145
impl ZenvraConfig {
fn load() -> Self {
let config_path = Self::get_path();
if let Ok(content) = std::fs::read_to_string(config_path) {
serde_json::from_str(&content).unwrap_or_default()
} else {
Self::default()
}
}

⚠️ Potential issue | 🟡 Minor

Surface invalid config instead of silently resetting it.

unwrap_or_default() turns a malformed config file into an empty config, so scans silently ignore saved settings and the next `config set` can overwrite the broken file. Returning Result<Self> here would surface that failure to the caller instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/cli/src/main.rs` around lines 137 - 145, ZenvraConfig::load currently
swallows malformed JSON by using unwrap_or_default; change load to return a
Result<Self, Error> (or Result<Self, anyhow::Error>) and propagate errors from
std::fs::read_to_string and serde_json::from_str instead of falling back to
Self::default(); update callers to handle the Result. Specifically, modify the
ZenvraConfig::load signature to return Result<Self, _>, use Self::get_path()
then map read_to_string errors and pass the file content into
serde_json::from_str returning its Err on parse failure (only use
Self::default() explicitly where a true missing-file case is desired and
handled), and update code that calls ZenvraConfig::load to handle the error
(e.g., surface to the user).
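A std-only sketch of the suggested signature change, with a stand-in line parser where the real code would call serde_json::from_str (the names and error enum here are illustrative). The key point is distinguishing a genuinely missing file from a malformed one:

```rust
use std::{fs, io, path::Path};

#[derive(Debug, Default, PartialEq)]
struct ZenvraConfig {
    api_url: String,
}

#[derive(Debug)]
enum ConfigError {
    Io(io::Error),
    Parse(String),
}

impl ZenvraConfig {
    // Stand-in parser; the real code would call serde_json::from_str here
    // and propagate its error as ConfigError::Parse.
    fn parse(content: &str) -> Result<Self, ConfigError> {
        let api_url = content
            .trim()
            .strip_prefix("api_url=")
            .ok_or_else(|| ConfigError::Parse("missing api_url".into()))?;
        Ok(Self { api_url: api_url.to_string() })
    }

    // Missing file -> default config; malformed file -> error for the caller.
    fn load(path: &Path) -> Result<Self, ConfigError> {
        match fs::read_to_string(path) {
            Ok(content) => Self::parse(&content),
            Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(Self::default()),
            Err(e) => Err(ConfigError::Io(e)),
        }
    }
}

fn main() {
    // A truly missing file still yields the default config.
    let cfg = ZenvraConfig::load(Path::new("/nonexistent/zenvra.json")).unwrap();
    assert_eq!(cfg, ZenvraConfig::default());
    // Malformed content is surfaced instead of silently reset.
    assert!(ZenvraConfig::parse("not a config").is_err());
    println!("load distinguishes missing from malformed");
}
```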

Comment on lines +42 to 49
#[serde(rename_all = "snake_case")]
pub struct AiConfig {
/// Which provider to use.
pub provider: ProviderKind,

/// API key for the provider.
#[serde(alias = "apiKey")]
pub api_key: String,

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, examine the ProviderKind enum definition
cat -n crates/scanner/src/ai/mod.rs | head -100

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 4143


🏁 Script executed:

# Search for all uses of "openai" string in the codebase
rg '"openai"' --type rust --type typescript

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 111


🏁 Script executed:

# Check the ProviderKind enum more carefully, looking for serde attributes
ast-grep --pattern 'enum ProviderKind { $$$ }'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 59


🏁 Script executed:

# Verify serde behavior by looking at how ProviderKind is currently configured
rg -A 10 'enum ProviderKind'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 672


🏁 Script executed:

# Search for "openai" in Rust files
rg '"openai"' crates/ --type rust

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 229


🏁 Script executed:

# Also check for openai in uppercase or mixed case
rg -i 'openai|open_ai' crates/ --type rust | head -50

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 3531


🏁 Script executed:

# Check the CLI main.rs and server main.rs for provider handling
cat -n crates/cli/src/main.rs | sed -n '330,360p'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1423


🏁 Script executed:

# Check server main.rs as well
cat -n crates/server/src/main.rs | sed -n '500,520p'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1039


🏁 Script executed:

# Look for extension.ts
find . -name 'extension.ts' -type f

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 113


🏁 Script executed:

# Check if there are any tests that validate the serde behavior
rg -A 5 'test.*openai|test.*provider' crates/scanner --type rust

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 59


🏁 Script executed:

# Check the VSCode extension for "openai" usage
cat -n extensions/vscode/src/extension.ts | grep -i openai

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 59


🏁 Script executed:

# Check what the server payload type is for the models endpoint
cat -n crates/server/src/main.rs | sed -n '490,505p'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 601


🏁 Script executed:

# Search for where AiConfig might be deserialized from JSON
rg 'AiConfig' crates/ --type rust -B 2 -A 2

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 2455


🏁 Script executed:

# Check if there are any tests or examples that deserialize AiConfig
rg -A 10 'serde_json.*AiConfig|from_str.*AiConfig|from_json.*AiConfig' crates/ --type rust

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 59


🏁 Script executed:

# Look for where ScanRequest is used as input to a server endpoint
rg 'ScanRequest' crates/server/src/main.rs -B 5 -A 10

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1515


🏁 Script executed:

# Check if there's a POST endpoint that accepts ScanRequest
rg 'post\|Post\|Json<ScanRequest' crates/server/src/main.rs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 59


🏁 Script executed:

# Look at the ScanRequest definition in lib.rs
cat -n crates/scanner/src/lib.rs | sed -n '1,150p'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 5819


🏁 Script executed:

# Check git history to see if AiConfig or ProviderKind serde attributes changed recently
git log --oneline -20 -- crates/scanner/src/ai/mod.rs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 188


🏁 Script executed:

# Check if there are any actual tests that send JSON with provider field
rg -A 10 'test.*ai_config\|#\[test\]' crates/scanner/src/ai/ --type rust

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 59


🏁 Script executed:

# Look for any example configs or documentation showing expected JSON format
find . -name '*.json' -o -name '*.md' | xargs grep -l 'ai_config\|provider' 2>/dev/null | head -10

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 158


🏁 Script executed:

# Check if the server actually deserializes AiConfig from incoming JSON
cat -n crates/server/src/main.rs | sed -n '200,250p'

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 2826


🏁 Script executed:

# Check VSCode extension for how it sends AI config to the server
cat -n extensions/vscode/src/extension.ts

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 14601


🏁 Script executed:

# Also check if there are any example requests or API documentation
cat -n CHANGELOG.md | head -100

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1582


🏁 Script executed:

# More specifically, search for any place in the codebase where JSON is sent with provider
rg 'provider.*openai|"provider"' extensions/ docs/ --type typescript --type json

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 111


"openai" JSON payloads from the VSCode extension will fail to deserialize.

The extension sends JSON with "provider": "openai", but ProviderKind with #[serde(rename_all = "snake_case")] expects "open_ai". When the server deserializes the incoming ScanRequest with ai_config: Option<AiConfig>, the enum variant deserialization will fail.

Suggested fix
 pub enum ProviderKind {
     Anthropic,
+    #[serde(rename = "openai", alias = "open_ai")]
     OpenAi,
     Google,
     Custom,
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/ai/mod.rs` around lines 42 - 49, Incoming JSON uses
"provider": "openai" but ProviderKind's current serde rename expects "open_ai",
causing deserialization failures when ScanRequest contains ai_config:
Option<AiConfig>. Fix by updating the ProviderKind enum to accept the
extension's token (e.g., add a serde rename/alias for the OpenAi variant such as
#[serde(rename = "openai")] or #[serde(alias = "openai")]) so deserialization of
AiConfig::provider succeeds; keep references to AiConfig and ScanRequest intact.
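The accept-both-spellings behavior the serde alias produces can be illustrated with a hand-rolled FromStr, a stand-in for what `#[serde(rename = "openai", alias = "open_ai")]` generates during deserialization (this is a sketch, not the crate's actual code):

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum ProviderKind {
    Anthropic,
    OpenAi,
    Google,
    Custom,
}

impl FromStr for ProviderKind {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "anthropic" => Ok(Self::Anthropic),
            // Canonical "openai" plus the snake_case spelling the old
            // rename_all attribute expected.
            "openai" | "open_ai" => Ok(Self::OpenAi),
            "google" => Ok(Self::Google),
            "custom" => Ok(Self::Custom),
            other => Err(format!("unknown provider: {other}")),
        }
    }
}

fn main() {
    // Both the extension's token and the legacy spelling deserialize.
    assert_eq!("openai".parse::<ProviderKind>(), Ok(ProviderKind::OpenAi));
    assert_eq!("open_ai".parse::<ProviderKind>(), Ok(ProviderKind::OpenAi));
    // Anything else is still rejected.
    assert!("OpenAI".parse::<ProviderKind>().is_err());
    println!("both spellings accepted");
}
```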

Comment on lines +119 to +125
let raw_findings = match engine::run_stream(&file_config, tx.clone()).await {
Ok(f) => f,
Err(e) => {
tracing::error!(
"Engine failed for {}: {}",
file_config.file_path.as_ref().unwrap(),
e

🛠️ Refactor suggestion | 🟠 Major

Avoid unwrap() in this library error path.

file_path is populated today, but panicking while handling an engine failure makes this path brittle for future callers. Log as_deref().unwrap_or("<unknown>") or keep the original file.path around instead. As per coding guidelines, "crates/{scanner,server}/**/*.rs: Do not use .unwrap() or .expect() in library and API code".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/scanner/src/lib.rs` around lines 119 - 125, The error logging in the
engine::run_stream match arm uses file_config.file_path.as_ref().unwrap(), which
can panic; change the log to avoid unwrap by using
file_config.file_path.as_deref().unwrap_or("<unknown>") (or capture the original
file path earlier into a local Option/str and use unwrap_or("<unknown>")) so
tracing::error prints a safe path string without risking a panic in the error
path of run_stream.
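A minimal illustration of the suggested pattern; in the real code this would be `file_config.file_path.as_deref().unwrap_or("<unknown>")` inside the tracing::error! call:

```rust
// The error path never panics, even when the path is absent.
fn engine_failure_message(file_path: Option<&str>, err: &str) -> String {
    let path = file_path.unwrap_or("<unknown>");
    format!("Engine failed for {path}: {err}")
}

fn main() {
    assert_eq!(
        engine_failure_message(Some("src/lib.rs"), "timeout"),
        "Engine failed for src/lib.rs: timeout"
    );
    // A missing path degrades to a placeholder instead of a panic.
    assert_eq!(
        engine_failure_message(None, "timeout"),
        "Engine failed for <unknown>: timeout"
    );
    println!("no panic on the error path");
}
```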

Comment on lines +85 to +89
let pool = PgPoolOptions::new()
.max_connections(20)
.connect(&db_url)
.await
.map_err(|e| anyhow::anyhow!("Failed to connect to PostgreSQL at {}: {}", db_url, e))?;

⚠️ Potential issue | 🟠 Major

Do not include DATABASE_URL in the connection error.

If the URL contains credentials, this path leaks them into logs and process output on startup failures. Prefer a generic message or log only a redacted host/database identifier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 85 - 89, The connection error message
currently includes the full DATABASE_URL (db_url) which may leak credentials;
update the error handling around PgPoolOptions::new().connect(&db_url).await
(the pool creation) to avoid logging db_url directly — either log a generic
message like "Failed to connect to PostgreSQL" or extract and log only a
redacted host/database identifier (no user/password) before creating the
anyhow::anyhow! error; ensure the change touches the closure passed to .map_err
so the secret never appears in the error string.
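One possible std-only redaction helper. String surgery like this is only a sketch; a production version would more likely parse the URL properly (e.g. with the url crate) and log host/database only:

```rust
// Drop the userinfo ("user:password") portion of a connection URL before it
// can reach logs. rsplit_once keeps this safe even if the password itself
// contains '@'.
fn redact_db_url(url: &str) -> String {
    match url.rsplit_once('@') {
        Some((before_at, host_part)) => {
            let scheme = before_at.split("://").next().unwrap_or("");
            format!("{scheme}://<redacted>@{host_part}")
        }
        // No userinfo present; nothing to hide.
        None => url.to_string(),
    }
}

fn main() {
    let url = "postgres://zenvra:s3cret@db.internal:5432/zenvra";
    assert_eq!(
        redact_db_url(url),
        "postgres://<redacted>@db.internal:5432/zenvra"
    );
    assert_eq!(
        redact_db_url("postgres://localhost/zenvra"),
        "postgres://localhost/zenvra"
    );
    println!("credentials redacted");
}
```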

Comment on lines +229 to +246
// Persist individual finding
if let Err(e) = sqlx::query(
"INSERT INTO scan_results (scan_id, engine, cve_id, cwe_id, severity, title, description, vulnerable_code, fixed_code, line_start, line_end, file_path)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)"
)
.bind(scan_id)
.bind(format!("{:?}", finding.engine))
.bind(&finding.cve_id)
.bind(&finding.cwe_id)
.bind(finding.severity.to_string())
.bind(&finding.title)
.bind(&finding.description)
.bind(&finding.vulnerable_code)
.bind(&finding.fixed_code)
.bind(finding.line_start as i32)
.bind(finding.line_end as i32)
.bind(&finding.file_path)
.execute(&state_task.db)

⚠️ Potential issue | 🟠 Major

Stop persisting raw code in scan_results.

This stores full vulnerable_code (and the generated replacement) in PostgreSQL for every finding. That creates long-lived retention of user source and any embedded secrets; the workspace path below repeats the same pattern. Persist metadata/snippets/hashes only, and keep full code transient in the live scan stream if you still need it. Based on learnings: "Do not store raw code in the database longer than needed — store scan results only".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/server/src/main.rs` around lines 229 - 246, The current SQL insert in
sqlx::query persists full source for each finding (columns vulnerable_code and
fixed_code bound from finding.vulnerable_code and finding.fixed_code); change it
to avoid storing raw code by removing those columns and .bind calls, instead
persist only metadata and a short snippet or hash (e.g., compute a sha256 or
store a truncated snippet) derived from finding.vulnerable_code/fixed_code and
bind that value; update the INSERT column list and corresponding .bind(...)
calls in the block that executes against state_task.db (and the scan_results
schema if needed) so only scan_id, engine, cve_id, cwe_id, severity, title,
description, file_path, line_start/line_end and the new snippet_hash/snippet
column are stored.
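A sketch of the metadata-only alternative. FNV-1a stands in here for the SHA-256 a real implementation would use (e.g. via the sha2 crate); the snippet length and names are illustrative:

```rust
// Stable fingerprint of the code (FNV-1a, 64-bit) so later scans can match
// a finding without the raw source ever being persisted.
fn fingerprint(code: &str) -> String {
    let mut hash: u64 = 0xcbf2_9ce4_8422_2325;
    for byte in code.bytes() {
        hash ^= u64::from(byte);
        hash = hash.wrapping_mul(0x0000_0100_0000_01b3);
    }
    format!("{hash:016x}")
}

// Keep only a bounded snippet for display; the full code stays transient
// in the live scan stream.
fn snippet(code: &str, max_chars: usize) -> String {
    code.chars().take(max_chars).collect()
}

fn main() {
    let vulnerable_code = "let password = \"hunter2\"; // hardcoded secret";
    let stored_snippet = snippet(vulnerable_code, 20);
    let stored_hash = fingerprint(vulnerable_code);

    assert_eq!(stored_snippet.chars().count(), 20);
    assert_eq!(stored_hash.len(), 16);
    // Re-scanning the same code reproduces the same fingerprint.
    assert_eq!(stored_hash, fingerprint(vulnerable_code));
    println!("snippet={stored_snippet:?} hash={stored_hash}");
}
```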

Comment on lines +3 to +11
if (typeof global.File === 'undefined') {
global.File = class File extends Blob {
constructor(blobParts, fileName, options = {}) {
super(blobParts, options);
this.name = fileName;
this.lastModified = options.lastModified || Date.now();
}
};
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n extensions/vscode/package-vsix.cjs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1183


🏁 Script executed:

# Check for Node.js version constraints
cat package.json | grep -A 5 '"node"' || echo "No explicit node version found in package.json"

# Check if there are any error logs or comments about undici/File issues
rg -i "undici|file.*error|node.*18" --type json --type ts --type js --max-count 5

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 185


🏁 Script executed:

# Find package.json locations
fd package.json

# Also check if there's a .nvmrc or similar
fd -e nvmrc -e versions

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 129


🏁 Script executed:

# Check the vscode extension package.json
cat extensions/vscode/package.json | head -50

# Check if vsce is a dependency and what version
grep -A 2 -B 2 "vsce\|node.*version" extensions/vscode/package.json || echo "Not found with grep"

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1574


🏁 Script executed:

# Search for any error logs or issues related to File/undici
rg -i "file|undici|error" extensions/vscode --type json --type md --type txt --max-count 10

# Check GitHub workflows to see if Node 18 is used
fd -e yml -e yaml | xargs grep -l "node.*version\|18\|20" 2>/dev/null || echo "No version info found"

# Check if there are any existing issues or TODOs about this
rg -i "polyfill|child.*process|vsce" extensions/vscode

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 4247


🏁 Script executed:

cat -n extensions/vscode/build-vsix.cjs

# Also check package.json scripts to see which one is actually used
grep -A 5 '"scripts"' extensions/vscode/package.json

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1327


🏁 Script executed:

# Check if either .cjs file is referenced elsewhere or called by npm scripts
grep -r "package-vsix\|build-vsix" extensions/vscode/ --exclude-dir=node_modules --exclude="*.lock*"

# Check git history to see which file is newer
git log --oneline -n 20 -- extensions/vscode/package-vsix.cjs extensions/vscode/build-vsix.cjs 2>/dev/null || echo "Git history not available"

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 273


🏁 Script executed:

# Check CI/build files for references
fd -e yml -e yaml -e json | xargs grep -l "package-vsix\|build-vsix" 2>/dev/null || echo "No CI references found"

# Check if there are any npm scripts or other references
grep -r "package-vsix\|build-vsix" extensions/vscode --exclude-dir=node_modules --exclude="*.lock*" --exclude="*.vsix"

# Check the git diff or status of these files
git log --oneline --all -- "extensions/vscode/package-vsix.cjs" "extensions/vscode/build-vsix.cjs" 2>/dev/null | head -10

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 211


global.File polyfill does not reach the vsce process spawned via execSync.

Lines 3–11 polyfill global.File in the parent process, but lines 19–22 and line 26 execute vsce in child processes via execSync(). Child processes do not inherit JavaScript globals from the parent; they get a fresh Node.js runtime. This will fail on Node 18+ where undici (a dependency of vsce) requires the File class.

The fix is to run vsce in-process using its programmatic API rather than spawning it as a subprocess. This approach is already implemented in build-vsix.cjs in the same directory.

Example fix
 const { execSync } = require('child_process');
+const { File } = require('buffer');

 // Polyfill File for undici in Node 18
 if (typeof global.File === 'undefined') {
-    global.File = class File extends Blob {
-        constructor(blobParts, fileName, options = {}) {
-            super(blobParts, options);
-            this.name = fileName;
-            this.lastModified = options.lastModified || Date.now();
-        }
-    };
+    global.File = File;
 }

 console.log('Starting packaging...');
-try {
-    execSync('npx -y @vscode/vsce package --no-git-check', { 
-        stdio: 'inherit',
-        env: { ...process.env, NODE_OPTIONS: '--no-warnings' }
-    });
-} catch (e) {
-    console.error('Packaging failed, trying fallback...');
-    execSync('./node_modules/.bin/vsce package --no-git-check', { stdio: 'inherit' });
-}
+try {
+    const vsce = require('@vscode/vsce/out/main');
+    vsce.main(['package', '--no-git-check']).then(() => {
+        console.log('VSIX generated successfully');
+    }).catch(err => {
+        console.error('Packaging failed:', err);
+        process.exit(1);
+    });
+} catch (e) {
+    console.error('Failed to load vsce:', e);
+    process.exit(1);
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/package-vsix.cjs` around lines 3 - 11, The polyfill for
global.File only affects the parent process but vsce is invoked via child
processes using execSync (symbol: execSync) so undici in Node 18+ fails; replace
the execSync-based invocations of "vsce" with an in-process invocation using the
programmatic API (as implemented in build-vsix.cjs) so the polyfilled
global.File is present; locate the execSync calls in package-vsix.cjs and
refactor them to require/import and call the same programmatic functions used by
build-vsix.cjs (or directly use the vsce package API) rather than spawning a
subprocess.

// Try to run vsce via npx
execSync('npx -y @vscode/vsce package --no-git-check', {
stdio: 'inherit',
env: { ...process.env, NODE_OPTIONS: '--no-warnings' }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd extensions/vscode && head -30 package-vsix.cjs | cat -n

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1183


🏁 Script executed:

wc -l extensions/vscode/package-vsix.cjs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 114


🏁 Script executed:

sed -n '1,35p' extensions/vscode/package-vsix.cjs | cat -n

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 1183


🏁 Script executed:

rg "NODE_OPTIONS" --type js --type cjs --type mjs

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 104


🏁 Script executed:

rg "process\.env" extensions/vscode/ --type js --type cjs --type mjs -A 2

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 104


🏁 Script executed:

rg "NODE_OPTIONS" .

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 176


🏁 Script executed:

rg "process\.env" extensions/vscode/ -A 2

Repository: Cameroon-Developer-Network/Zenvra

Length of output: 266


Merge NODE_OPTIONS instead of overwriting.

Line 21 overwrites any existing NODE_OPTIONS from the parent process environment. CI and runtime environments frequently set NODE_OPTIONS with essential flags (heap limits, security flags, etc.), and this code will drop them.

Proposed fix
-        env: { ...process.env, NODE_OPTIONS: '--no-warnings' }
+        env: {
+            ...process.env,
+            NODE_OPTIONS: [process.env.NODE_OPTIONS, '--no-warnings'].filter(Boolean).join(' ')
+        }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
env: { ...process.env, NODE_OPTIONS: '--no-warnings' }
env: {
...process.env,
NODE_OPTIONS: [process.env.NODE_OPTIONS, '--no-warnings'].filter(Boolean).join(' ')
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/package-vsix.cjs` at line 21, The current env assignment ({
...process.env, NODE_OPTIONS: '--no-warnings' }) overwrites any existing
NODE_OPTIONS; change it to merge the existing process.env.NODE_OPTIONS with the
'--no-warnings' flag instead of replacing it: read process.env.NODE_OPTIONS,
append '--no-warnings' when it's not already present, and set that combined
string as NODE_OPTIONS in the env spread so existing CI/runtime flags are
preserved; update the env construction that sets NODE_OPTIONS accordingly.

Comment on lines +77 to +104
const supportedExtensions = [
'py', 'js', 'mjs', 'cjs', 'ts', 'tsx', 'jsx', 'rs', 'go', 'java',
'cs', 'cpp', 'cc', 'c', 'h', 'rb', 'php', 'swift', 'kt', 'kts',
'yaml', 'yml', 'toml', 'json', 'xml', 'env', 'sh', 'bash', 'zsh',
'dockerfile', 'svelte', 'vue'
];

const globPattern = `**/*.{${supportedExtensions.join(',')}}`;
const excludePattern = '{**/node_modules/**,**/target/**,**/.git/**,**/dist/**,**/build/**}';

const files = await vscode.workspace.findFiles(globPattern, excludePattern, 100); // Limit to 100 for now

if (files.length === 0) {
vscode.window.showInformationMessage('Zenvra: No scannable files found in workspace.');
return;
}

const workspaceFiles: WorkspaceFile[] = await Promise.all(
files.map(async (uri) => {
const content = await vscode.workspace.fs.readFile(uri);
const relativePath = vscode.workspace.asRelativePath(uri);
const ext = relativePath.split('.').pop() || 'js';

return {
path: relativePath,
code: Buffer.from(content).toString('utf8'),
language: ext
};

⚠️ Potential issue | 🟡 Minor

Workspace file discovery no longer matches the scanner's supported types.

This list is missing several extensions the CLI scans, and the brace glob will not match Dockerfile at all. On top of that, extensionless files fall back to 'js', so even if they were discovered they'd be classified incorrectly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` around lines 77 - 104, The workspace
discovery misses some scanner-supported types, doesn't match Dockerfile, and
incorrectly defaults extensionless files to 'js'; update the supportedExtensions
array to include all CLI-scanned extensions (add missing ones like 'dockerfile'
as both lowercase and an explicit filename match if needed), change globPattern
to also account for filename-only matches (e.g., include '**/Dockerfile' or
adjust pattern to support names without extensions), and in the files ->
workspaceFiles mapping replace the naive fallback ext =
relativePath.split('.').pop() || 'js' with logic that detects extensionless
filenames (like 'Dockerfile') or returns an explicit language mapping (use the
file basename check and a lookup map) so extensionless files are classified
correctly (refer to supportedExtensions, globPattern, excludePattern, files, and
workspaceFiles).
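The classification logic the prompt asks for is language-agnostic even though the extension is TypeScript; here is a hypothetical Rust sketch (the names and the abbreviated extension map are illustrative, not the extension's actual code):

```rust
// Classify a file by basename first, then by extension; extensionless
// files that aren't recognized are skipped rather than mislabeled as "js".
fn detect_language(relative_path: &str) -> Option<&'static str> {
    let basename = relative_path.rsplit('/').next().unwrap_or(relative_path);
    // Extensionless / dotfile special cases come first.
    match basename {
        "Dockerfile" => return Some("dockerfile"),
        ".env" => return Some("env"),
        _ => {}
    }
    // Fall back to the extension, if there is one.
    let ext = basename.rsplit_once('.').map(|(_, e)| e)?;
    match ext {
        "py" => Some("py"),
        "rs" => Some("rs"),
        "ts" | "tsx" => Some("ts"),
        "js" | "mjs" | "cjs" => Some("js"),
        "yaml" | "yml" => Some("yaml"),
        _ => None,
    }
}

fn main() {
    assert_eq!(detect_language("docker/Dockerfile"), Some("dockerfile"));
    assert_eq!(detect_language("src/lib.rs"), Some("rs"));
    // Extensionless, unrecognized files no longer fall back to "js".
    assert_eq!(detect_language("Makefile"), None);
    println!("classification ok");
}
```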

Comment on lines +147 to +156
while (true) {
const { done, value } = await reader.read();
if (done) break;

const chunk = decoder.decode(value, { stream: true });
const lines = chunk.split('\n');

for (const line of lines) {
if (line.startsWith('data: ')) {
try {

⚠️ Potential issue | 🟠 Major

❓ Verification inconclusive

Script executed:

cat -n extensions/vscode/src/extension.ts | head -180 | tail -50

Repository: Cameroon-Developer-Network/Zenvra


Exit code: 0

stdout:

   131	      throw new Error(errorMsg || response.statusText);
   132	    }
   133	
   134	    const { scan_id } = (await response.json()) as { scan_id: string };
   135	    
   136	    // Subscribe to SSE stream
   137	    const sseResponse = await fetch(`${apiUrl}/api/v1/scan/${scan_id}/events`);
   138	    const body = sseResponse.body;
   139	    if (!body) throw new Error('Failed to connect to event stream');
   140	
   141	    const reader = (body as any).getReader();
   142	    const decoder = new TextDecoder();
   143	    const allFindings: Record<string, Finding[]> = {};
   144	
   145	    sidebarProvider.postMessage({ type: 'progress', data: { message: `Scanning ${files.length} files...`, percentage: 10 } });
   146	
   147	    while (true) {
   148	      const { done, value } = await reader.read();
   149	      if (done) break;
   150	
   151	      const chunk = decoder.decode(value, { stream: true });
   152	      const lines = chunk.split('\n');
   153	
   154	      for (const line of lines) {
   155	        if (line.startsWith('data: ')) {
   156	          try {
   157	            const event = JSON.parse(line.slice(6));
   158	            
   159	            switch (event.type) {
   160	              case 'progress':
   161	                vscode.window.setStatusBarMessage(`$(sync~spin) Zenvra: ${event.data.message}`, 2000);
   162	                sidebarProvider.postMessage({ type: 'progress', data: event.data });
   163	                break;
   164	              case 'finding': {
   165	                const finding = event.data as Finding;
   166	                const filePath = finding.file_path || 'unknown';
   167	                if (!allFindings[filePath]) {
   168	                  allFindings[filePath] = [];
   169	                }
   170	                allFindings[filePath].push(finding);
   171	                
   172	                // Update diagnostics for this specific file
   173	                const workspaceFolder = vscode.workspace.workspaceFolders?.[0];
   174	                if (workspaceFolder) {
   175	                    const fileUri = vscode.Uri.joinPath(workspaceFolder.uri, filePath);
   176	                    updateDiagnosticsForUri(fileUri, allFindings[filePath]);
   177	                }
   178	                
   179	                sidebarProvider.postMessage({ type: 'finding', data: finding });
   180	                break;

Script executed:

sed -n '240,270p' extensions/vscode/src/extension.ts

Repository: Cameroon-Developer-Network/Zenvra


Exit code: 0

stdout:


    const { scan_id } = (await response.json()) as { scan_id: string };
    
    // Subscribe to SSE stream
    const sseResponse = await fetch(`${apiUrl}/api/v1/scan/${scan_id}/events`);
    const body = sseResponse.body;
    if (!body) throw new Error('Failed to connect to event stream');

    const reader = (body as any).getReader();
    const decoder = new TextDecoder();
    const findings: Finding[] = [];

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = decoder.decode(value, { stream: true });
      const lines = chunk.split('\n');

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          try {
            const event = JSON.parse(line.slice(6));
            
            switch (event.type) {
              case 'progress':
                vscode.window.setStatusBarMessage(`$(sync~spin) Zenvra: ${event.data.message}`, 2000);
                // Also notify sidebar
                sidebarProvider.postMessage({ type: 'progress', data: event.data });
                break;
              case 'finding':

Script executed:

rg -n "while \(true\)" extensions/vscode/src/extension.ts | head -5

Repository: Cameroon-Developer-Network/Zenvra


Exit code: 0

stdout:

147:    while (true) {
252:    while (true) {

Script executed:

sed -n '147,200p' extensions/vscode/src/extension.ts

Repository: Cameroon-Developer-Network/Zenvra


Exit code: 0

stdout:

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = decoder.decode(value, { stream: true });
      const lines = chunk.split('\n');

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          try {
            const event = JSON.parse(line.slice(6));
            
            switch (event.type) {
              case 'progress':
                vscode.window.setStatusBarMessage(`$(sync~spin) Zenvra: ${event.data.message}`, 2000);
                sidebarProvider.postMessage({ type: 'progress', data: event.data });
                break;
              case 'finding': {
                const finding = event.data as Finding;
                const filePath = finding.file_path || 'unknown';
                if (!allFindings[filePath]) {
                  allFindings[filePath] = [];
                }
                allFindings[filePath].push(finding);
                
                // Update diagnostics for this specific file
                const workspaceFolder = vscode.workspace.workspaceFolders?.[0];
                if (workspaceFolder) {
                    const fileUri = vscode.Uri.joinPath(workspaceFolder.uri, filePath);
                    updateDiagnosticsForUri(fileUri, allFindings[filePath]);
                }
                
                sidebarProvider.postMessage({ type: 'finding', data: finding });
                break;
              }
              case 'complete': {
                const totalCount = Object.values(allFindings).flat().length;
                vscode.window.setStatusBarMessage(`$(shield) Zenvra: Workspace scan complete (${totalCount} issues)`, 5000);
                sidebarProvider.postMessage({ type: 'complete' });
                return;
              }
              case 'error':
                throw new Error(event.data);
            }
          } catch (e) {
            console.error('Error parsing SSE event:', e);
          }
        }
      }
    }
  } catch (err: unknown) {
    const errorMsg = err instanceof Error ? err.message : String(err);
    vscode.window.showErrorMessage(`Zenvra Workspace Scan Failed: ${errorMsg}`);
  }

Script executed:

sed -n '252,305p' extensions/vscode/src/extension.ts

Repository: Cameroon-Developer-Network/Zenvra


Exit code: 0

stdout:

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = decoder.decode(value, { stream: true });
      const lines = chunk.split('\n');

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          try {
            const event = JSON.parse(line.slice(6));
            
            switch (event.type) {
              case 'progress':
                vscode.window.setStatusBarMessage(`$(sync~spin) Zenvra: ${event.data.message}`, 2000);
                // Also notify sidebar
                sidebarProvider.postMessage({ type: 'progress', data: event.data });
                break;
              case 'finding':
                findings.push(event.data);
                updateDiagnostics(document, findings);
                sidebarProvider.postMessage({ type: 'finding', data: event.data });
                break;
              case 'complete': {
                const count = findings.length;
                if (count === 0) {
                  vscode.window.setStatusBarMessage('$(shield) Zenvra: No issues found', 3000);
                } else {
                  vscode.window.setStatusBarMessage(`$(warning) Zenvra: Found ${count} issue(s)`, 3000);
                }
                sidebarProvider.postMessage({ type: 'complete' });
                return;
              }
              case 'error':
                throw new Error(event.data);
            }
          } catch (e) {
            console.error('Error parsing SSE event:', e);
          }
        }
      }
    }
  } catch (err: unknown) {
    const errorMsg = err instanceof Error ? err.message : String(err);
    vscode.window.showErrorMessage(`Zenvra Scan Failed: ${errorMsg}`);
    vscode.window.setStatusBarMessage('$(error) Zenvra: Scan failed', 3000);
  }
}

function updateDiagnostics(document: vscode.TextDocument, findings: Finding[]): void {
  updateDiagnosticsForUri(document.uri, findings);
}

function updateDiagnosticsForUri(uri: vscode.Uri, findings: Finding[]): void {

Buffer SSE data across chunks before parsing it.

Both scanWorkspace (lines 147–200) and scanDocument (lines 252–305) share identical SSE parsing logic that splits each decoded chunk on \n. ReadableStream chunks are arbitrary in size and can fragment SSE events across boundaries. For example, a chunk may end mid-JSON (data: {"foo"), with the rest arriving in the next chunk. Splitting each chunk independently causes these fragments to be orphaned—they won't match the data: prefix check and are silently discarded. This causes intermittent loss of findings, progress, and complete events.

Accumulate incomplete lines in a buffer and only parse complete lines once they end with \n.
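A minimal sketch of that buffering step, as a standalone helper (the name `drainSseBuffer` is illustrative, not from the extension): split the accumulated text on `\n`, hand back every complete line, and keep the trailing partial fragment as the new buffer.

```typescript
// Split accumulated SSE text into complete lines plus a trailing remainder.
// The remainder is either '' (buffer ended on '\n') or a partial line that
// must be carried into the next chunk.
function drainSseBuffer(buffer: string): { lines: string[]; rest: string } {
  const parts = buffer.split('\n');
  const rest = parts.pop() ?? '';
  return { lines: parts, rest };
}

// Usage inside the reader loop (context elided):
// let buffer = '';
// while (true) {
//   const { done, value } = await reader.read();
//   if (done) break;
//   buffer += decoder.decode(value, { stream: true });
//   const { lines, rest } = drainSseBuffer(buffer);
//   buffer = rest;
//   for (const line of lines) { /* parse 'data: ' lines as before */ }
// }
```

With this shape, a chunk ending mid-JSON (`data: {"foo`) stays in `buffer` until its continuation arrives, so the `data: ` prefix check only ever sees whole lines.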

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` around lines 147 - 156, The SSE parsing
in scanWorkspace and scanDocument discards fragments when a chunk splits an
event; fix both by maintaining a persistent string buffer outside the reader
loop, append decoder.decode(value, {stream: true}) to that buffer each
iteration, then split buffer on '\n' and process every complete line except the
final partial fragment (keep that trailing piece as the new buffer for the next
loop); after reader.done is true, process any remaining complete lines and
handle a final partial line if it forms a full SSE event. Update the parsing
blocks in scanWorkspace and scanDocument to use this buffered-accumulation
approach so "data: " JSON is never lost across chunk boundaries.

Comment on lines +156 to +193
          try {
            const event = JSON.parse(line.slice(6));

            switch (event.type) {
              case 'progress':
                vscode.window.setStatusBarMessage(`$(sync~spin) Zenvra: ${event.data.message}`, 2000);
                sidebarProvider.postMessage({ type: 'progress', data: event.data });
                break;
              case 'finding': {
                const finding = event.data as Finding;
                const filePath = finding.file_path || 'unknown';
                if (!allFindings[filePath]) {
                  allFindings[filePath] = [];
                }
                allFindings[filePath].push(finding);

                // Update diagnostics for this specific file
                const workspaceFolder = vscode.workspace.workspaceFolders?.[0];
                if (workspaceFolder) {
                  const fileUri = vscode.Uri.joinPath(workspaceFolder.uri, filePath);
                  updateDiagnosticsForUri(fileUri, allFindings[filePath]);
                }

                sidebarProvider.postMessage({ type: 'finding', data: finding });
                break;
              }
              case 'complete': {
                const totalCount = Object.values(allFindings).flat().length;
                vscode.window.setStatusBarMessage(`$(shield) Zenvra: Workspace scan complete (${totalCount} issues)`, 5000);
                sidebarProvider.postMessage({ type: 'complete' });
                return;
              }
              case 'error':
                throw new Error(event.data);
            }
          } catch (e) {
            console.error('Error parsing SSE event:', e);
          }

⚠️ Potential issue | 🟠 Major

Do not swallow server error events inside the parse loop.

Right now throw new Error(event.data) is immediately caught by the inner catch and only logged as a parse error. That means failed scans never hit the outer error handler, so diagnostics/status can stay stale and the user gets no failure message.

Also applies to: 261-290
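One way to implement the suggested fix, sketched as standalone helpers (names like `parseSseLine` and `checkForServerError` are hypothetical, not from the extension): keep the inner catch for JSON parse failures only, and surface server `error` events through a return value so they can be re-thrown outside the catch.

```typescript
// Hypothetical event shape matching the SSE payloads in the review.
type SseEvent = { type: string; data?: unknown };

// Parse one SSE line. Parse failures are non-fatal: log and keep streaming.
function parseSseLine(line: string): SseEvent | null {
  if (!line.startsWith('data: ')) return null;
  try {
    return JSON.parse(line.slice(6)) as SseEvent;
  } catch {
    console.error('Error parsing SSE event:', line);
    return null;
  }
}

// Detect a server-reported failure without throwing inside the parse catch.
function checkForServerError(event: SseEvent | null): string | undefined {
  if (event && event.type === 'error') return String(event.data);
  return undefined;
}

// In the reader loop (context elided):
// const event = parseSseLine(line);
// const serverError = checkForServerError(event);
// if (serverError) throw new Error(serverError); // now reaches the outer handler
```

Because the `throw` happens after the parse try/catch rather than inside it, the failure propagates to the outer `catch (err: unknown)` block, which shows the error message and updates the status bar as intended.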

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/vscode/src/extension.ts` around lines 156 - 193, The inner
try/catch is swallowing server 'error' events because throw new
Error(event.data) is caught immediately; change the flow so 'error' events are
propagated to the outer handler instead of being thrown inside the parse block —
e.g., in the switch for event.type === 'error' set a local variable (e.g.,
serverError = event.data), post a message to sidebarProvider ({ type: 'error',
data: event.data }) and after the try block, if serverError is set call the
outer error handling path (throw new Error(serverError) or invoke the same
handler used elsewhere) so updateDiagnosticsForUri, status messages and the
outer error handler see the failure; avoid removing JSON.parse error handling —
only move the throw/propagation outside the catch scope.


Labels

enhancement (New feature or request), good first issue (Good for newcomers)
