
Staging-->Main #438

Merged
gkorland merged 23 commits into main from staging
Mar 2, 2026

Conversation

@gkorland
Contributor

@gkorland gkorland commented Feb 25, 2026

Summary by CodeRabbit

  • Chores

    • Updated backend and frontend dependencies and developer tools.
  • New Features

    • End-to-end CSRF protection: server middleware, CSRF cookie, and automatic client CSRF headers for state-changing requests.
    • Improved connection reliability with retry and polling support for database/connect flows.
  • Tests

    • Added comprehensive CSRF middleware tests and strengthened end-to-end tests with auth-state handling, retries, polling, extended timeouts, and improved diagnostics.
  • Bug Fixes

    • Frontend-serving now returns an explicit 404 JSON when the built app is missing.

Copilot AI and others added 10 commits February 24, 2026 12:48
- Add waitForGraphPresent() polling helper to apiCalls.ts to retry
  getGraphs() until expected graph appears instead of one-shot calls
- Add connectDatabaseWithRetry() helper to retry streaming connection
  on transient errors with diagnostic logging
- Enhance parseStreamingResponse() to log error message details
- Update all database.spec.ts tests to use scoped test.setTimeout(120000/180000)
- Increase waitForDatabaseConnection timeout to 90s in all DB connection tests
- Replace bare getGraphs() calls with waitForGraphPresent() polling
- Add console.log diagnostics throughout for easier CI debugging
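The polling-and-retry approach outlined in these commits can be sketched as a small generic helper. This is illustrative only: the name, the defaults, and the choice to throw on timeout are assumptions, not the repo's actual waitForGraphPresent.

```typescript
// Hypothetical sketch of the polling pattern: re-fetch until a predicate
// holds or a deadline passes. Throwing on timeout keeps failures explicit.
async function pollUntil<T>(
  fetchOnce: () => Promise<T>,
  predicate: (value: T) => boolean,
  timeoutMs = 30_000,
  pollIntervalMs = 1_000,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let last = await fetchOnce();
  while (!predicate(last) && Date.now() < deadline) {
    const remaining = deadline - Date.now();
    // Sleep for at most the remaining budget before retrying.
    await new Promise((r) => setTimeout(r, Math.min(pollIntervalMs, remaining)));
    last = await fetchOnce();
  }
  if (!predicate(last)) {
    throw new Error(`pollUntil: predicate not satisfied within ${timeoutMs}ms`);
  }
  return last;
}
```

A caller in the spirit of waitForGraphPresent could then look like pollUntil(() => getGraphs(), (graphs) => graphs.includes(expectedId)).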

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
Bumps [playwright](https://github.com/microsoft/playwright-python) from 1.57.0 to 1.58.0.
- [Release notes](https://github.com/microsoft/playwright-python/releases)
- [Commits](microsoft/playwright-python@v1.57.0...v1.58.0)

---
updated-dependencies:
- dependency-name: playwright
  dependency-version: 1.58.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
…cific DB predicates, polling for deletion

- connectDatabaseWithRetry: wrap per-attempt logic in try/catch so
  network/parse exceptions don't abort retries; log with attempt# via
  console.error; backoff delay behaviour unchanged
- Add expect(messages.length).toBeGreaterThan(0) guard before accessing
  finalMessage in all 4 caller blocks (PostgreSQL API, MySQL API,
  PostgreSQL delete, MySQL delete)
- Fix UI-to-API test predicates from generic 'graphs.length > 0' to
  'testdb'/'_testdb' match, avoiding false positives on pre-existing graphs
- Replace wait(1000)+getGraphs() in both delete tests with
  waitForGraphPresent polling until the deleted graphId is absent

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
- Rename waitForGraphPresent -> waitForGraphs in apiCalls.ts (more
  neutral name since it's used for both presence and absence checks)
- Update all 10 call sites in database.spec.ts accordingly
- Change outer test.describe -> test.describe.serial to prevent
  cross-test interference on local multi-worker runs (CI is already
  single-worker via workers: CI ? 1 : undefined in playwright.config.ts)

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
Replace id.includes('testdb_delete') with
id === 'testdb_delete' || id.endsWith('_testdb_delete') in both
delete test predicates and find() calls so only the exact graph forms
('testdb_delete' or '{userId}_testdb_delete') match, preventing
accidental matches on unrelated graph names.

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
…wright-logs

Fix flaky Playwright e2e tests: DB connection timeouts and streaming response errors
…ht-1.58.0

Bump playwright from 1.57.0 to 1.58.0
@gkorland gkorland requested a review from Copilot February 25, 2026 08:39
@overcut-ai

overcut-ai bot commented Feb 25, 2026

Completed Working on "Code Review"

✅ Workflow completed successfully.


👉 View complete log

@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 February 25, 2026 08:39 Destroyed
@railway-app

railway-app bot commented Feb 25, 2026

🚅 Deployed to the QueryWeaver-pr-438 environment in queryweaver

Service: QueryWeaver (Web) | Status: ✅ Success (View Logs) | Updated (UTC): Mar 2, 2026 at 12:12 pm

@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 February 25, 2026 08:39 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging February 25, 2026 08:39 Inactive
@github-actions

github-actions bot commented Feb 25, 2026

Dependency Review

The following issues were found:

  • ✅ 0 vulnerable package(s)
  • ✅ 0 package(s) with incompatible licenses
  • ✅ 0 package(s) with invalid SPDX license definitions
  • ⚠️ 15 package(s) with unknown licenses.
  • ⚠️ 1 package with OpenSSF Scorecard issues.

View full job summary

@coderabbitai
Contributor

coderabbitai bot commented Feb 25, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Adds server-side CSRF middleware with double-submit cookie protection, client CSRF helpers, wires CSRF into frontend requests and E2E helpers, enhances E2E API utilities with polling/retry for DB connections, updates tests/pages to use request contexts and auth state, and bumps several dependencies.
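The double-submit decision logic described here can be sketched roughly as follows. The actual middleware is Python (api/app_factory.py); this TypeScript sketch only illustrates the checks, with invented types and names, and a real implementation should compare tokens in constant time.

```typescript
// Illustrative sketch of a double-submit CSRF check (not the repo's code).
const SAFE_METHODS = new Set(["GET", "HEAD", "OPTIONS"]);

interface RequestLike {
  method: string;
  headers: Record<string, string | undefined>;
  cookies: Record<string, string | undefined>;
}

function csrfCheck(
  req: RequestLike,
  exemptPaths: string[],
  path: string,
): { ok: boolean; reason: string } {
  // Safe methods never mutate state, so they bypass the check
  // (and are where the csrf_token cookie would be issued).
  if (SAFE_METHODS.has(req.method.toUpperCase())) return { ok: true, reason: "safe method" };
  if (exemptPaths.some((p) => path.startsWith(p))) return { ok: true, reason: "exempt path" };
  // Bearer-authenticated API clients are not cookie-driven, so CSRF does not apply.
  if (req.headers["authorization"]?.startsWith("Bearer ")) return { ok: true, reason: "bearer auth" };
  const cookie = req.cookies["csrf_token"];
  const header = req.headers["x-csrf-token"];
  // Production code should use a constant-time comparison here.
  if (cookie && header && cookie === header) return { ok: true, reason: "token match" };
  return { ok: false, reason: "missing or mismatched CSRF token" }; // would map to 403
}
```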

Changes

  • Dependency updates (Pipfile, app/package.json, .github/workflows/playwright.yml): Bumped FastAPI/uvicorn/litellm/playwright and @falkordb/canvas/globals; updated the GitHub Action upload-artifact from v6→v7.
  • Server CSRF & routing (api/app_factory.py): Added _is_secure_request, CSRFMiddleware (double-submit cookie CSRF), CSRF cookie generation/validation, integrated the middleware, and improved React asset 404 handling.
  • CSRF unit tests (tests/test_csrf_middleware.py): New comprehensive tests covering cookie issuance, safe-method bypass, token validation, Bearer/exempt-path bypasses, and HTTP/HTTPS cookie flags.
  • Frontend CSRF usage (app/src/lib/csrf.ts, app/src/components/modals/DatabaseModal.tsx, app/src/pages/Index.tsx, app/src/services/*.ts): New getCsrfToken()/csrfHeaders() helpers and inclusion of the X-CSRF-Token header in state-changing frontend requests (DB modal, schema refresh, auth logout, chat, tokens, DB service calls).
  • E2E CSRF-aware request helpers (e2e/infra/api/apiRequests.ts): Extracts the CSRF token from Set-Cookie, adds a per-request-context token cache (WeakMap), getCsrfToken(origin, ctx), originOf(), and updates post/patch/delete to include X-CSRF-Token.
  • E2E API utilities, polling & retry (e2e/logic/api/apiCalls.ts): ApiCalls now accepts a Playwright request context, uses a defaultRequestContext, improves streaming parse logging, and adds waitForGraphs(...) and connectDatabaseWithRetry(...).
  • E2E tests & auth-state pages (e2e/tests/database.spec.ts, e2e/tests/chat.spec.ts): Tests pass the Playwright request into ApiCalls, create pages with auth-state files, replace direct reads with polling (waitForGraphs), use connectDatabaseWithRetry, increase timeouts, and add logging for stability.
  • Token endpoints (app/src/services/tokens.ts): Added CSRF headers to token generation/deletion requests.
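On the client side, helpers like the getCsrfToken()/csrfHeaders() pair described for app/src/lib/csrf.ts presumably boil down to reading the csrf_token cookie and echoing it back as a header. A hypothetical, testable sketch, parameterized on the cookie string (in the browser you would pass document.cookie):

```typescript
// Illustrative client-side CSRF helpers (not the repo's actual code).
// Parse the csrf_token value out of a cookie string.
function getCsrfToken(cookieString: string): string | undefined {
  for (const part of cookieString.split(";")) {
    const [name, ...rest] = part.trim().split("=");
    if (name === "csrf_token") return decodeURIComponent(rest.join("="));
  }
  return undefined;
}

// Build the header object for state-changing requests; empty if no cookie yet.
function csrfHeaders(cookieString: string): Record<string, string> {
  const token = getCsrfToken(cookieString);
  return token ? { "X-CSRF-Token": token } : {};
}
```

A state-changing request could then spread the result into its headers, e.g. fetch(url, { method: "POST", headers: { ...csrfHeaders(document.cookie) } }).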

Sequence Diagram(s)

sequenceDiagram
    rect rgba(200,220,255,0.5)
    participant Browser as Browser
    end
    rect rgba(200,255,220,0.5)
    participant App as FastAPIApp
    participant Middleware as CSRFMiddleware
    end
    rect rgba(255,230,200,0.5)
    participant Handler as AppHandler
    end
    Browser->>App: GET /auth-status
    App->>Middleware: incoming request
    Middleware->>Handler: safe-method -> ensure csrf_token cookie (Set-Cookie)
    Handler-->>Browser: 200 OK (csrf_token cookie)
    Browser->>App: POST /graphs/... with X-CSRF-Token header
    App->>Middleware: validate cookie vs X-CSRF-Token
    alt token valid or bearer/exempt or path exempt
        Middleware->>Handler: forward request
        Handler-->>Browser: 200 OK
    else invalid/missing
        Middleware-->>Browser: 403 Forbidden (sets/refreshes csrf_token cookie)
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇 I hid a token in my cookie snug,
I hopped and polled when databases dug,
Middleware guards the gate,
Frontend sends the crate,
Tests cheer — the rabbit gives a happy tug!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Title check: ❓ Inconclusive. The title 'Staging-->Main' is vague and generic, providing no meaningful information about the actual changes in the pull request. Resolution: use a descriptive title that summarizes the main changes, such as 'Add CSRF protection and improve test reliability' or 'Implement CSRF middleware and enhance API retry logic'.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

Copilot AI left a comment


Pull request overview

This PR syncs staging into main by updating dependency lockfiles and improving Playwright E2E database tests to be more reliable in CI (timeouts, retries, and polling for eventual consistency).

Changes:

  • Bump the frontend app version in package-lock.json and update lockfile metadata.
  • Improve E2E database connection/deletion stability via serial execution, longer timeouts, and API polling/retries.
  • Update Python Playwright dev dependency from 1.57.0 to 1.58.0 (and refresh Pipfile.lock).

Reviewed changes

Copilot reviewed 3 out of 5 changed files in this pull request and generated no comments.

Show a summary per file:

  • package-lock.json: Updates app version metadata and lockfile entries.
  • e2e/tests/database.spec.ts: Makes DB tests serial and more CI-robust via explicit timeouts, retries, and polling.
  • e2e/logic/api/apiCalls.ts: Adds the waitForGraphs polling helper and connectDatabaseWithRetry to reduce flaky streaming connect behavior.
  • Pipfile: Bumps the Python Playwright dev dependency to ~=1.58.0.
  • Pipfile.lock: Updates lock hashes and the pinned Playwright version to ==1.58.0.

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
e2e/tests/database.spec.ts (1)

354-361: ⚠️ Potential issue | 🟠 Major

Missing waitForDatabaseConnection before UI interaction — inconsistent with MySQL delete test.

The PostgreSQL delete test creates a new page (Line 355) and immediately interacts with the database selector dropdown (Line 359) without waiting for the UI to reflect the connected database. Compare with the MySQL delete test (Lines 407–411) which correctly calls homePage.waitForDatabaseConnection(90000) before UI interaction.

This omission could cause flaky failures in CI when the UI hasn't finished loading the schema.

Proposed fix
       const homePage = await browser.createNewPage(HomePage, getBaseUrl());
       await browser.setPageToFullScreen();
 
+      // Wait for UI to reflect the connection (increased timeout for schema loading)
+      const connectionEstablished = await homePage.waitForDatabaseConnection(90000);
+      if (!connectionEstablished) {
+        console.log('[PostgreSQL delete] waitForDatabaseConnection timed out');
+      }
+      expect(connectionEstablished).toBeTruthy();
+
       // Delete via UI - open dropdown, click delete, confirm
       await homePage.clickOnDatabaseSelector();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/tests/database.spec.ts` around lines 354 - 361, Add a wait for the UI to
finish connecting before interacting with the database selector: call
homePage.waitForDatabaseConnection(90000) after creating the page (after
browser.createNewPage/HomePage) and before homePage.clickOnDatabaseSelector();
this mirrors the MySQL delete test and prevents flakiness when invoking
homePage.clickOnDatabaseSelector(), homePage.clickOnDeleteGraph(graphId!), and
homePage.clickOnDeleteModalConfirm().
🧹 Nitpick comments (1)
e2e/tests/database.spec.ts (1)

22-290: Consider parameterizing PostgreSQL/MySQL test pairs to reduce duplication.

Each connection mode (API, URL, Manual) has near-identical PostgreSQL and MySQL tests differing only in the connection URL, database type string, and log label. This could be consolidated using a data-driven pattern:

const databases = [
  { name: 'PostgreSQL', type: 'postgresql', url: () => getTestDatabases().postgres },
  { name: 'MySQL', type: 'mysql', url: () => getTestDatabases().mysql },
];

for (const db of databases) {
  test(`connect ${db.name} via API -> verify in UI`, async () => {
    // shared logic using db.type, db.url(), db.name
  });
}

This would halve the test code and make it easier to add new database types in the future.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/tests/database.spec.ts` around lines 22 - 290, Tests for PostgreSQL and
MySQL are duplicated across connection modes; replace the repeated test blocks
with a data-driven loop using a databases array (e.g., const databases = [{
name: 'PostgreSQL', type: 'postgresql', url: () => getTestDatabases().postgres
}, { name: 'MySQL', type: 'mysql', url: () => getTestDatabases().mysql }]) and
iterate to create tests for each mode. For each generated test, reuse the shared
logic that calls apiCall.connectDatabaseWithRetry, apiCall.waitForGraphs, and
the HomePage methods (waitForDatabaseConnection, isDatabaseConnected,
getSelectedDatabaseName, clickOnDatabaseSelector, isDatabaseInList,
clickOnConnectDatabase, selectDatabaseType,
selectConnectionModeUrl/selectConnectionModeManual, enterConnectionUrl,
enterConnectionDetails, clickOnDatabaseModalConnect) while substituting db.type,
db.url(), and db.name for the database-specific values and log labels. Ensure
assertions and timeouts remain the same and preserve the graphId lookup
(graphsList.find(...)) and final expectations. Finally, remove the original
duplicated test blocks so each scenario is only defined once per database via
the loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@e2e/tests/database.spec.ts`:
- Around line 354-361: Add a wait for the UI to finish connecting before
interacting with the database selector: call
homePage.waitForDatabaseConnection(90000) after creating the page (after
browser.createNewPage/HomePage) and before homePage.clickOnDatabaseSelector();
this mirrors the MySQL delete test and prevents flakiness when invoking
homePage.clickOnDatabaseSelector(), homePage.clickOnDeleteGraph(graphId!), and
homePage.clickOnDeleteModalConfirm().

---

Nitpick comments:
In `@e2e/tests/database.spec.ts`:
- Around line 22-290: Tests for PostgreSQL and MySQL are duplicated across
connection modes; replace the repeated test blocks with a data-driven loop using
a databases array (e.g., const databases = [{ name: 'PostgreSQL', type:
'postgresql', url: () => getTestDatabases().postgres }, { name: 'MySQL', type:
'mysql', url: () => getTestDatabases().mysql }]) and iterate to create tests for
each mode. For each generated test, reuse the shared logic that calls
apiCall.connectDatabaseWithRetry, apiCall.waitForGraphs, and the HomePage
methods (waitForDatabaseConnection, isDatabaseConnected,
getSelectedDatabaseName, clickOnDatabaseSelector, isDatabaseInList,
clickOnConnectDatabase, selectDatabaseType,
selectConnectionModeUrl/selectConnectionModeManual, enterConnectionUrl,
enterConnectionDetails, clickOnDatabaseModalConnect) while substituting db.type,
db.url(), and db.name for the database-specific values and log labels. Ensure
assertions and timeouts remain the same and preserve the graphId lookup
(graphsList.find(...)) and final expectations. Finally, remove the original
duplicated test blocks so each scenario is only defined once per database via
the loop.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aa7d64c and f859b63.

⛔ Files ignored due to path filters (2)
  • Pipfile.lock is excluded by !**/*.lock
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (3)
  • Pipfile
  • e2e/logic/api/apiCalls.ts
  • e2e/tests/database.spec.ts


@overcut-ai overcut-ai bot left a comment


Summary: 6 MAJOR, 1 MINOR findings. Dependency and lockfile drift currently breaks both Pipenv and npm installs. The new API helper logic swallows underlying HTTP/timeout failures, causing retries to return misleading data and making test flakiness harder to diagnose. Polling now logs full graph payloads, which unnecessarily exposes identifiers in CI logs.

Next steps: (1) Re-align Pipfile/Pipfile.lock and regenerate package-lock.json using the package manager without manual peer flags. (2) Update the API helpers to throw once retries/polling time out and preserve HTTP status/body information. (3) Reduce graph logging verbosity or gate it behind a debug flag to avoid leaking sensitive details.

Comment thread Pipfile

  pytest = "~=8.4.2"
  pylint = "~=4.0.3"
- playwright = "~=1.57.0"
+ playwright = "~=1.58.0"


[major]: Pipfile still pins pytest-playwright~=0.7.1 but the regenerated lockfile resolves pytest-playwright 0.7.2. Pipenv will refuse to install because the lock hashes no longer match the Pipfile entry, so the Playwright 1.58 upgrade cannot be installed. Keep both files on the same version (ideally 0.7.2 which is the first release that supports Playwright 1.58) and regenerate the lockfile.

Comment thread package-lock.json
"version": "22.19.7",
"dev": true,
"license": "MIT",
"peer": true,


[major]: The added "peer": true flags are not produced by npm itself—running npm install immediately rewrites the lock and removes them, leaving the repo dirty and the lockfile non‑deterministic. Please regenerate the lock via the package manager and drop these manual edits so installs do not constantly produce diffs.

Comment thread package-lock.json Outdated
"app/node_modules/react": {
"version": "18.3.1",
"license": "MIT",
"peer": true,


[major]: Marking core packages such as React, Tailwind, date-fns, @types/*, etc. with "peer": true means npm will omit them when installs are run with --omit=peer (common for production deploys), causing runtime/build failures because those packages are actually direct deps. They should remain normal dependencies in the lockfile.

Comment thread e2e/logic/api/apiCalls.ts
await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
}
}
console.log(


[major]: connectDatabaseWithRetry catches every exception and, after all attempts fail, just returns the last (possibly empty) message array. Callers now proceed as if they received a response, so tests fail later with vague length/final_result assertions and the original HTTP/parse error is lost. After the last attempt the helper should throw with the most recent error or status so failures surface immediately with actionable context.
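The suggested fix can be sketched as a generic retry wrapper that records the last error and rethrows it once attempts are exhausted. This is illustrative, not the repo's connectDatabaseWithRetry; all names here are invented.

```typescript
// Sketch of a retry wrapper that surfaces the last failure instead of
// swallowing it, so callers see the real cause rather than an empty result.
async function retry<T>(
  attempt: () => Promise<T>,
  maxAttempts = 3,
  delayMs = 1_000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 1; i <= maxAttempts; i += 1) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err;
      console.error(`attempt ${i}/${maxAttempts} failed:`, err);
      if (i < maxAttempts) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  // All attempts failed: rethrow with the original cause attached.
  throw new Error(`retry: all ${maxAttempts} attempts failed: ${String(lastError)}`);
}
```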

Comment thread e2e/logic/api/apiCalls.ts
await new Promise((resolve) =>
setTimeout(resolve, Math.min(pollIntervalMs, remaining))
);
}


[major]: waitForGraphs always returns the last list even when the predicate is never satisfied, so higher-level tests cannot distinguish a real timeout from a legitimate state (e.g. deleting a graph). This silently masks flakes and makes retries continue forever. The helper should reject once timeoutMs elapses (or resolve with a flag) so callers can fail fast with a clear timeout reason instead of treating an incomplete list as success.



[minor]: When the predicate fails, waitForGraphs dumps every graph id in the console. In CI this leaks real customer database names and can produce multi‑kilobyte logs on every flake. Consider logging only the count or matching ids and gate the verbose dump behind a debug flag.

Comment thread e2e/logic/api/apiCalls.ts
.split("|||FALKORDB_MESSAGE_BOUNDARY|||")
.filter((msg) => msg.trim())
.map((msg) => JSON.parse(msg.trim()));
// Log error messages to help diagnose CI failures


[major]: parseStreamingResponse now swallows response-level errors: if the body is empty or not JSON, the catch path throws a new Failed to parse error without logging the HTTP status/text. Combined with the new retry helper, this makes debugging very difficult. Preserve the original status/body (or rethrow) so callers can see whether the backend failed vs. the parser itself.
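One way to preserve that context, sketched against the boundary string visible in the diff (the function name and signature are illustrative, not the repo's actual parseStreamingResponse):

```typescript
// Sketch: parse a boundary-delimited streaming body, keeping the HTTP
// status and a body excerpt in the error when JSON parsing fails, so a
// backend failure is distinguishable from a parser bug.
const BOUNDARY = "|||FALKORDB_MESSAGE_BOUNDARY|||";

function parseStreamingResponse(
  body: string,
  httpStatus: number,
): Array<Record<string, unknown>> {
  try {
    return body
      .split(BOUNDARY)
      .filter((msg) => msg.trim())
      .map((msg) => JSON.parse(msg.trim()));
  } catch (err) {
    throw new Error(
      `Failed to parse streaming response (HTTP ${httpStatus}): ${String(err)}; ` +
        `body starts with: ${body.slice(0, 200)}`,
    );
  }
}
```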

Comment thread e2e/logic/api/apiCalls.ts
setTimeout(resolve, Math.min(pollIntervalMs, remaining))
);
}
console.log(


[minor]: When the predicate fails, waitForGraphs dumps every graph id in the console. In CI this leaks real customer database names and can produce multi‑kilobyte logs on every flake. Consider logging only the count or matching ids and gate the verbose dump behind a debug flag.

Update dependency versions:
- fastapi: ~=0.131.0 → ~=0.133.0
- uvicorn: ~=0.40.0 → ~=0.41.0
- litellm: ~=1.80.9 → ~=1.81.15
- playwright: ~=1.57.0 → ~=1.58.0
- globals (npm): ^15.15.0 → ^17.3.0

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 February 25, 2026 10:55 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging February 25, 2026 10:55 Inactive
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
e2e/tests/database.spec.ts (1)

354-360: ⚠️ Potential issue | 🟠 Major

PostgreSQL delete test skips waitForDatabaseConnection before UI interaction — unlike the MySQL delete test.

The MySQL delete test (lines 407-411) correctly awaits waitForDatabaseConnection(90000) before opening the database selector, but the PostgreSQL delete test jumps straight to clickOnDatabaseSelector() on a freshly created page without any UI-readiness check. The schema may still be loading at that point, making this test prone to flakiness where the selector dropdown does not yet contain graphId.

🐛 Proposed fix
   const homePage = await browser.createNewPage(HomePage, getBaseUrl());
   await browser.setPageToFullScreen();

+  // Wait for UI to reflect the connection (increased timeout for schema loading)
+  const connectionEstablished = await homePage.waitForDatabaseConnection(90000);
+  if (!connectionEstablished) {
+    console.log('[PostgreSQL delete] waitForDatabaseConnection timed out');
+  }
+  expect(connectionEstablished).toBeTruthy();
+
   // Delete via UI - open dropdown, click delete, confirm
   await homePage.clickOnDatabaseSelector();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/tests/database.spec.ts` around lines 354 - 360, The PostgreSQL delete
test is missing the UI readiness wait used in the MySQL test; before calling
HomePage.clickOnDatabaseSelector() in the block that creates homePage (created
via browser.createNewPage(HomePage, getBaseUrl())), call and await
waitForDatabaseConnection(90000) to ensure the schema and selector are loaded,
then proceed to clickOnDatabaseSelector() and clickOnDeleteGraph(graphId!); this
mirrors the MySQL delete test flow and prevents flakiness.
♻️ Duplicate comments (4)
Pipfile (1)

25-26: ⚠️ Potential issue | 🟠 Major

playwright / pytest-playwright version mismatch is still unresolved.

playwright was bumped to ~=1.58.0 but pytest-playwright remains at ~=0.7.1. pytest-playwright 0.7.2 was released November 24, 2025 while 0.7.1 was released September 8, 2025. Bump pytest-playwright to ~=0.7.2 and regenerate Pipfile.lock to keep hashes consistent.

🐛 Proposed fix
-pytest-playwright = "~=0.7.1"
+pytest-playwright = "~=0.7.2"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Pipfile` around lines 25 - 26, Update the pytest-playwright dependency to
match the Playwright bump by changing "pytest-playwright" from "~=0.7.1" to
"~=0.7.2" in the Pipfile and then regenerate the Pipfile.lock (e.g., run pipenv
lock) so the lockfile hashes and resolved versions are consistent with the
updated "playwright" and "pytest-playwright" entries.
e2e/tests/database.spec.ts (1)

363-368: ⚠️ Potential issue | 🟠 Major

Deletion-completion polling inherits the silent-timeout risk from waitForGraphs.

If the graph is not removed within 30 seconds, waitForGraphs returns the still-populated list and the subsequent expect(graphsList.length).toBe(initialCount - 1) assertion fails with a misleading count diff rather than a clear "deletion timed out" message. Once waitForGraphs is fixed to reject on timeout (the root issue flagged in e2e/logic/api/apiCalls.ts), this will improve automatically.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/tests/database.spec.ts` around lines 363 - 368, The deletion polling here
relies on waitForGraphs; update the test to detect a timeout and fail with a
clear message instead of relying on the final length assertion: after calling
apiCall.waitForGraphs((graphs) => !graphs.some((id) => id === graphId), 30000)
check whether the returned graphsList still contains graphId or whether
waitForGraphs signaled a timeout and, if so, throw an explicit test error like
"deletion of graphId timed out after 30s" (reference graphsList, graphId,
initialCount and the expect(...).toBe(...) assertion) so failures show a clear
timed-out message rather than a misleading count difference.
e2e/logic/api/apiCalls.ts (2)

282-310: ⚠️ Potential issue | 🟠 Major

waitForGraphs silently returns stale data on timeout instead of rejecting.

When the predicate is never satisfied the helper returns the last observed list unchanged, making it impossible for callers to distinguish a successful wait from a timed-out one. Higher-level assertions (e.g., expect(graphsList.length).toBe(initialCount - 1)) then produce vague diff failures with no indication that a timeout occurred.

Additionally, the timeout log at line 307 dumps the full JSON.stringify(lastGraphs) array to stdout on every CI flake, which leaks internal graph/database IDs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/logic/api/apiCalls.ts` around lines 282 - 310, The waitForGraphs helper
currently returns stale lastGraphs on timeout and logs the full JSON, making
timeouts indistinguishable and leaking IDs; update waitForGraphs (the async
function that calls getGraphs with predicate, timeoutMs and pollIntervalMs) to
throw/reject a descriptive Error when the deadline elapses instead of returning
lastGraphs, include in the error a concise, non-sensitive summary (e.g.,
timeoutMs and lastGraphs.length or a sanitized summary) so callers can detect
timeouts, and remove the JSON.stringify(lastGraphs) dump from the timeout log
(or replace it with the non-sensitive summary) to avoid leaking internal IDs.

316-349: ⚠️ Potential issue | 🟠 Major

connectDatabaseWithRetry swallows all errors and returns an empty array after exhaustion.

After all attempts fail, lastMessages may still be []. Callers immediately index into it with messages[messages.length - 1] (e.g., database.spec.ts line 33/85/329/380) and the original error that caused every attempt to fail is never surfaced, making CI failures very hard to diagnose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/logic/api/apiCalls.ts` around lines 316 - 349, connectDatabaseWithRetry
currently swallows all errors and can return an empty lastMessages array,
causing callers to index into an empty array and losing the original error;
capture and rethrow the last thrown error (or throw a new Error that includes
the last error message and any lastMessages content) after all retries fail.
Specifically, inside connectDatabaseWithRetry add a variable (e.g., lastError)
to store the caught error from each catch block (reference connectDatabase and
parseStreamingResponse where the error originates), and after the retry loop, if
lastMessages is empty or final_result was never returned, throw lastError (or
throw new Error(`connectDatabaseWithRetry failed after ${maxAttempts} attempts:
${lastError?.message} ; lastMessages: ${JSON.stringify(lastMessages)}`)) instead
of returning lastMessages so callers (e.g., database.spec.ts) receive the
original failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@e2e/logic/api/apiCalls.ts`:
- Around line 261-268: The current parsing/logging in parseStreamingResponse
logs full StreamMessage objects via JSON.stringify(errorMessages), which can
leak sensitive fields (content, sql_query, data); change the logging to only
emit non-sensitive metadata: map errorMessages to objects containing type and a
short truncated excerpt (e.g., first N chars) of message/text, or completely
omit content unless a CI_VERBOSE or DEBUG env flag is set; update the log site
that references errorMessages in parseStreamingResponse to use this redacted
mapping (reference StreamMessage shape in e2e/logic/api/apiResponses.ts) and
gate verbose dumps behind the env flag.

---

Outside diff comments:
In `@e2e/tests/database.spec.ts`:
- Around line 354-360: The PostgreSQL delete test is missing the UI readiness
wait used in the MySQL test; before calling HomePage.clickOnDatabaseSelector()
in the block that creates homePage (created via browser.createNewPage(HomePage,
getBaseUrl())), call and await waitForDatabaseConnection(90000) to ensure the
schema and selector are loaded, then proceed to clickOnDatabaseSelector() and
clickOnDeleteGraph(graphId!); this mirrors the MySQL delete test flow and
prevents flakiness.

---

Duplicate comments:
In `@e2e/logic/api/apiCalls.ts`:
- Around line 282-310: The waitForGraphs helper currently returns stale
lastGraphs on timeout and logs the full JSON, making timeouts indistinguishable
and leaking IDs; update waitForGraphs (the async function that calls getGraphs
with predicate, timeoutMs and pollIntervalMs) to throw/reject a descriptive
Error when the deadline elapses instead of returning lastGraphs, include in the
error a concise, non-sensitive summary (e.g., timeoutMs and lastGraphs.length or
a sanitized summary) so callers can detect timeouts, and remove the
JSON.stringify(lastGraphs) dump from the timeout log (or replace it with the
non-sensitive summary) to avoid leaking internal IDs.
- Around line 316-349: connectDatabaseWithRetry currently swallows all errors
and can return an empty lastMessages array, causing callers to index into an
empty array and losing the original error; capture and rethrow the last thrown
error (or throw a new Error that includes the last error message and any
lastMessages content) after all retries fail. Specifically, inside
connectDatabaseWithRetry add a variable (e.g., lastError) to store the caught
error from each catch block (reference connectDatabase and
parseStreamingResponse where the error originates), and after the retry loop, if
lastMessages is empty or final_result was never returned, throw lastError (or
throw new Error(`connectDatabaseWithRetry failed after ${maxAttempts} attempts:
${lastError?.message} ; lastMessages: ${JSON.stringify(lastMessages)}`)) instead
of returning lastMessages so callers (e.g., database.spec.ts) receive the
original failure.

In `@e2e/tests/database.spec.ts`:
- Around line 363-368: The deletion polling here relies on waitForGraphs; update
the test to detect a timeout and fail with a clear message instead of relying on
the final length assertion: after calling apiCall.waitForGraphs((graphs) =>
!graphs.some((id) => id === graphId), 30000) check whether the returned
graphsList still contains graphId or whether waitForGraphs signaled a timeout
and, if so, throw an explicit test error like "deletion of graphId timed out
after 30s" (reference graphsList, graphId, initialCount and the
expect(...).toBe(...) assertion) so failures show a clear timed-out message
rather than a misleading count difference.
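A minimal polling helper with an explicit timeout failure, in the spirit of this prompt. `pollUntil` and its parameters are hypothetical, not the repository's waitForGraphs:

```typescript
// Sketch: poll fetchFn until predicate passes; on deadline, throw a
// descriptive error instead of returning the last (possibly stale) value,
// so a timed-out deletion check fails with a clear message.
async function pollUntil<T>(
  fetchFn: () => Promise<T>,
  predicate: (value: T) => boolean,
  timeoutMs: number,
  pollIntervalMs: number = 500
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await fetchFn();
    if (predicate(value)) return value;
    if (Date.now() >= deadline) {
      throw new Error(`[pollUntil] predicate not satisfied after ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
}
```

A deletion wait would then read, e.g., `await pollUntil(() => apiCall.getGraphs(), (graphs) => !graphs.includes(graphId), 30000)` and fail loudly on timeout rather than producing a misleading count assertion.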

In `@Pipfile`:
- Around line 25-26: Update the pytest-playwright dependency to match the
Playwright bump by changing "pytest-playwright" from "~=0.7.1" to "~=0.7.2" in
the Pipfile and then regenerate the Pipfile.lock (e.g., run pipenv lock) so the
lockfile hashes and resolved versions are consistent with the updated
"playwright" and "pytest-playwright" entries.
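The resulting dev-dependency entries would look roughly like this (a sketch; surrounding Pipfile content omitted):

```toml
[dev-packages]
# Keep the plugin in lockstep with the bumped Playwright version,
# then run `pipenv lock` to regenerate Pipfile.lock.
playwright = "~=1.58.0"
pytest-playwright = "~=0.7.2"
```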

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aa7d64c and 2e1f439.

⛔ Files ignored due to path filters (3)
  • Pipfile.lock is excluded by !**/*.lock
  • app/package-lock.json is excluded by !**/package-lock.json
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (4)
  • Pipfile
  • app/package.json
  • e2e/logic/api/apiCalls.ts
  • e2e/tests/database.spec.ts

Comment thread e2e/logic/api/apiCalls.ts
Comment on lines +261 to +268
// Log error messages to help diagnose CI failures
const errorMessages = messages.filter((m) => m.type === "error");
if (errorMessages.length > 0) {
console.log(
`[parseStreamingResponse] HTTP status: ${response.status()}, error messages received:`,
JSON.stringify(errorMessages)
);
}

⚠️ Potential issue | 🟡 Minor

Error-message logging may expose sensitive payload data in CI logs.

JSON.stringify(errorMessages) serializes the full StreamMessage objects, which include content, message, sql_query, data, and other fields that may carry PII or sensitive query results (see e2e/logic/api/apiResponses.ts lines 87-109). Consider logging only the type and—if necessary—a truncated excerpt of message, or gate verbose output behind an environment flag.

♻️ Proposed safer log
-      console.log(
-        `[parseStreamingResponse] HTTP status: ${response.status()}, error messages received:`,
-        JSON.stringify(errorMessages)
-      );
+      console.log(
+        `[parseStreamingResponse] HTTP status: ${response.status()}, ${errorMessages.length} error message(s):`,
+        errorMessages.map((m) => ({ type: m.type, message: m.message?.slice(0, 200) }))
+      );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/logic/api/apiCalls.ts` around lines 261 - 268, The current
parsing/logging in parseStreamingResponse logs full StreamMessage objects via
JSON.stringify(errorMessages), which can leak sensitive fields (content,
sql_query, data); change the logging to only emit non-sensitive metadata: map
errorMessages to objects containing type and a short truncated excerpt (e.g.,
first N chars) of message/text, or completely omit content unless a CI_VERBOSE
or DEBUG env flag is set; update the log site that references errorMessages in
parseStreamingResponse to use this redacted mapping (reference StreamMessage
shape in e2e/logic/api/apiResponses.ts) and gate verbose dumps behind the env
flag.

* Return generic 400 for RequestValidationError instead of Pydantic details

Add a global RequestValidationError exception handler that returns
{"detail": "Bad request"} with status 400, preventing internal
Pydantic validation details from leaking to clients. This primarily
affects the SPA catch-all proxy route when accessed without the
expected path parameter.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Scope validation handler to SPA catch-all, add logging, fix tests

Address PR review feedback:
- Scope the generic 400 handler to only the SPA catch-all route
  (query._full_path errors) so API consumers still get useful 422
  responses with field-level detail
- Add logging.warning of validation details for server-side debugging
- Make test assertions unconditional instead of guarding behind
  status-code checks
- Add test verifying API routes preserve 422 with field-level info

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix SPA catch-all route parameter name mismatch

The function parameter `_full_path` didn't match the URL template
`{full_path:path}`, causing FastAPI to treat it as a required query
parameter and return 422 for every non-API route.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove validation error handler workaround

The handler was masking a parameter name mismatch in the catch-all
route. Now that the root cause is fixed, the handler, its import,
pylint suppression, and test file are no longer needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Suppress pylint unused-argument for catch-all route parameter

The parameter name must match the URL template to avoid validation
errors, but the function body doesn't use it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 February 26, 2026 11:41 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging February 26, 2026 11:41 Inactive
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 February 26, 2026 12:04 Destroyed
… pages

- Add defaultRequestContext field to ApiCalls class, set via constructor
- All API methods now use the default context for auth (session cookies + CSRF)
- Tests use Playwright's request fixture which inherits storageState from config
- Pass storageState path to BrowserWrapper.createNewPage for authenticated browser sessions
- Revert outer test.describe.serial() to test.describe() to prevent cascade failures
  (inner Database Deletion Tests remain serial as needed)

Fixes unauthenticated API requests that caused 401 errors in Firefox E2E tests.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 February 26, 2026 13:09 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging February 26, 2026 13:09 Inactive

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

♻️ Duplicate comments (3)
e2e/logic/api/apiCalls.ts (3)

290-318: ⚠️ Potential issue | 🟠 Major

Timeout path should fail explicitly instead of returning ambiguous data.

When the timeout is hit, Line [317] returns lastGraphs as though the call had succeeded, so callers cannot reliably distinguish a timeout from a valid terminal state.

💡 Suggested fix
-    console.log(
-      `[waitForGraphs] timed out after ${timeoutMs}ms. Last graphs: ${JSON.stringify(lastGraphs)}`
-    );
-    return lastGraphs;
+    throw new Error(
+      `[waitForGraphs] timed out after ${timeoutMs}ms. Last observed graphs count: ${lastGraphs.length}`
+    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/logic/api/apiCalls.ts` around lines 290 - 318, The timeout path in
waitForGraphs currently returns lastGraphs making callers unable to tell if the
predicate succeeded or a timeout occurred; update waitForGraphs (the async
function) to throw an explicit Error when the deadline is exceeded instead of
returning lastGraphs—include contextual details (timeoutMs and a
JSON.stringify(lastGraphs) snapshot) in the error message so callers can
distinguish timeout failures from valid responses while preserving existing
logging.

269-275: ⚠️ Potential issue | 🟡 Minor

Avoid logging full streaming error payloads in CI.

Line [274] serializes full StreamMessage error objects, which can expose sensitive content fields. Log only minimal metadata (type/count, truncated message).

💡 Suggested fix
       if (errorMessages.length > 0) {
         console.log(
-          `[parseStreamingResponse] HTTP status: ${response.status()}, error messages received:`,
-          JSON.stringify(errorMessages)
+          `[parseStreamingResponse] HTTP status: ${response.status()}, ${errorMessages.length} error message(s):`,
+          errorMessages.map((m) => ({ type: m.type, message: m.message?.slice(0, 200) }))
         );
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/logic/api/apiCalls.ts` around lines 269 - 275, The current
parseStreamingResponse logging prints full serialized StreamMessage error
objects which may leak sensitive payloads; update the logging in
parseStreamingResponse to avoid serializing entire messages and instead log
minimal metadata: include response.status(), errorMessages.length, and for each
error message log only its type and a truncated preview of its content (e.g.,
first 100–200 chars) or a single truncated message sample; reference the local
variables messages and errorMessages and replace JSON.stringify(errorMessages)
with constructing a safe summary (type/count + truncated text) before calling
console.log.

324-357: ⚠️ Potential issue | 🟠 Major

Do not swallow terminal connect failures after retries are exhausted.

After the final attempt, Line [356] returns lastMessages instead of throwing, which hides the root cause and lets callers continue under invalid assumptions.

💡 Suggested fix
   async connectDatabaseWithRetry(
     connectionUrl: string,
     maxAttempts: number = 3,
     retryDelayMs: number = 3000
   ): Promise<StreamMessage[]> {
     let lastMessages: StreamMessage[] = [];
+    let lastError: Error | undefined;
     for (let attempt = 1; attempt <= maxAttempts; attempt++) {
       try {
         const response = await this.connectDatabase(connectionUrl);
         const messages = await this.parseStreamingResponse(response);
         const finalMessage = messages[messages.length - 1];
         if (finalMessage && finalMessage.type === "final_result") {
           return messages;
         }
         console.log(
           `[connectDatabaseWithRetry] attempt ${attempt}/${maxAttempts} did not return final_result.`,
           `Last message: ${JSON.stringify(finalMessage)}`
         );
         lastMessages = messages;
       } catch (err) {
+        lastError = err as Error;
         console.error(
           `[connectDatabaseWithRetry] attempt ${attempt}/${maxAttempts} threw an error:`,
           (err as Error).message
         );
       }
       if (attempt < maxAttempts) {
         await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
       }
     }
-    console.log(
-      `[connectDatabaseWithRetry] all ${maxAttempts} attempts exhausted. Last messages: ${JSON.stringify(lastMessages)}`
-    );
-    return lastMessages;
+    throw new Error(
+      `[connectDatabaseWithRetry] all ${maxAttempts} attempts exhausted.` +
+      (lastError ? ` Last error: ${lastError.message}` : ` Last messages: ${JSON.stringify(lastMessages)}`)
+    );
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/logic/api/apiCalls.ts` around lines 324 - 357, connectDatabaseWithRetry
currently returns lastMessages after all retries, which swallows terminal
failures; update the function (connectDatabaseWithRetry) to throw a meaningful
Error when no final_result is received after maxAttempts instead of returning
lastMessages. Track the last caught error (e.g., lastError) and include either
lastError.message or a summary of lastMessages in the thrown Error so callers
can handle failures explicitly; ensure the thrown Error contains context like
attempt count and the last response details.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/app_factory.py`:
- Around line 90-94: The current CSRF bypass checks skip protection solely when
an Authorization: Bearer header exists; change the condition so that
Bearer-based exemption only applies when there is no browser session cookie
present. Update the if in the CSRF check (the block using SAFE_METHODS,
EXEMPT_PREFIXES and request.headers.get("authorization")) to also require that
request has no cookies (e.g., request.headers.get("cookie") is empty) or
specifically lacks your session cookie name (e.g., "session" / "sessionid" /
"auth"), so that requests with both a Bearer header and a session cookie do not
bypass CSRF; keep the rest of the logic (SAFE_METHODS, EXEMPT_PREFIXES)
unchanged.
- Around line 61-64: The current check uses exact equality on the raw header
value from request.headers.get("x-forwarded-proto"), which fails for case
variants or comma-separated values; update the logic around the forwarded_proto
variable so you normalize and robustly parse the header (e.g., if
forwarded_proto exists, convert to lowercase, split on commas, strip whitespace
and use the first non-empty token) and then compare that token to "https"; keep
the existing fallback to request.url.scheme == "https" when the header is
missing.

In `@app/src/lib/csrf.ts`:
- Line 28: The call to decodeURIComponent in the get-cookie logic can throw on
malformed cookie values and cause csrfHeaders() to fail; wrap the
decodeURIComponent(match[1]) usage in a safe guard (e.g., try/catch) inside the
function that extracts cookies so that if decoding throws you fall back to the
raw match[1] or an empty string and return that instead, ensuring csrfHeaders()
continues to work; update the function that currently returns
decodeURIComponent(match[1]) to catch decode errors and return a safe fallback.

In `@e2e/infra/api/apiRequests.ts`:
- Around line 22-23: The CSRF cache currently keyed only by csrfCache
(WeakMap<APIRequestContext, string>) must be changed to store tokens per-origin
so contexts reused across origins don't mix tokens: replace csrfCache with a
WeakMap<APIRequestContext, Map<string, string>> (or WeakMap -> Map keyed by
origin string) and update all places that read/write using originOf(url)
(references: csrfCache, originOf(url), and the functions that set/get the token
at the spots noted around lines 29-30 and 37-38) to first get or create the
inner Map for the APIRequestContext and then get/set the token by origin. Ensure
reads use innerMap.get(origin) and writes use innerMap.set(origin, token).
- Around line 11-13: The extracted CSRF cookie value is returned raw from the
regex match (match[1]) which can be URL-encoded; update the extraction to decode
the cookie before returning by applying a URL-decoding step (e.g.,
decodeURIComponent) to the regex capture result so the returned CSRF token used
in headers is the decoded value; locate the cookie extraction logic where
header.match(/csrf_token=([^;]+)/) is used and replace the raw return of
match[1] with the decoded version.

In `@e2e/tests/database.spec.ts`:
- Around line 35-36: Reduce verbose CI logging by redacting full payloads:
replace console.log calls that print objects like finalMessage and graphsList
with logs that print counts, IDs, or truncated summaries (e.g., finalMessage.id,
finalMessage.type, graphsList.length, map of graph IDs) and gate detailed dumps
behind a debug flag; update the console.log statements referencing finalMessage
and graphsList in database.spec.ts (all occurrences noted around the existing
prints) to only output non-sensitive, minimal fields or short substrings.

---

Duplicate comments:
In `@e2e/logic/api/apiCalls.ts`:
- Around line 290-318: The timeout path in waitForGraphs currently returns
lastGraphs making callers unable to tell if the predicate succeeded or a timeout
occurred; update waitForGraphs (the async function) to throw an explicit Error
when the deadline is exceeded instead of returning lastGraphs—include contextual
details (timeoutMs and a JSON.stringify(lastGraphs) snapshot) in the error
message so callers can distinguish timeout failures from valid responses while
preserving existing logging.
- Around line 269-275: The current parseStreamingResponse logging prints full
serialized StreamMessage error objects which may leak sensitive payloads; update
the logging in parseStreamingResponse to avoid serializing entire messages and
instead log minimal metadata: include response.status(), errorMessages.length,
and for each error message log only its type and a truncated preview of its
content (e.g., first 100–200 chars) or a single truncated message sample;
reference the local variables messages and errorMessages and replace
JSON.stringify(errorMessages) with constructing a safe summary (type/count +
truncated text) before calling console.log.
- Around line 324-357: connectDatabaseWithRetry currently returns lastMessages
after all retries, which swallows terminal failures; update the function
(connectDatabaseWithRetry) to throw a meaningful Error when no final_result is
received after maxAttempts instead of returning lastMessages. Track the last
caught error (e.g., lastError) and include either lastError.message or a summary
of lastMessages in the thrown Error so callers can handle failures explicitly;
ensure the thrown Error contains context like attempt count and the last
response details.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2341d4e and 75d5ce3.

⛔ Files ignored due to path filters (2)
  • app/package-lock.json is excluded by !**/package-lock.json
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
  • api/app_factory.py
  • app/package.json
  • app/src/components/modals/DatabaseModal.tsx
  • app/src/lib/csrf.ts
  • app/src/pages/Index.tsx
  • app/src/services/auth.ts
  • app/src/services/chat.ts
  • app/src/services/database.ts
  • app/src/services/tokens.ts
  • e2e/infra/api/apiRequests.ts
  • e2e/logic/api/apiCalls.ts
  • e2e/tests/chat.spec.ts
  • e2e/tests/database.spec.ts
  • tests/test_csrf_middleware.py
✅ Files skipped from review due to trivial changes (1)
  • tests/test_csrf_middleware.py

Comment thread api/app_factory.py
Comment on lines +61 to +64
forwarded_proto = request.headers.get("x-forwarded-proto")
if forwarded_proto:
return forwarded_proto == "https"
return request.url.scheme == "https"

⚠️ Potential issue | 🟠 Major

Harden forwarded-proto parsing for secure-cookie decisions

The x-forwarded-proto header often arrives with varying case or as a comma-separated list of protocols. An exact equality check can mis-detect HTTPS and set CSRF cookies with the wrong secure attribute.

💡 Proposed fix
 def _is_secure_request(request: Request) -> bool:
     """Determine if the request is over HTTPS."""
     forwarded_proto = request.headers.get("x-forwarded-proto")
     if forwarded_proto:
-        return forwarded_proto == "https"
+        proto = forwarded_proto.split(",")[0].strip().lower()
+        return proto == "https"
     return request.url.scheme == "https"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/app_factory.py` around lines 61 - 64, The current check uses exact
equality on the raw header value from request.headers.get("x-forwarded-proto"),
which fails for case variants or comma-separated values; update the logic around
the forwarded_proto variable so you normalize and robustly parse the header
(e.g., if forwarded_proto exists, convert to lowercase, split on commas, strip
whitespace and use the first non-empty token) and then compare that token to
"https"; keep the existing fallback to request.url.scheme == "https" when the
header is missing.

Comment thread api/app_factory.py
Comment on lines +90 to +94
if (
request.method not in self.SAFE_METHODS
and not request.url.path.startswith(self.EXEMPT_PREFIXES)
and not request.headers.get("authorization", "").lower().startswith("bearer ")
):

⚠️ Potential issue | 🟠 Major

Avoid CSRF bypass based only on Authorization: Bearer presence

Skipping CSRF solely because a Bearer header exists allows unsafe requests to bypass CSRF checks even when session cookies are present. This weakens browser-session protection.

💡 Proposed fix
-        if (
+        auth_header = request.headers.get("authorization", "")
+        is_bearer_request = auth_header.lower().startswith("bearer ")
+        has_session_cookie = "session" in request.cookies
+
+        if (
             request.method not in self.SAFE_METHODS
             and not request.url.path.startswith(self.EXEMPT_PREFIXES)
-            and not request.headers.get("authorization", "").lower().startswith("bearer ")
+            and not (is_bearer_request and not has_session_cookie)
         ):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/app_factory.py` around lines 90 - 94, The current CSRF bypass checks skip
protection solely when an Authorization: Bearer header exists; change the
condition so that Bearer-based exemption only applies when there is no browser
session cookie present. Update the if in the CSRF check (the block using
SAFE_METHODS, EXEMPT_PREFIXES and request.headers.get("authorization")) to also
require that request has no cookies (e.g., request.headers.get("cookie") is
empty) or specifically lacks your session cookie name (e.g., "session" /
"sessionid" / "auth"), so that requests with both a Bearer header and a session
cookie do not bypass CSRF; keep the rest of the logic (SAFE_METHODS,
EXEMPT_PREFIXES) unchanged.

Comment thread app/src/lib/csrf.ts
);
return '';
}
return decodeURIComponent(match[1]);

⚠️ Potential issue | 🟡 Minor

Guard cookie decode to prevent request-path crashes

decodeURIComponent can throw on malformed cookie values, which would make csrfHeaders() throw and break state-changing calls unexpectedly.

💡 Proposed fix
-  return decodeURIComponent(match[1]);
+  try {
+    return decodeURIComponent(match[1]);
+  } catch {
+    console.debug(
+      `CSRF token cookie "${CSRF_COOKIE_NAME}" is malformed. Ignoring token.`
+    );
+    return '';
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/lib/csrf.ts` at line 28, The call to decodeURIComponent in the
get-cookie logic can throw on malformed cookie values and cause csrfHeaders() to
fail; wrap the decodeURIComponent(match[1]) usage in a safe guard (e.g.,
try/catch) inside the function that extracts cookies so that if decoding throws
you fall back to the raw match[1] or an empty string and return that instead,
ensuring csrfHeaders() continues to work; update the function that currently
returns decodeURIComponent(match[1]) to catch decode errors and return a safe
fallback.

Comment on lines +11 to +13
const match = header.match(/csrf_token=([^;]+)/);
if (match) return match[1];
}

⚠️ Potential issue | 🟡 Minor

Decode extracted CSRF cookie values before header use.

Line [12] returns the raw cookie value. If the cookie value is URL-encoded, this can produce a mismatch between the header token and the server-side token.

💡 Suggested fix
 function extractCsrfToken(setCookieHeaders: string[]): string | undefined {
   for (const header of setCookieHeaders) {
     const match = header.match(/csrf_token=([^;]+)/);
-    if (match) return match[1];
+    if (match) return decodeURIComponent(match[1]);
   }
   return undefined;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/infra/api/apiRequests.ts` around lines 11 - 13, The extracted CSRF cookie
value is returned raw from the regex match (match[1]) which can be URL-encoded;
update the extraction to decode the cookie before returning by applying a
URL-decoding step (e.g., decodeURIComponent) to the regex capture result so the
returned CSRF token used in headers is the decoded value; locate the cookie
extraction logic where header.match(/csrf_token=([^;]+)/) is used and replace
the raw return of match[1] with the decoded version.

Comment on lines +22 to +23
const csrfCache = new WeakMap<APIRequestContext, string>();


⚠️ Potential issue | 🟠 Major

Scope CSRF cache by origin, not only by request context.

Line [22] caches one token per APIRequestContext, but token retrieval is origin-based (originOf(url)). Reusing one context across multiple origins can send the wrong CSRF token.

💡 Suggested fix
-const csrfCache = new WeakMap<APIRequestContext, string>();
+const csrfCache = new WeakMap<APIRequestContext, Map<string, string>>();

 async function getCsrfToken(baseUrl: string, ctx: APIRequestContext): Promise<string | undefined> {
-  const cached = csrfCache.get(ctx);
-  if (cached) return cached;
+  let perOrigin = csrfCache.get(ctx);
+  if (!perOrigin) {
+    perOrigin = new Map<string, string>();
+    csrfCache.set(ctx, perOrigin);
+  }
+
+  const cached = perOrigin.get(baseUrl);
+  if (cached) return cached;

   const seedResp = await ctx.get(`${baseUrl}/auth-status`);
   const setCookies = seedResp.headersArray()
     .filter(h => h.name.toLowerCase() === 'set-cookie')
     .map(h => h.value);
   const token = extractCsrfToken(setCookies);
-  if (token) csrfCache.set(ctx, token);
+  if (token) perOrigin.set(baseUrl, token);
   return token;
 }

Also applies to: 29-30, 37-38

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/infra/api/apiRequests.ts` around lines 22-23: the CSRF cache currently
keyed only by csrfCache (WeakMap<APIRequestContext, string>) must be changed to
store tokens per-origin so contexts reused across origins don't mix tokens:
replace csrfCache with a WeakMap<APIRequestContext, Map<string, string>> (or
WeakMap -> Map keyed by origin string) and update all places that read/write
using originOf(url) (references: csrfCache, originOf(url), and the functions
that set/get the token at the spots noted around lines 29-30 and 37-38) to first
get or create the inner Map for the APIRequestContext and then get/set the token
by origin. Ensure reads use innerMap.get(origin) and writes use
innerMap.set(origin, token).
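The per-origin cache shape can be sketched independently of Playwright; here `Ctx` stands in for `APIRequestContext`, and the two helper names are illustrative rather than taken from the PR:

```typescript
// Per-origin CSRF token cache: one inner Map per request context, keyed by
// origin string, so a context reused across origins never mixes tokens.
type Ctx = object; // stand-in for Playwright's APIRequestContext

const csrfCache = new WeakMap<Ctx, Map<string, string>>();

function cacheToken(ctx: Ctx, origin: string, token: string): void {
  let perOrigin = csrfCache.get(ctx);
  if (!perOrigin) {
    perOrigin = new Map<string, string>();
    csrfCache.set(ctx, perOrigin);
  }
  perOrigin.set(origin, token);
}

function cachedToken(ctx: Ctx, origin: string): string | undefined {
  return csrfCache.get(ctx)?.get(origin);
}
```

Using a `WeakMap` keyed by the context object keeps the cache from pinning contexts in memory after a test disposes them.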

Comment on lines +35 to +36
console.log(`[PostgreSQL API connect] unexpected final message: ${JSON.stringify(finalMessage)}`);
}

⚠️ Potential issue | 🟡 Minor

Redact verbose CI logs to avoid leaking runtime data.

These lines log full finalMessage/graphsList payloads. Prefer counts, IDs only, or truncated fields behind a debug flag.

💡 Suggested pattern
- console.log(`[PostgreSQL API connect] unexpected final message: ${JSON.stringify(finalMessage)}`);
+ console.log(
+   `[PostgreSQL API connect] unexpected final message type=${finalMessage.type}, success=${Boolean(finalMessage.success)}`
+ );

- console.log(`[PostgreSQL API connect] graphs after connection: ${JSON.stringify(graphsList)}`);
+ console.log(`[PostgreSQL API connect] graph count after connection: ${graphsList.length}`);

Also applies to: 48-48, 87-88, 100-100, 154-154, 193-193, 237-237, 281-281, 331-332, 347-348, 382-383, 396-397

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/tests/database.spec.ts` around lines 35-36: reduce verbose CI logging
by redacting full payloads: replace console.log calls that print objects like
finalMessage and graphsList with logs that print counts, IDs, or truncated
summaries (e.g., finalMessage.id, finalMessage.type, graphsList.length, map of
graph IDs) and gate detailed dumps behind a debug flag; update the console.log
statements referencing finalMessage and graphsList in database.spec.ts (all
occurrences noted around the existing prints) to only output non-sensitive,
minimal fields or short substrings.
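One way to implement the suggested redaction is a small summarizer gated by a debug flag; the helper name and message shape are illustrative, not from the PR:

```typescript
// Summarize a streaming message for CI logs: minimal fields by default,
// full JSON dump only when an explicit debug flag is set.
interface FinalMessage { type?: string; success?: boolean; [key: string]: unknown; }

function summarizeMessage(msg: FinalMessage, debug = false): string {
  if (debug) return JSON.stringify(msg);
  return `type=${msg.type}, success=${Boolean(msg.success)}`;
}
```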

* Bump fastapi from 0.133.0 to 0.135.0

Bumps [fastapi](https://github.com/fastapi/fastapi) from 0.133.0 to 0.135.0.
- [Release notes](https://github.com/fastapi/fastapi/releases)
- [Commits](fastapi/fastapi@0.133.0...0.135.0)

---
updated-dependencies:
- dependency-name: fastapi
  dependency-version: 0.135.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix(e2e): read CSRF token from storageState when Set-Cookie is absent

When the Playwright request fixture is initialised from a storageState
that already carries a csrf_token cookie, the server does not emit a
new Set-Cookie header.  getCsrfToken() would then return undefined,
causing every state-changing API call to fail with 403 'CSRF token
missing or invalid'.

Fall back to reading the token from the context's storageState() when
the Set-Cookie header does not contain it.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Guy Korland <gkorland@gmail.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 March 2, 2026 09:13 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging March 2, 2026 09:13 Inactive

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
e2e/infra/api/apiRequests.ts (2)

9-13: ⚠️ Potential issue | 🟡 Minor

Decode CSRF cookie values before using them as header tokens.

extractCsrfToken returns the raw cookie capture. If the cookie is URL-encoded, header/token matching can fail.

💡 Proposed fix
 function extractCsrfToken(setCookieHeaders: string[]): string | undefined {
   for (const header of setCookieHeaders) {
     const match = header.match(/csrf_token=([^;]+)/);
-    if (match) return match[1];
+    if (match) return decodeURIComponent(match[1]);
   }
   return undefined;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/infra/api/apiRequests.ts` around lines 9-13: extractCsrfToken currently
returns the raw capture from Set-Cookie which may be URL-encoded; update the
function (extractCsrfToken and its use sites) to decode the cookie value before
returning by applying decodeURIComponent to the matched group (and still return
undefined if no match) so header/token comparisons receive the decoded CSRF
token string.

22-23: ⚠️ Potential issue | 🟠 Major

Scope CSRF cache per origin, not only per request context.

A single token per APIRequestContext is unsafe when one context talks to multiple origins.

💡 Proposed fix
-const csrfCache = new WeakMap<APIRequestContext, string>();
+const csrfCache = new WeakMap<APIRequestContext, Map<string, string>>();

 async function getCsrfToken(baseUrl: string, ctx: APIRequestContext): Promise<string | undefined> {
-  const cached = csrfCache.get(ctx);
+  let perOrigin = csrfCache.get(ctx);
+  if (!perOrigin) {
+    perOrigin = new Map<string, string>();
+    csrfCache.set(ctx, perOrigin);
+  }
+
+  const cached = perOrigin.get(baseUrl);
   if (cached) return cached;
 ...
-  if (token) csrfCache.set(ctx, token);
+  if (token) perOrigin.set(baseUrl, token);
   return token;
 }

Also applies to: 34-35, 51-52

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/infra/api/apiRequests.ts` around lines 22-23: the csrfCache currently
maps only APIRequestContext -> string which is unsafe across multiple origins;
change it to map APIRequestContext -> Map<string, string> (or
WeakMap<APIRequestContext, Map<string,string>>) so tokens are stored per-origin.
Update reads/writes that reference csrfCache (the get/set logic around
csrfCache) to accept an origin key (e.g., request.url().origin or derived host)
and use csrfCache.get(ctx)?.get(origin) and csrfCache.get(ctx)?.set(origin,
token), and initialize a new inner Map when missing; adjust all usages flagged
(the csrfCache variable and the access sites around the blocks noted at lines
22-23, 34-35, and 51-52) to use this per-origin inner map.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@e2e/infra/api/apiRequests.ts`:
- Around line 45-48: The fallback lookup for the CSRF token finds the first
cookie named 'csrf_token' from ctx.storageState() without checking its origin,
which can pick a token for the wrong domain; update the logic that sets token
(the block that defines state, existing and reads state.cookies.find(c => c.name
=== 'csrf_token')) to filter cookies by the target request origin/domain
(compare cookie.domain or cookie.origin to the test's base/origin/target URL)
before selecting the cookie so only a cookie matching the request's origin is
used as the csrf_token.

---

Duplicate comments:
In `@e2e/infra/api/apiRequests.ts`:
- Around line 9-13: extractCsrfToken currently returns the raw capture from
Set-Cookie which may be URL-encoded; update the function (extractCsrfToken and
its use sites) to decode the cookie value before returning by applying
decodeURIComponent to the matched group (and still return undefined if no match)
so header/token comparisons receive the decoded CSRF token string.
- Around line 22-23: The csrfCache currently maps only APIRequestContext ->
string which is unsafe across multiple origins; change it to map
APIRequestContext -> Map<string, string> (or WeakMap<APIRequestContext,
Map<string,string>>) so tokens are stored per-origin. Update reads/writes that
reference csrfCache (the get/set logic around csrfCache) to accept an origin key
(e.g., request.url().origin or derived host) and use
csrfCache.get(ctx)?.get(origin) and csrfCache.get(ctx)?.set(origin, token), and
initialize a new inner Map when missing; adjust all usages flagged (the
csrfCache variable and the access sites around the blocks noted at lines 22-23,
34-35, and 51-52) to use this per-origin inner map.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 75d5ce3 and a7b39e6.

⛔ Files ignored due to path filters (1)
  • Pipfile.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • Pipfile
  • e2e/infra/api/apiRequests.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • Pipfile

Comment on lines +45 to +48
  if (!token) {
    const state = await ctx.storageState();
    const existing = state.cookies.find(c => c.name === 'csrf_token');
    if (existing) token = existing.value;

⚠️ Potential issue | 🟠 Major

Match fallback csrf_token cookie to the target origin.

The storageState fallback picks the first cookie named csrf_token without domain filtering, which can select the wrong token in multi-origin contexts.

💡 Proposed fix
   if (!token) {
     const state = await ctx.storageState();
-    const existing = state.cookies.find(c => c.name === 'csrf_token');
+    const { hostname } = new URL(baseUrl);
+    const existing = state.cookies.find(c => {
+      if (c.name !== 'csrf_token') return false;
+      const domain = c.domain.replace(/^\./, '');
+      return hostname === domain || hostname.endsWith(`.${domain}`);
+    });
     if (existing) token = existing.value;
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  if (!token) {
-    const state = await ctx.storageState();
-    const existing = state.cookies.find(c => c.name === 'csrf_token');
-    if (existing) token = existing.value;
+  if (!token) {
+    const state = await ctx.storageState();
+    const { hostname } = new URL(baseUrl);
+    const existing = state.cookies.find(c => {
+      if (c.name !== 'csrf_token') return false;
+      const domain = c.domain.replace(/^\./, '');
+      return hostname === domain || hostname.endsWith(`.${domain}`);
+    });
+    if (existing) token = existing.value;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@e2e/infra/api/apiRequests.ts` around lines 45-48: the fallback lookup for
the CSRF token finds the first cookie named 'csrf_token' from ctx.storageState()
without checking its origin, which can pick a token for the wrong domain; update
the logic that sets token (the block that defines state, existing and reads
state.cookies.find(c => c.name === 'csrf_token')) to filter cookies by the
target request origin/domain (compare cookie.domain or cookie.origin to the
test's base/origin/target URL) before selecting the cookie so only a cookie
matching the request's origin is used as the csrf_token.
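The origin-aware fallback can be sketched as a pure function over the storageState cookie list; the `StoredCookie` shape mirrors Playwright's cookie record, trimmed to the fields used here:

```typescript
// Pick the csrf_token cookie whose domain matches the target hostname,
// honoring the leading-dot convention for subdomain cookies.
interface StoredCookie { name: string; value: string; domain: string; }

function findCsrfCookie(cookies: StoredCookie[], baseUrl: string): StoredCookie | undefined {
  const { hostname } = new URL(baseUrl);
  return cookies.find(c => {
    if (c.name !== 'csrf_token') return false;
    const domain = c.domain.replace(/^\./, '');
    return hostname === domain || hostname.endsWith(`.${domain}`);
  });
}
```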

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 6 to 7.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](actions/upload-artifact@v6...v7)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Guy Korland <gkorland@gmail.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 March 2, 2026 09:25 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging March 2, 2026 09:25 Inactive
* Bump the npm-minor-patch group in /app with 5 updates

Bumps the npm-minor-patch group in /app with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [@falkordb/canvas](https://github.com/FalkorDB/falkordb-canvas) | `0.0.40` | `0.0.41` |
| [@tanstack/react-query](https://github.com/TanStack/query/tree/HEAD/packages/react-query) | `5.90.19` | `5.90.21` |
| [preact](https://github.com/preactjs/preact) | `10.28.3` | `10.28.4` |
| [react-hook-form](https://github.com/react-hook-form/react-hook-form) | `7.71.1` | `7.71.2` |
| [autoprefixer](https://github.com/postcss/autoprefixer) | `10.4.23` | `10.4.27` |


Updates `@falkordb/canvas` from 0.0.40 to 0.0.41
- [Release notes](https://github.com/FalkorDB/falkordb-canvas/releases)
- [Commits](FalkorDB/falkordb-canvas@v0.0.40...v0.0.41)

Updates `@tanstack/react-query` from 5.90.19 to 5.90.21
- [Release notes](https://github.com/TanStack/query/releases)
- [Changelog](https://github.com/TanStack/query/blob/main/packages/react-query/CHANGELOG.md)
- [Commits](https://github.com/TanStack/query/commits/@tanstack/react-query@5.90.21/packages/react-query)

Updates `preact` from 10.28.3 to 10.28.4
- [Release notes](https://github.com/preactjs/preact/releases)
- [Commits](preactjs/preact@10.28.3...10.28.4)

Updates `react-hook-form` from 7.71.1 to 7.71.2
- [Release notes](https://github.com/react-hook-form/react-hook-form/releases)
- [Changelog](https://github.com/react-hook-form/react-hook-form/blob/master/CHANGELOG.md)
- [Commits](react-hook-form/react-hook-form@v7.71.1...v7.71.2)

Updates `autoprefixer` from 10.4.23 to 10.4.27
- [Release notes](https://github.com/postcss/autoprefixer/releases)
- [Changelog](https://github.com/postcss/autoprefixer/blob/main/CHANGELOG.md)
- [Commits](postcss/autoprefixer@10.4.23...10.4.27)

---
updated-dependencies:
- dependency-name: "@falkordb/canvas"
  dependency-version: 0.0.41
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: npm-minor-patch
- dependency-name: "@tanstack/react-query"
  dependency-version: 5.90.21
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: npm-minor-patch
- dependency-name: preact
  dependency-version: 10.28.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: npm-minor-patch
- dependency-name: react-hook-form
  dependency-version: 7.71.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: npm-minor-patch
- dependency-name: autoprefixer
  dependency-version: 10.4.27
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm-minor-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update root package-lock.json for app dependency bumps

The root package-lock.json must be kept in sync with app/package.json
changes since root package.json references app via file: protocol.
Without this update, npm ci at the root fails with lockfile mismatch.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Guy Korland <gkorland@gmail.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* perf(ci): accelerate Playwright CI from ~16min to ~5min

- Increase CI workers from 1 to 4 (matches ubuntu-latest vCPUs)
- Skip Firefox in CI, run Chromium only (halves test count)
- Reduce retries from 2 to 1 (still catches transient failures)
- Add pip, npm, and Playwright browser caching
- Replace hardcoded sleep 20 with health-check polling
- Install only Chromium browser (not Firefox) in CI

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix(ci): fix YAML indentation and use docker compose --wait

Replace inline Python health-check with docker compose --wait flag
which natively waits for service healthchecks to pass.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix(ci): remove pip cache (incompatible with pipenv setup)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix(ci): keep 2 retries for flaky AI-dependent chat tests

Chat tests that interact with the AI processing endpoint need 2 retries
to handle intermittent timeouts, especially under parallel execution.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix(ci): key npm cache on both root and app lockfiles

The setup-node npm cache was only keyed on the root package-lock.json.
Add cache-dependency-path to include app/package-lock.json so the cache
invalidates when frontend dependencies change.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix(ci): add pip caching with Pipfile.lock dependency path

The setup-python cache: 'pip' was removed earlier because it failed
without cache-dependency-path (defaults to requirements*.txt). Re-add
it with cache-dependency-path: Pipfile.lock so pip downloads are cached
between runs.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: update comment to reflect hard-coded worker count

The comment said 'Use all available vCPUs' but the config hard-codes 4
workers. Update to accurately describe the intentional pinning.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 March 2, 2026 11:22 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging March 2, 2026 11:22 Inactive
Bumps [litellm](https://github.com/BerriAI/litellm) from 1.81.15 to 1.82.0.
- [Release notes](https://github.com/BerriAI/litellm/releases)
- [Commits](https://github.com/BerriAI/litellm/commits)

---
updated-dependencies:
- dependency-name: litellm
  dependency-version: 1.82.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Guy Korland <gkorland@gmail.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 March 2, 2026 11:23 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging March 2, 2026 11:23 Inactive
* Bump the npm_and_yarn group across 1 directory with 2 updates

Bumps the npm_and_yarn group with 2 updates in the /app directory: [minimatch](https://github.com/isaacs/minimatch) and [rollup](https://github.com/rollup/rollup).


Updates `minimatch` from 3.1.2 to 3.1.5
- [Changelog](https://github.com/isaacs/minimatch/blob/main/changelog.md)
- [Commits](isaacs/minimatch@v3.1.2...v3.1.5)

Updates `rollup` from 4.55.1 to 4.59.0
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](rollup/rollup@v4.55.1...v4.59.0)

---
updated-dependencies:
- dependency-name: minimatch
  dependency-version: 3.1.5
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: rollup
  dependency-version: 4.59.0
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>

* ci: retrigger CI after transient test failure

The previous Playwright test run had database connectivity issues in CI
(Docker container readiness timing). All infrastructure steps passed but
database connection tests returned success:false. Retriggering to verify.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Guy Korland <gkorland@gmail.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 March 2, 2026 11:29 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging March 2, 2026 11:29 Inactive
* Initial plan

* chore: bump version from 0.0.14 to 0.1.0

Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: gkorland <753206+gkorland@users.noreply.github.com>
Co-authored-by: Guy Korland <gkorland@gmail.com>
@railway-app railway-app bot temporarily deployed to queryweaver / QueryWeaver-pr-438 March 2, 2026 12:04 Destroyed
@railway-app railway-app bot temporarily deployed to queryweaver / staging March 2, 2026 12:04 Inactive
@gkorland gkorland merged commit 87e7a3c into main Mar 2, 2026
19 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Mar 8, 2026