add toolbar to mobile
Walkthrough
Project-wide migration to a structured logging system via a centralized get_logger, replacing standard logging across API modules. Adds a structlog dependency. Refactors the query_graph streaming flow with stepwise execution, timeout handling, destructive-operation confirmation, and schema refresh events. The frontend introduces a collapsible left toolbar with a burger toggle, mobile-responsive CSS, and a small JS API.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant C as Client
participant API as API (query_graph)
participant LLM as Relevancy/SQL Agents
participant DB as DB Loader
participant RF as ResponseFormatterAgent
participant S as Event Stream
C->>API: POST /graphs/{id}/query (stream)
API-->>S: stream: step_started (analyze query)
API->>LLM: Find relevant tables/columns (async via thread)
alt completes <= 120s
LLM-->>API: Relevant tables/columns
API-->>S: stream: step_done (analyze, seconds)
else timeout
API-->>S: stream: error (timeout)
API-->>C: terminate stream
end
API-->>S: stream: step_started (generate SQL)
API->>LLM: Propose SQL for query
LLM-->>API: SQL + op_type
alt destructive op
API-->>S: stream: destructive_confirmation {sql_query, operation_type}
note over C,API: Client must confirm via separate endpoint
end
API-->>S: stream: step_done (sql, seconds)
API-->>S: stream: step_started (execute SQL)
API->>DB: execute_sql_query(sql)
DB-->>API: rows | error
API-->>S: stream: step_done (execution, seconds)
opt schema-modifying SQL
API-->>S: stream: schema_refresh: started
API->>DB: refresh_graph_schema()
DB-->>API: ok | error
API-->>S: stream: schema_refresh: {status}
end
API-->>S: stream: step_started (format AI response)
API->>RF: format(rows, context)
RF-->>API: ai_response
API-->>S: stream: ai_response
API-->>C: stream complete
sequenceDiagram
autonumber
participant U as User
participant BTN as Burger Button
participant LT as Left Toolbar Script
participant DOM as Document/Body
U->>BTN: click / pointerdown
BTN->>LT: toggle()
alt opening
LT->>DOM: remove .collapsed on #left-toolbar
LT->>DOM: add .left-toolbar-open on body
LT->>BTN: set aria-expanded="true" title="Close toolbar"
else closing
LT->>DOM: add .collapsed on #left-toolbar
LT->>DOM: remove .left-toolbar-open on body
LT->>BTN: set aria-expanded="false" title="Open toolbar"
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Dependency Review
The following issues were found:
- License issues: Pipfile
- OpenSSF Scorecard
- Scanned Files
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
api/loaders/mysql_loader.py (1)
519-525: Runtime bug: invalid PyMySQL cursor usage (dictionary=True).
`pymysql.connect().cursor()` doesn't accept `dictionary=True`; this will raise a `TypeError`. Use `DictCursor` (imported above) or pass `cursorclass=DictCursor` to `connect()`. Apply one of the fixes (option A preferred):
```diff
- conn = pymysql.connect(**conn_params)
- cursor = conn.cursor(dictionary=True)
+ conn = pymysql.connect(**conn_params, cursorclass=DictCursor)
+ cursor = conn.cursor()
```

Or:

```diff
- conn = pymysql.connect(**conn_params)
- cursor = conn.cursor(dictionary=True)
+ conn = pymysql.connect(**conn_params)
+ cursor = conn.cursor(DictCursor)
```

api/routes/graphs.py (1)
51-60: Mutable default in Pydantic model
`ConfirmRequest.chat` defaults to `[]`, which is mutable. Prefer `Field(default_factory=list)` to avoid shared state across instances.

```diff
-from pydantic import BaseModel
+from pydantic import BaseModel, Field
@@
 class ConfirmRequest(BaseModel):
@@
-    chat: list = []
+    chat: list = Field(default_factory=list)
```
♻️ Duplicate comments (3)
api/graph.py (1)
10-14: Import will fail until logging_config is added.
This shares the same root cause as oauth_handlers.py. Once `api/logging_config.py` is added, this import path is fine.
After adding the module, re-run Pylint on this file to ensure E0401/E0611 clear.
api/loaders/mysql_loader.py (1)
12-16: Same import error as elsewhere—needs api/logging_config present.
Once the logging config module is added, this resolves.
api/routes/auth.py (1)
15-23: Same import dependency—requires api/logging_config module.
This will pass once `api/logging_config.py` is added.
🧹 Nitpick comments (26)
api/auth/oauth_handlers.py (1)
35-36: Consider adding minimal context fields.
Include `provider_user_id` (if present) and `has_email=bool(email)` rather than raw email to avoid PII while keeping diagnostics useful. Example:

```diff
- logger.error("Missing required fields from Google OAuth response")
+ logger.error("Missing required fields from Google OAuth response",
+              has_email=bool(email), provider_user_id=user_id is not None)
```

Also applies to: 63-64
api/graph.py (1)
106-109: Minor: enrich with counts for quick triage.
Counts are cheap and useful in logs; avoids dumping large arrays unless needed.

```diff
- logger.info("Extracting tables by sphere")
+ logger.info("Extracting tables by sphere", base_tables_count=len(base_tables_names))
  ...
- logger.info("Extracting tables by connecting routes", tables=base_tables_names)
+ logger.info("Extracting tables by connecting routes",
+             tables_count=len(base_tables_names))
```

api/loaders/mysql_loader.py (3)
417-421: Nit: tighten the log message.
The message ends with "for" and relies on a field to complete the sentence. Prefer a self-contained message plus fields.

```diff
- logger.info(
-     "Schema modification detected. Refreshing graph schema for",
-     graph_id=graph_id,
- )
+ logger.info("Schema modification detected; refreshing graph schema",
+             graph_id=graph_id)
```
443-446: Avoid duplicate error logs; keep one structured error.
Currently you log the failure, then log a generic error again. Keep a single, structured log with full context.

```diff
- if success:
-     logger.info("Graph schema refreshed successfully.")
-     return True, message
- logger.error("Schema refresh failed for graph", graph_id=graph_id, error=message)
- return False, "Failed to reload schema"
+ if success:
+     logger.info("Graph schema refreshed successfully", graph_id=graph_id)
+     return True, message
+ logger.error("Schema refresh failed", graph_id=graph_id, error=message)
+ return False, "Failed to reload schema"
@@
- except Exception as e:
-     # Log the error and return failure
-     logger.error("Error refreshing graph schema", error=str(e))
-     error_msg = "Error refreshing graph schema"
-     logger.error(error_msg)
-     return False, error_msg
+ except Exception:
+     # Log the error and return failure
+     logger.exception("Error refreshing graph schema", graph_id=graph_id)
+     return False, "Error refreshing graph schema"
```

Also applies to: 450-453
125-136: URL parser is strict; consider accepting `mysql+pymysql://`.
A lot of clients use SQLAlchemy-style URLs. Optionally accept and normalize `mysql+pymysql://` to `mysql://`. Example:

```diff
- if connection_url.startswith('mysql://'):
-     url = connection_url[8:]
+ for prefix in ('mysql://', 'mysql+pymysql://'):
+     if connection_url.startswith(prefix):
+         url = connection_url[len(prefix):]
+         break
+ else:
+     raise ValueError(
+         "Invalid MySQL URL format. Expected "
+         "mysql://username:password@host:port/database"
+     )
```

api/routes/auth.py (2)
108-111: Minor: consolidate warning strings and add a hint URL when possible.
No functional issue. Consider including `request.base_url` as a field and tightening text.

```diff
- logger.warning(
-     "OAUTH_BASE_URL not set and base URL is 127.0.0.1; set OAUTH_BASE_URL to"
-     " avoid redirect mismatch."
- )
+ logger.warning("OAUTH_BASE_URL not set; redirect may mismatch",
+                base_url=str(request.base_url))
```

Also applies to: 168-172
270-271: Log field, not prose.
Make this informational log more queryable by avoiding a full sentence.

```diff
- logger.info("GitHub token found, clearing from session (no remote revoke available).")
+ logger.info("Clearing GitHub token from session", remote_revoke=False)
```

api/auth/user_management.py (3)
36-36: PII in logs: consider redacting email
Logging raw emails can be sensitive. Prefer masking (e.g., `a***@domain.com`) or hashing. I can add a tiny helper, e.g., `mask_email(email)`, and use `email=mask_email(email)` in logs.
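A minimal sketch of such a helper (hypothetical — `mask_email` does not yet exist in the codebase, and the masking format is just the one suggested above):

```python
def mask_email(email: str) -> str:
    """Mask the local part of an email for logging, e.g. a***@domain.com.

    Hypothetical helper suggested in review; not part of the current codebase.
    """
    if not email or "@" not in email:
        return "***"
    local, _, domain = email.partition("@")
    # Keep only the first character of the local part; never log the rest.
    return f"{local[:1]}***@{domain}"
```

Call sites would then use `logger.info("User validated", email=mask_email(email))` instead of logging the raw address.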
133-139: Brittle new-identity detection
`identity.created_at = identity.last_login AS is_new_identity` relies on millisecond equality of two separate `timestamp()` calls. This can flip from True→False even on first creation. Prefer a property set once, e.g.,

```cypher
ON CREATE SET identity.first_seen = timestamp(),
              identity.last_login = identity.first_seen
ON MATCH SET identity.last_login = timestamp()
RETURN identity, user,
       identity.first_seen = identity.last_login AS is_new_identity, ...
```

I can adjust the query to set and use `first_seen`.
253-256: GitHub/Google OAuth: broad except blocks
Catching bare `Exception` hides root causes and triggers Pylint W0718. Consider narrowing (e.g., `OAuthError`) or re-raising after logging. I can propose targeted exceptions based on `authlib` types if desired.

app/public/css/chat-components.css (1)
31-48: Mobile tweaks: good; verify overlap with responsive.css
The padding, overflow-x guard, and box-sizing are helpful. The `transition: margin-left 220ms` on `.chat-container` assumes something toggles `margin-left`; per responsive.css, the shift is on `body.left-toolbar-open #container`. If `.chat-container` never changes its margin-left, the transition won't animate.
- Either move the transition to the element that actually changes margin-left, or set a CSS custom property and reuse it across files for consistency (e.g., `--toolbar-slide-duration: 220ms`).

Also confirm `.chat-container` exists in templates. If not, prefer targeting the actual container ID/class used. I can scan the repo and adjust selectors if you want.

api/app_factory.py (2)
api/app_factory.py (2)
58-58: Session secret: warn in dev, enforce in prodWarning is good. Consider deriving
https_onlyand cookie flags from an env var to avoid insecure defaults in production.Example:
- same_site="lax", - https_only=False, + same_site=os.getenv("SESSION_SAMESITE", "lax"), + https_only=os.getenv("ENV", "dev") == "prod",
90-94: Exception handler captures all Exceptions
Catching `Exception` at the app level and branching on substrings ("token"/"oauth") can misroute non-OAuth failures whose messages happen to contain those words. Prefer handling known OAuth exceptions explicitly (e.g., `OAuthError`) via a dedicated handler, and let the default handler process others.

api/routes/database.py (1)
84-85: Avoid logging the entire loader error payload
`result` may contain sensitive connection details (host/user/db). Log a code or a generic message instead.

```diff
-logger.error("Database loader failed", error=str(result))
+logger.error("Database loader failed", reason="loader_failed")
```

If you want structured, non-PII diagnostics, we can have loaders return machine-friendly error codes.
api/loaders/postgres_loader.py (1)
405-415: Double-logging on error path
You log an error with details, then log a second generic error. Drop the second to avoid noise.

```diff
- logger.error("Error refreshing graph schema", error=str(e))
- error_msg = "Error refreshing graph schema"
- logger.error(error_msg)
- return False, error_msg
+ logger.error("Error refreshing graph schema", error=str(e))
+ return False, "Error refreshing graph schema"
```

app/templates/components/left_toolbar.j2 (3)
6-13: Add button type and ARIA control wiring for better semantics
- Explicitly set type="button" to avoid unintended form submissions if this markup is ever placed inside a form.
- Connect the toggle to the controlled region with aria-controls and initialize aria-expanded in markup (the script updates it later).
```diff
- <button id="burger-toggle-btn" class="action-button toolbar-button" title="Open menu" aria-label="Open menu">
+ <button
+   id="burger-toggle-btn"
+   type="button"
+   class="action-button toolbar-button"
+   title="Open menu"
+   aria-label="Open menu"
+   aria-controls="left-toolbar"
+   aria-expanded="false">
```
119-135: Pointer handling is robust; add keyboard handler for completeness
pointerdown+click with a guard avoids ghost clicks on touch devices. Consider adding a keydown handler so Left/Right or Enter/Space can also toggle without relying on the implicit click synthesis (helps custom keybindings).

```diff
  btn.addEventListener('click', function (e) {
    if (ignoreNextClick) { ignoreNextClick = false; return; }
    handleToggleEvent();
  });
+
+ // Optional: explicit keyboard support
+ btn.addEventListener('keydown', function (e) {
+   if (e.key === 'Enter' || e.key === ' ') {
+     e.preventDefault();
+     handleToggleEvent();
+   }
+   // Optional roving: ArrowLeft closes, ArrowRight opens
+   if (e.key === 'ArrowLeft') setOpen(false);
+   if (e.key === 'ArrowRight') setOpen(true);
+ });
```
136-142: Avoid polluting global scope (make API non-enumerable or namespaced)
`window.__leftToolbar` is fine, but you can make it non-enumerable (and harder to collide with) using `defineProperty`.

```diff
- window.__leftToolbar = {
-   open: () => setOpen(true),
-   close: () => setOpen(false),
-   toggle: () => setOpen(!nav.classList.contains('collapsed'))
- };
+ Object.defineProperty(window, '__leftToolbar', {
+   value: {
+     open: () => setOpen(true),
+     close: () => setOpen(false),
+     toggle: () => setOpen(!nav.classList.contains('collapsed')),
+   },
+   configurable: true,
+   writable: false,
+   enumerable: false,
+ });
```

app/public/css/responsive.css (2)
101-177: Risk of overflow on narrow devices; consider more fluid constraints
- The combination of flex-wrap: nowrap plus control max-widths (30% + 35% + other buttons) can overflow at 320–360px widths.
- Suggest using CSS clamp() for widths and allowing wrap when space is too tight.
```diff
- .chat-header .button-container {
-   ...
-   flex-wrap: nowrap;
-   align-items: stretch;
- }
+ .chat-header .button-container {
+   ...
+   flex-wrap: wrap; /* allow wrapping on very narrow screens */
+   align-items: stretch;
+ }
  ...
- #graph-select { max-width: 30%; }
+ #graph-select { max-width: clamp(150px, 30%, 45%); }
- #custom-file-upload { max-width: 35%; }
+ #custom-file-upload { max-width: clamp(160px, 35%, 50%); }
```

If strict single-line layout is required, we can switch to overflow: hidden with more aggressive text-overflow rules.
5-31: Unify CSS Breakpoints Across Stylesheets
There's an inconsistency in our breakpoints: every stylesheet uses `@media (max-width: 768px)` except app/public/css/buttons.css, which uses `@media (max-width: 767px)`. This 1px gap can cause unexpected layout shifts between 767px and 768px.

• app/public/css/buttons.css (around line 394): change `@media (max-width: 767px) { /* … */ }` to `@media (max-width: 768px) { /* … */ }`.

Once updated, all components—chat, buttons, responsive layouts—will share the same breakpoint.
api/routes/graphs.py (4)
85-94: Sanitizers are minimal and effective; consider consistent reuse
`sanitize_query` and `sanitize_log_input` cap size and strip newlines; a good baseline. Reuse `sanitize_log_input` for all log fields that can carry free-form text (e.g., SQL) for uniformity.
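For illustration, a minimal sketch of what such a sanitizer might look like (the function name mirrors the review's reference, but the `MAX_LOG_FIELD_LEN` cap and exact behavior are assumptions, not the repo's actual implementation in api/routes/graphs.py):

```python
MAX_LOG_FIELD_LEN = 500  # assumed cap; adjust to match the real helper


def sanitize_log_input(value, max_len: int = MAX_LOG_FIELD_LEN) -> str:
    """Keep log fields single-line and bounded: strip CR/LF, cap length."""
    if not isinstance(value, str):
        value = str(value)
    return value.replace("\r", " ").replace("\n", " ")[:max_len]
```

Routing every free-form field (user query, SQL, error text) through one helper keeps log-injection and log-size concerns in a single place.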
473-491: Optional: add server-side timeouts around DB execution and response formatting
Even with threadpool offload, runaway queries can monopolize threads. Consider wrapping execute_sql_query and format_response in an application-defined timeout (e.g., 60–120s) and stream a typed timeout error if exceeded.
I can wire a small timeout helper using anyio.move_on_after for async wrappers around run_in_threadpool.
Also applies to: 493-519, 529-543
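A minimal stdlib sketch of such a wrapper (the review suggests `anyio.move_on_after`; `asyncio.wait_for` over `asyncio.to_thread` is a comparable alternative, and `run_with_timeout` is a hypothetical helper name, not existing code):

```python
import asyncio


async def run_with_timeout(fn, *args, timeout_s: float = 120.0):
    """Run a blocking callable in a worker thread; raise TimeoutError on overrun.

    Note: cancelling the await does not stop the underlying thread, so pair
    this with DB-side statement timeouts for a true hard stop.
    """
    return await asyncio.wait_for(asyncio.to_thread(fn, *args), timeout=timeout_s)
```

The caller would catch the timeout and emit the same typed `error` event used elsewhere in the stream.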
550-551: Content type mismatch for delimiter-separated streaming
You're streaming multiple JSON fragments separated by a custom delimiter. application/json implies a single valid JSON document. Consider application/x-ndjson or text/plain; ensure the frontend expects the same.

```diff
- return StreamingResponse(generate(), media_type="application/json")
+ return StreamingResponse(generate(), media_type="application/x-ndjson")
  ...
- return StreamingResponse(generate_confirmation(), media_type="application/json")
+ return StreamingResponse(generate_confirmation(), media_type="application/x-ndjson")
```

If the client strictly expects application/json, we can leave it and revisit later.
Also applies to: 674-675
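Whichever media type is chosen, the consumer must split on the delimiter and parse fragments incrementally. An illustrative pure-Python parser (the `"\x1e"` delimiter is a placeholder — the real value must match `MESSAGE_DELIMITER` in app/ts/modules/config.ts):

```python
import json


def iter_stream_events(chunks, delimiter: str = "\x1e"):
    """Yield parsed JSON objects from text chunks split by a custom delimiter.

    `delimiter` is an assumed stand-in for the frontend's MESSAGE_DELIMITER.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # A chunk boundary may fall mid-fragment, so buffer until a delimiter.
        while delimiter in buffer:
            fragment, buffer = buffer.split(delimiter, 1)
            if fragment.strip():
                yield json.loads(fragment)
    if buffer.strip():  # trailing fragment without a final delimiter
        yield json.loads(buffer)
```

This is the behavior the frontend already needs regardless of the declared content type; aligning the header just makes the contract explicit.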
287-293: Graph ID normalization is not uniform across endpoints
query_graph caps incoming graph_id to 100 chars, but confirm_destructive_operation and refresh_graph_schema don't. For consistency and to avoid pathological inputs, apply the same normalization everywhere.

```diff
- graph_id = request.state.user_id + "_" + graph_id.strip()
+ graph_id = request.state.user_id + "_" + graph_id.strip()[:100]
```

Also applies to: 563-571, 685-691
app/public/css/buttons.css (2)
362-392: Pointer-events on collapsed container can be tricky across browsers
Using pointer-events: none on #left-toolbar while relying on pointer-events: auto on the child toggle usually works, but has had edge-case inconsistencies (older Safari). As a defensive tweak, also set position: fixed on the toggle in collapsed mode so it's clearly out of the zero-width ancestor's hit-testing box.

```diff
 #left-toolbar.collapsed #burger-toggle-btn {
-  display: flex;
-  width: 48px;
-  height: 48px;
-  margin: 6px 0;
-  position: absolute;
-  top: 6px;
-  left: 6px;
+  display: flex;
+  width: 48px;
+  height: 48px;
+  margin: 6px 0;
+  position: fixed; /* make hit-testing independent of ancestor width */
+  top: 6px;
+  left: 6px;
   z-index: 9999; /* keep on top so it's always clickable */
   pointer-events: auto;
 }
```

Alternatively, drop pointer-events: none on the container and rely solely on width: 0 to avoid blocking.
394-438: Unify breakpoint with responsive.css
This file uses max-width: 767px while responsive.css uses 768px. Align to a single breakpoint to prevent a 1px inconsistent band.

```diff
-@media (max-width: 767px) {
+@media (max-width: 768px) {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`Pipfile.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (14)
- Pipfile (1 hunks)
- api/app_factory.py (3 hunks)
- api/auth/oauth_handlers.py (5 hunks)
- api/auth/user_management.py (11 hunks)
- api/graph.py (3 hunks)
- api/loaders/mysql_loader.py (3 hunks)
- api/loaders/postgres_loader.py (3 hunks)
- api/routes/auth.py (11 hunks)
- api/routes/database.py (4 hunks)
- api/routes/graphs.py (8 hunks)
- app/public/css/buttons.css (1 hunks)
- app/public/css/chat-components.css (1 hunks)
- app/public/css/responsive.css (3 hunks)
- app/templates/components/left_toolbar.j2 (3 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
Pipfile
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Manage Python dependencies with Pipenv using Pipfile
Files:
Pipfile
api/app_factory.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Implement the application factory (including OAuth setup) in api/app_factory.py
Files:
api/app_factory.py
api/auth/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place authentication modules under api/auth/
Files:
api/auth/oauth_handlers.py
api/auth/user_management.py
api/routes/database.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Keep database management route handlers in api/routes/database.py
Files:
api/routes/database.py
api/routes/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place all additional Flask route handlers under api/routes/
Files:
api/routes/database.py
api/routes/graphs.py
api/routes/auth.py
api/loaders/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place data loading utilities under api/loaders/
Files:
api/loaders/postgres_loader.py
api/loaders/mysql_loader.py
api/routes/graphs.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Keep graph/database route handlers in api/routes/graphs.py
Files:
api/routes/graphs.py
api/routes/auth.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Keep authentication route handlers in api/routes/auth.py
Files:
api/routes/auth.py
🧠 Learnings (1)
📚 Learning: 2025-08-19T08:18:37.470Z
Learnt from: CR
PR: FalkorDB/QueryWeaver#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-08-19T08:18:37.470Z
Learning: Applies to api/app_factory.py : Implement the application factory (including OAuth setup) in api/app_factory.py
Applied to files:
api/app_factory.py
🧬 Code graph analysis (6)
api/app_factory.py (1)
api/routes/auth.py (1)
init_auth(280-314)
api/auth/oauth_handlers.py (1)
api/auth/user_management.py (1)
ensure_user_in_organizations(16-138)
api/routes/database.py (1)
api/auth/user_management.py (1)
token_required(311-353)
api/loaders/mysql_loader.py (2)
api/loaders/base_loader.py (1)
BaseLoader(7-16)
api/loaders/graph_loader.py (1)
load_to_graph(12-184)
api/routes/graphs.py (5)
app/ts/modules/config.ts (1)
MESSAGE_DELIMITER(5-5)
api/graph.py (2)
get_db_description(37-52), find(55-118)
api/loaders/mysql_loader.py (3)
is_schema_modifying_query(374-402), execute_sql_query(456-525), refresh_graph_schema(405-453)
api/loaders/postgres_loader.py (3)
is_schema_modifying_query(336-364), execute_sql_query(418-492), refresh_graph_schema(367-415)
api/agents/response_formatter_agent.py (2)
ResponseFormatterAgent(42-135), format_response(49-75)
api/routes/auth.py (1)
api/auth/user_management.py (1)
validate_and_cache_user(193-309)
🪛 GitHub Actions: Pylint
api/graph.py
[error] 10-10: E0401: Unable to import 'api.logging_config' (import-error)
[error] 10-10: E0611: No name 'logging_config' in module 'api' (no-name-in-module)
[warning] 55-55: R0914: Too many local variables (16/15) (too-many-locals)
api/app_factory.py
[error] 13-13: E0401: Unable to import 'api.logging_config' (import-error)
[error] 13-13: E0611: No name 'logging_config' in module 'api' (no-name-in-module)
[warning] 23-23: R0903: Too few public methods (1/2) (too-few-public-methods)
api/auth/oauth_handlers.py
[error] 12-12: E0401: Unable to import 'api.logging_config' (import-error)
[error] 12-12: E0611: No name 'logging_config' in module 'api' (no-name-in-module)
[warning] 48-48: W0718: Catching too general exception Exception (broad-exception-caught)
[warning] 76-76: W0718: Catching too general exception Exception (broad-exception-caught)
api/auth/user_management.py
[error] 11-11: E0401: Unable to import 'api.logging_config' (import-error)
[error] 11-11: E0611: No name 'logging_config' in module 'api' (no-name-in-module)
[warning] 16-16: R0914: Too many local variables (17/15) (too-many-locals)
[warning] 32-32: W0101: Unreachable code (unreachable)
[warning] 136-136: W0718: Catching too general exception Exception (broad-exception-caught)
🪛 GitHub Actions: Tests
api/loaders/postgres_loader.py
[error] 11-11: ModuleNotFoundError: No module named 'api.logging_config' while importing PostgresLoader during pytest collection (pipenv run pytest tests/ -k "not e2e" --verbose).
api/loaders/mysql_loader.py
[error] 12-12: ModuleNotFoundError: No module named 'api.logging_config' while importing MySQLLoader during pytest collection (pipenv run pytest tests/ -k "not e2e" --verbose).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: e2e-tests
🔇 Additional comments (23)
api/auth/user_management.py (9)
104-114: LGTM: structured log for new-user path
Clear, structured fields and concise message. Consistent with repo-wide logging approach.
116-124: LGTM: cross-provider link path
Good structured context; message is unambiguous.
126-131: LGTM: existing identity path
Concise info-level event for existing-user login.
145-151: LGTM: validation logging format
Good structured context, helpful for triage.
172-176: LGTM: last-login updated info
Message + context are clear and actionable.
200-208: Cache TTL: looks reasonable
15-minute token validation cache is a sensible default.
226-231: LGTM: token refresh flow
Clear success/error logs; session cleanup on failure is correct.
307-310: LGTM: final safety net
Structured error log and session reset before returning unauthenticated.
346-351: LGTM: decorator error path
Logs context and returns a generic 401 to callers; avoids leaking details.
api/app_factory.py (1)
20-20: LGTM: central module logger initialized early
Consistent with project-wide refactor.
api/routes/database.py (3)
51-55: LGTM: sanitized, structured error for Postgres path
Good catch of specific exceptions and non-leaky HTTPException.
63-66: LGTM: sanitized, structured error for MySQL path
Consistent behavior across drivers.
87-90: LGTM: generic catch with generic 500
Appropriate for this boundary.
api/loaders/postgres_loader.py (1)
379-383: LGTM: structured schema-refresh notice
Message is concise; `graph_id` captured as a field. Good.

app/templates/components/left_toolbar.j2 (1)
71-92: Nice: setOpen centralizes state + ARIA updates
The function consistently toggles DOM state, body class, and ARIA attributes. This reduces drift between visuals and accessibility state.
app/public/css/responsive.css (1)
5-31: Good: body and container shifts respect the toolbar-open state
The left-toolbar-open treatment avoids horizontal scroll and coordinates container width/margins. Looks consistent with the JS toggle.
api/routes/graphs.py (6)
22-29: Structured logging adoption looks good
get_logger(name) and consistent contextual fields align with the project's logging refactor.
124-132: Error path logs are now structured
Logging graph selection failures with graph and error fields improves observability without leaking PII.
152-157: Query error logging improved
Structured log on graph data query failures is appropriate and keeps responses generic.
273-274: Good: log detailed cause but return generic error on load_graph failure
Avoids leaking file contents or DB details to the client.
404-419: Destructive operation detection and confirmation UX is solid
Coverage for INSERT/UPDATE/DELETE and DDL is reasonable and the streaming confirmation payload is clear.
95-107: Routes and scope align with repo guidelines
Graph/database route handlers remain under api/routes/graphs.py as required.
app/public/css/buttons.css (1)
298-361: Left toolbar base styles look solid
Fixed position, sizing, and backdrop effects are cohesive with the new component. Class/ID choices match the template.
```python
from starlette.middleware.sessions import SessionMiddleware
from starlette.middleware.base import BaseHTTPMiddleware

from api.logging_config import get_logger
```
Build is failing: api.logging_config import not found
Same E0401/E0611 here. Until api/logging_config.py is present, app import will fail.
Apply the same guarded import as stopgap:
```diff
-from api.logging_config import get_logger
+try:
+    from api.logging_config import get_logger
+except Exception:
+    import logging as _logging
+
+    def get_logger(name: str):
+        return _logging.getLogger(name)
```

And add the missing module (see suggested implementation below in a separate comment).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# api/app_factory.py
# … other imports …

# Replace direct import (line 13) with guarded import + fallback
try:
    from api.logging_config import get_logger
except Exception:
    import logging as _logging

    def get_logger(name: str):
        return _logging.getLogger(name)

# … rest of factory implementation …
```
🧰 Tools
🪛 GitHub Actions: Pylint
[error] 13-13: E0401: Unable to import 'api.logging_config' (import-error)
[error] 13-13: E0611: No name 'logging_config' in module 'api' (no-name-in-module)
🤖 Prompt for AI Agents
In api/app_factory.py around line 13 the direct import "from api.logging_config
import get_logger" causes import failures while api/logging_config.py is
missing; wrap that import in a try/except ImportError and provide a lightweight
fallback (e.g., define or import a simple get_logger stub that returns a basic
logger) so app_factory can be imported during build, and also add the missing
module api/logging_config.py with the intended get_logger implementation so the
guarded fallback is temporary and the real implementation is available.
```python
from api.logging_config import get_logger
from .user_management import ensure_user_in_organizations

logger = get_logger(__name__)
```
💡 Verification agent
🧩 Analysis chain
Fix import error: add api/logging_config.get_logger (blocking CI).
Pylint/pytest fail with Unable to import 'api.logging_config'. The PR introduces the usage but not the module. Add a minimal, production-ready api/logging_config.py that returns a structlog BoundLogger and integrates with stdlib logging.
Here’s a drop-in module you can add as api/logging_config.py:
```python
# api/logging_config.py
from __future__ import annotations

import logging
import os
import sys

import structlog

_CONFIGURED = False


def _configure_logging() -> None:
    # Stdlib logging baseline
    log_level = os.getenv("LOG_LEVEL", "INFO").upper()
    json_logs = os.getenv("JSON_LOGS", "0").lower() in {"1", "true", "yes"}

    handler = logging.StreamHandler(sys.stdout)
    # Use ProcessorFormatter so stdlib handlers render structlog events
    from structlog.stdlib import ProcessorFormatter
    formatter = ProcessorFormatter(
        processor=structlog.processors.JSONRenderer() if json_logs
        else structlog.dev.ConsoleRenderer(colors=True),
        foreign_pre_chain=[
            structlog.stdlib.add_log_level,
            structlog.processors.TimeStamper(fmt="iso"),
        ],
    )
    handler.setFormatter(formatter)

    root = logging.getLogger()
    root.handlers = [handler]
    root.setLevel(log_level)

    structlog.configure(
        processors=[
            structlog.contextvars.merge_contextvars,
            structlog.stdlib.add_logger_name,
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            # Hand off to ProcessorFormatter above for final rendering
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
        cache_logger_on_first_use=True,
    )


def get_logger(name: str | None = None):
    global _CONFIGURED
    if not _CONFIGURED:
        _configure_logging()
        _CONFIGURED = True
    return structlog.get_logger(name)
```

🏁 Script executed:
return structlog.get_logger(name)🏁 Script executed:
#!/bin/bash
# Verify import sites and ensure no lingering stdlib `logging` imports remain for these modules
rg -nP 'from\s+api\.logging_config\s+import\s+get_logger' --type=py
rg -nP '^\s*import\s+logging\b' api/auth/oauth_handlers.py api/graph.py api/routes/auth.py api/loaders/mysql_loader.pyLength of output: 804
🏁 Script executed:

```bash
#!/bin/bash
# Check if api/logging_config.py exists in the repository
if [ -f api/logging_config.py ]; then
  echo "exists"
else
  echo "missing"
fi
```

Length of output: 57
Add missing api/logging_config.py to satisfy get_logger imports
The PR introduces `from api.logging_config import get_logger` in multiple modules, but api/logging_config.py is not present, causing import errors and failing CI. Please add a new api/logging_config.py with a production-ready get_logger implementation (for example, the snippet below) so that all imports resolve correctly.
Modules referencing get_logger that need this file:
- api/auth/user_management.py (line 11)
- api/app_factory.py (line 13)
- api/graph.py (line 10)
- api/auth/oauth_handlers.py (line 12)
- api/routes/database.py (line 10)
- api/loaders/postgres_loader.py (line 11)
- api/routes/auth.py (line 15)
- api/loaders/mysql_loader.py (line 12)
- api/routes/graphs.py (line 22)
Suggested drop-in implementation for api/logging_config.py:
+ # api/logging_config.py
+ from __future__ import annotations
+
+ import logging
+ import os
+ import sys
+
+ import structlog
+
+ _CONFIGURED = False
+
+ def _configure_logging() -> None:
+     log_level = os.getenv("LOG_LEVEL", "INFO").upper()
+     json_logs = os.getenv("JSON_LOGS", "0").lower() in {"1", "true", "yes"}
+
+     handler = logging.StreamHandler(sys.stdout)
+     from structlog.stdlib import ProcessorFormatter
+     formatter = ProcessorFormatter(
+         processor=(
+             structlog.processors.JSONRenderer()
+             if json_logs
+             else structlog.dev.ConsoleRenderer(colors=True)
+         ),
+         foreign_pre_chain=[
+             structlog.stdlib.add_log_level,
+             structlog.processors.TimeStamper(fmt="iso"),
+         ],
+     )
+     handler.setFormatter(formatter)
+
+     root = logging.getLogger()
+     root.handlers = [handler]
+     root.setLevel(log_level)
+
+     structlog.configure(
+         processors=[
+             structlog.contextvars.merge_contextvars,
+             structlog.stdlib.add_logger_name,
+             structlog.stdlib.add_log_level,
+             structlog.stdlib.PositionalArgumentsFormatter(),
+             structlog.processors.StackInfoRenderer(),
+             structlog.processors.format_exc_info,
+             structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
+         ],
+         logger_factory=structlog.stdlib.LoggerFactory(),
+         cache_logger_on_first_use=True,
+     )
+
+ def get_logger(name: str | None = None):
+     global _CONFIGURED
+     if not _CONFIGURED:
+         _configure_logging()
+         _CONFIGURED = True
+     return structlog.get_logger(name)

Please add this file to unblock CI.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
from api.logging_config import get_logger
from .user_management import ensure_user_in_organizations

logger = get_logger(__name__)

# File: api/logging_config.py
from __future__ import annotations

import logging
import os
import sys

import structlog

_CONFIGURED = False

def _configure_logging() -> None:
    log_level = os.getenv("LOG_LEVEL", "INFO").upper()
    json_logs = os.getenv("JSON_LOGS", "0").lower() in {"1", "true", "yes"}

    handler = logging.StreamHandler(sys.stdout)
    from structlog.stdlib import ProcessorFormatter
    formatter = ProcessorFormatter(
        processor=(
            structlog.processors.JSONRenderer()
            if json_logs
            else structlog.dev.ConsoleRenderer(colors=True)
        ),
        foreign_pre_chain=[
            structlog.stdlib.add_log_level,
            structlog.processors.TimeStamper(fmt="iso"),
        ],
    )
    handler.setFormatter(formatter)

    root = logging.getLogger()
    root.handlers = [handler]
    root.setLevel(log_level)

    structlog.configure(
        processors=[
            structlog.contextvars.merge_contextvars,
            structlog.stdlib.add_logger_name,
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
        cache_logger_on_first_use=True,
    )

def get_logger(name: str | None = None):
    global _CONFIGURED
    if not _CONFIGURED:
        _configure_logging()
        _CONFIGURED = True
    return structlog.get_logger(name)
🧰 Tools
🪛 GitHub Actions: Pylint
[error] 12-12: E0401: Unable to import 'api.logging_config' (import-error)
[error] 12-12: E0611: No name 'logging_config' in module 'api' (no-name-in-module)
🤖 Prompt for AI Agents
In api/logging_config.py (new file), add a production-ready get_logger
implementation so imports like "from api.logging_config import get_logger"
resolve; create a module that configures Python logging (formatter,
StreamHandler, optional file handler, log level from env, and returns a logger
with provided name) and ensure the implementation is idempotent (doesn't add
duplicate handlers) and safe to call from all listed modules
(api/auth/user_management.py line 11, api/app_factory.py line 13, api/graph.py
line 10, api/auth/oauth_handlers.py line 12, api/routes/database.py line 10,
api/loaders/postgres_loader.py line 11, api/routes/auth.py line 15,
api/loaders/mysql_loader.py line 12, api/routes/graphs.py line 22); add tests or
run CI to verify no import errors.
logger.error("Error handling Google OAuth callback", error=str(exc))
return False
🛠️ Refactor suggestion
Log exceptions with stack traces.
Swap logger.error(..., error=str(exc)) for logger.exception(...) to preserve stack traces and improve debuggability with structlog.
Apply:
- except Exception as exc:  # capture exception for logging
-     logger.error("Error handling Google OAuth callback", error=str(exc))
+ except Exception:
+     logger.exception("Error handling Google OAuth callback")

📝 Committable suggestion
except Exception:
    logger.exception("Error handling Google OAuth callback")
    return False
🤖 Prompt for AI Agents
In api/auth/oauth_handlers.py around lines 49-50, the current logger call uses
logger.error("Error handling Google OAuth callback", error=str(exc)) which drops
the exception stack; replace it with logger.exception("Error handling Google
OAuth callback") (or logger.error(..., exc_info=True) if you prefer explicit
exc_info) so the full stack trace is captured by structlog for proper debugging.
logger.error("Error handling GitHub OAuth callback", error=str(exc))
return False
🛠️ Refactor suggestion
Same here—use logger.exception for full trace.
Consistent with the Google path.
- except Exception as exc:  # capture exception for logging
-     logger.error("Error handling GitHub OAuth callback", error=str(exc))
+ except Exception:
+     logger.exception("Error handling GitHub OAuth callback")

📝 Committable suggestion
except Exception:
    logger.exception("Error handling GitHub OAuth callback")
    return False
🤖 Prompt for AI Agents
In api/auth/oauth_handlers.py around lines 77-78, the GitHub OAuth error logging
uses logger.error with a stringified exception which loses the traceback;
replace that call with logger.exception("Error handling GitHub OAuth callback")
so the full stack trace and exception context are logged (keep the subsequent
return False unchanged), matching the Google path's handling.
from api.logging_config import get_logger

logger = get_logger(__name__)
🛠️ Refactor suggestion
Build is failing: api.logging_config import not found
CI shows Pylint E0401/E0611 for this import. Either api/logging_config.py was not committed or the package path is misconfigured. Until the module exists, all modules using get_logger will crash at import-time.
Apply a small guard so the app still runs with stdlib logging if the module is missing:
-from api.logging_config import get_logger
+try:
+    from api.logging_config import get_logger
+except Exception:  # ImportError in CI; keep app usable
+    import logging as _logging
+
+    def get_logger(name: str):
+        return _logging.getLogger(name)

Additionally, ensure api/logging_config.py is added to the repo. If you want, I can provide a minimal structlog-based implementation.
📝 Committable suggestion
try:
    from api.logging_config import get_logger
except Exception:  # ImportError in CI; keep app usable
    import logging as _logging

    def get_logger(name: str):
        return _logging.getLogger(name)

logger = get_logger(__name__)
🧰 Tools
🪛 GitHub Actions: Pylint
[error] 11-11: E0401: Unable to import 'api.logging_config' (import-error)
[error] 11-11: E0611: No name 'logging_config' in module 'api' (no-name-in-module)
🤖 Prompt for AI Agents
In api/auth/user_management.py around lines 11 to 14 the import from
api.logging_config raises import errors in CI because the module may be missing;
wrap the import in a try/except ImportError that falls back to Python's stdlib
logging (configure basic logging if necessary and set logger =
logging.getLogger(__name__)), and add a TODO or comment to ensure
api/logging_config.py is committed to the repo (or create the minimal
structlog-based implementation) so callers use the intended logger when
available.
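The guarded import above can be exercised directly; in an environment where api.logging_config is missing (as CI currently is), the fallback path is taken and the app still gets a working stdlib logger:

```python
import logging

try:
    from api.logging_config import get_logger  # absent until the file is committed
except Exception:  # ImportError fallback, as suggested above
    def get_logger(name: str):
        return logging.getLogger(name)

# Any of the listed modules can now obtain a logger without crashing at import time.
logger = get_logger("api.auth.user_management")
print(isinstance(logger, logging.Logger))
```

Once api/logging_config.py lands, the try branch succeeds and the same call sites transparently switch to the structlog-backed implementation.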
# Use a thread pool to enforce timeout
with ThreadPoolExecutor(max_workers=1) as executor:
    find_start = time.perf_counter()
    future = executor.submit(find, graph_id, queries_history, db_description)
    try:
        _, result, _ = future.result(timeout=120)
        find_elapsed = time.perf_counter() - find_start
        logger.info("Finding relevant tables took", seconds=find_elapsed)
        # Total time for the pre-analysis phase
        step1_elapsed = time.perf_counter() - step1_start
        logger.info("Step 1 (relevancy + table finding) took", seconds=step1_elapsed)
    except FuturesTimeoutError:
        yield json.dumps(
            {
                "type": "error",
                "message": "Timeout finding relevant tables",
            }
        ) + MESSAGE_DELIMITER
        return
    except Exception as e:
        logger.info("Error in find function", error=str(e))
        yield json.dumps(
            {"type": "error", "message": "Error in find function"}
        ) + MESSAGE_DELIMITER
        return
🛠️ Refactor suggestion
Avoid blocking the event loop; offload long-running CPU/IO to a threadpool
Only the find() call is offloaded with a timeout. RelevancyAgent.get_answer, AnalysisAgent.get_analysis, loader_class.execute_sql_query, and ResponseFormatterAgent.format_response are all potentially slow and currently run on the event loop thread inside an async generator, impacting concurrency.
Apply this diff to offload blocking calls using Starlette’s run_in_threadpool:
@@
-from concurrent.futures import ThreadPoolExecutor
+from concurrent.futures import ThreadPoolExecutor
+from starlette.concurrency import run_in_threadpool
@@ async def query_graph(...):
-    answer_rel = agent_rel.get_answer(queries_history[-1], db_description)
+    answer_rel = await run_in_threadpool(agent_rel.get_answer, queries_history[-1], db_description)
@@
-    answer_an = agent_an.get_analysis(
-        queries_history[-1], result, db_description, instructions
-    )
+    answer_an = await run_in_threadpool(
+        agent_an.get_analysis, queries_history[-1], result, db_description, instructions
+    )
@@
-    query_results = loader_class.execute_sql_query(answer_an["sql_query"], db_url)
+    query_results = await run_in_threadpool(
+        loader_class.execute_sql_query, answer_an["sql_query"], db_url
+    )
@@
-    user_readable_response = response_agent.format_response(
+    user_readable_response = await run_in_threadpool(
+        response_agent.format_response,
         user_query=queries_history[-1],
         sql_query=answer_an["sql_query"],
         query_results=query_results,
         db_description=db_description,
-    )
+    )

Do the same in confirm_destructive_operation's generator for execute_sql_query and format_response. If you want, I can provide the exact diff for that block as well.
Committable suggestion skipped: line range outside the PR's diff.
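The same offload-with-timeout pattern can be sketched with stdlib asyncio alone; asyncio.to_thread plays the role of Starlette's run_in_threadpool here, and find() is a stand-in for the route's blocking call (the 120s route timeout is shortened for the demo):

```python
import asyncio
import time

def find(graph_id):  # stand-in for the blocking find() call in the route
    time.sleep(0.05)
    return ("tables", ["users", "orders"], "meta")

async def analyze(graph_id):
    # Offload the blocking call so the event loop stays responsive,
    # and bound it with a timeout as the route does.
    try:
        _, result, _ = await asyncio.wait_for(
            asyncio.to_thread(find, graph_id), timeout=2.0
        )
        return {"type": "ok", "tables": result}
    except asyncio.TimeoutError:
        return {"type": "error", "message": "Timeout finding relevant tables"}

out = asyncio.run(analyze("g1"))
print(out["type"])  # → ok
```

While `find` sleeps in a worker thread, the loop is free to serve other requests, which is exactly the concurrency win the review is asking for.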
logger.info("Error in find function", error=str(e))
yield json.dumps(
    {"type": "error", "message": "Error in find function"}
) + MESSAGE_DELIMITER
return
Log errors as errors, not info
logger.info("Error in find function", error=...) should be logger.error to be surfaced properly.
- except Exception as e:
-     logger.info("Error in find function", error=str(e))
+ except Exception as e:
+     logger.error("Error in find function", error=str(e))

📝 Committable suggestion
except Exception as e:
    logger.error("Error in find function", error=str(e))
    yield json.dumps(
        {"type": "error", "message": "Error in find function"}
    ) + MESSAGE_DELIMITER
    return
🤖 Prompt for AI Agents
In api/routes/graphs.py around lines 371 to 375, the exception logging currently
uses logger.info which should be logger.error so the failure is properly
surfaced; change logger.info("Error in find function", error=str(e)) to
logger.error("Error in find function", error=str(e)) (preserving the message
text and error payload) so the exception is logged at error level and keep the
subsequent yield and return unchanged.
logger.info(
    "SQL Result",
    sql=answer_an.get('sql_query'),
)
🛠️ Refactor suggestion
Avoid logging full SQL at info level; sanitize and downgrade to debug
SQL may include sensitive literals. Truncate and sanitize, and log at debug.
- logger.info(
-     "SQL Result",
-     sql=answer_an.get('sql_query'),
- )
+ logger.debug(
+     "SQL Result",
+     sql=sanitize_log_input((answer_an.get('sql_query') or '')[:300]),
+ )

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In api/routes/graphs.py around lines 388 to 391, the code currently logs the
full SQL at info level; change this to log a sanitized, truncated version at
debug level. Replace the logger.info call with logger.debug, build a
sanitized_sql string that strips or masks literal values (e.g., replace quoted
strings and numeric literals with placeholders), truncate to a safe length
(e.g., 200 characters) and include an indicator if truncated, and log that
sanitized_sql under the sql key instead of the raw query.
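sanitize_log_input is not defined anywhere in the PR, so the helper in the suggested diff is hypothetical; a minimal sketch along the lines the prompt describes (mask string and numeric literals, then truncate) could look like:

```python
import re

def sanitize_log_input(sql: str, max_len: int = 200) -> str:
    """Hypothetical helper: mask SQL literals and truncate for safe logging."""
    masked = re.sub(r"'(?:[^']|'')*'", "'?'", sql)      # quoted string literals -> '?'
    masked = re.sub(r"\b\d+(?:\.\d+)?\b", "?", masked)  # numeric literals -> ?
    if len(masked) > max_len:
        masked = masked[:max_len] + " …[truncated]"     # flag that output was cut
    return masked

print(sanitize_log_input("SELECT * FROM users WHERE email = 'a@b.com' AND age > 42"))
# → SELECT * FROM users WHERE email = '?' AND age > ?
```

The ordering matters: strings are masked first so digits inside quoted values never leak through the numeric pass.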
<button id="theme-toggle-btn" class="action-button toolbar-button" title="Toggle Theme"
        aria-label="Toggle theme" aria-pressed="false" tabindex="-1">
🛠️ Refactor suggestion
Make toolbar controls keyboard-focusable (tabindex -1 blocks Tab navigation)
Theme and Schema buttons have tabindex="-1", which removes them from the natural tab order. Unless you implement a roving tabindex pattern for role="toolbar", these should be focusable with Tab.
- <button id="theme-toggle-btn" class="action-button toolbar-button" title="Toggle Theme"
-     aria-label="Toggle theme" aria-pressed="false" tabindex="-1">
+ <button id="theme-toggle-btn" type="button" class="action-button toolbar-button" title="Toggle Theme"
+     aria-label="Toggle theme" aria-pressed="false" tabindex="0">
...
- <button class="action-button toolbar-button" id="schema-button" title="Schema" aria-label="Open schema"
-     aria-pressed="false" tabindex="-1">
+ <button class="action-button toolbar-button" id="schema-button" type="button" title="Schema" aria-label="Open schema"
+     aria-pressed="false" tabindex="0">

If you intend to use ARIA toolbar keyboarding (arrow-key navigation), say the word and I'll supply a compact roving-tabindex implementation.
Also applies to: 48-50
🤖 Prompt for AI Agents
In app/templates/components/left_toolbar.j2 around lines 15-16 (and similarly
lines 48-50), the Theme and Schema buttons use tabindex="-1" which removes them
from Tab order; either remove the tabindex attribute (or set tabindex="0") so
they are keyboard-focusable with Tab, or if you truly intend ARIA toolbar
arrow-key roving, replace the static tabindex="-1" with a proper roving-tabindex
implementation (manage tabindex on focus/arrow keys and set role="toolbar" on
the container). Ensure aria-pressed and other state attributes are updated when
buttons receive focus/activation.
tqdm = "~=4.67.1"
python-multipart = "~=0.0.10"
jinja2 = "~=3.1.4"
structlog = "~=25.4.0"
💡 Verification agent
❓ Verification inconclusive
Good addition; make sure runtime logging config lands with it.
structlog is added, but CI is failing because api.logging_config is missing. Add the new module (see my proposal in oauth_handlers.py comment) so this dependency is actually used; otherwise this line just introduces a new dep without value.
Run to confirm presence of the config module and its get_logger:
🏁 Script executed:

#!/bin/bash
fd -H -a 'logging_config.py' api || true
rg -nP '^\s*def\s+get_logger\(' api/logging_config.py || true

Length of output: 157
Add missing runtime logging configuration
Search confirmed there is no api/logging_config.py, so the import is unresolved and structlog is never used in the codebase. CI failures are occurring because api.logging_config cannot be found.
To resolve:
• Create a new module at api/logging_config.py with a get_logger function that wraps structlog.
• Update any callers (e.g., in oauth_handlers.py) to import and use this new get_logger.
Suggested diff:
+ # api/logging_config.py
+ import structlog
+
+ def get_logger(name=None):
+     """
+     Returns a structlog-compatible logger.
+     """
+     return structlog.get_logger(name)

Then, in oauth_handlers.py (or other modules where logging is needed), ensure you have:
from api.logging_config import get_logger

logger = get_logger(__name__)

This will ensure the structlog dependency is actually utilized at runtime.
🤖 Prompt for AI Agents
In Pipfile around line 19, CI fails because api.logging_config is missing and
structlog is never used; create a new module at api/logging_config.py that
exposes a get_logger(name) function which configures structlog (processor chain,
renderer, and binds context as needed) and returns a logger for the given name,
then update callers (e.g., oauth_handlers.py) to import get_logger from
api.logging_config and replace any existing logger setup with logger =
get_logger(__name__) so structlog is actually initialized and the import
resolves.
- Bump litellm ~=1.82.0 → ~=1.83.0 (fixes CVE: auth bypass via OIDC cache key collision, privilege escalation via unrestricted proxy config)
- Bump fastmcp >=2.13.1 → >=3.2.0 (fixes CVE: SSRF & path traversal in OpenAPI provider)
- Add aiohttp>=3.13.4 constraint (fixes multiple HTTP-level CVEs: duplicate Host headers, header injection, multipart bypass, memory DoS, CRLF injection, cookie leaks, SSRF on Windows)
- Add npm override for lodash-es to 4.18.1 (fixes prototype pollution via array path bypass, code injection via template imports)
- Update fastmcp import path from deprecated fastmcp.server.openapi to fastmcp.server.providers.openapi

Closes #125, #124, #123, #122, #121, #120, #119, #118, #117, #116, #115, #114, #113, #112, #111, #110, #109, #108, #107, #106, #105, #104, #103, #102, #101, #100, #99, #98, #97, #96

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Summary by CodeRabbit
New Features
Improvements
Style
Chores