5 changes: 5 additions & 0 deletions .jules/bolt.md
@@ -0,0 +1,5 @@
# Bolt's Journal

## 2025-05-14 - [Database Indexing for Performance]
**Learning:** Found that the core tables (`chats`, `messages`, `calendar_notes`) were missing indexes on frequently queried and sorted columns (foreign keys and `created_at`). In Drizzle ORM, foreign key constraints do not automatically create indexes in PostgreSQL.
Comment on lines +3 to +4

⚠️ Potential issue | 🟡 Minor

Fix markdown lint: add blank line after heading.

Static analysis (markdownlint MD022) flags that the heading on Line 3 needs a blank line below it before the content on Line 4.

Proposed fix
 ## 2025-05-14 - [Database Indexing for Performance]
+
 **Learning:** Found that the core tables (`chats`, `messages`, `calendar_notes`) were missing indexes on frequently queried and sorted columns (foreign keys and `created_at`). In Drizzle ORM, foreign key constraints do not automatically create indexes in PostgreSQL.
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)

🤖 Prompt for AI Agents
In @.jules/bolt.md around lines 3-4, add a blank line after the heading
"2025-05-14 - [Database Indexing for Performance]" to satisfy markdownlint
MD022; edit .jules/bolt.md so the heading line is followed by an empty line
before the paragraph that begins with "**Learning:**".

**Action:** Always check for missing indexes on foreign keys and columns used in `WHERE` and `ORDER BY` clauses to ensure query performance scales with data growth.
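For reference, the index DDL this learning points at. The statements below mirror the composite indexes added later in this PR's migration, shown here as a standalone sketch: foreign key first, then the sort column, so a filtered-and-sorted query can be satisfied by a single index scan.

```sql
-- Covers queries like: SELECT ... WHERE chat_id = $1 ORDER BY created_at
CREATE INDEX "chats_user_id_created_at_idx" ON "chats" ("user_id", "created_at");
CREATE INDEX "messages_chat_id_created_at_idx" ON "messages" ("chat_id", "created_at");
CREATE INDEX "calendar_notes_user_id_date_idx" ON "calendar_notes" ("user_id", "date");
```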
42 changes: 0 additions & 42 deletions drizzle/migrations/0001_add_calendar_notes.sql

This file was deleted.

80 changes: 80 additions & 0 deletions drizzle/migrations/0001_sync_and_add_indexes.sql
@@ -0,0 +1,80 @@
CREATE TABLE "calendar_notes" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" uuid NOT NULL,
"chat_id" uuid,
"date" timestamp with time zone NOT NULL,
"content" text NOT NULL,
"location_tags" jsonb,
"user_tags" text[],
"map_feature_id" text,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE "chat_participants" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"chat_id" uuid NOT NULL,
"user_id" uuid NOT NULL,
"role" text DEFAULT 'collaborator' NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
CONSTRAINT "chat_participants_chat_user_unique" UNIQUE("chat_id","user_id")
);
--> statement-breakpoint
CREATE TABLE "locations" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" uuid NOT NULL,
"chat_id" uuid,
"geojson" jsonb NOT NULL,
"geometry" geometry(GEOMETRY, 4326),

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed against repository QueueLab/QCX (outputs collapsed):

# Check if PostGIS and pgvector extensions are created in any migration or setup file
rg -n -i 'CREATE EXTENSION' --type sql
rg -n -i 'CREATE EXTENSION' -g '*.ts' -g '*.js'

# Check the migration file structure to verify the claims
fd 'migrations' -t d | head -20

# Read the specific migration file mentioned in the review
find . -name "0001_sync_and_add_indexes.sql" -type f

# Read the migration file to verify the content at lines 28, 47, 60
cat -n ./drizzle/migrations/0001_sync_and_add_indexes.sql | sed -n '1,70p'

# Check if 0000 migration exists
ls -la drizzle/migrations/ | head -20

# Look for any other database setup or initialization files
fd -e sql -e ts -e js | xargs rg -l 'postgis|pgvector' 2>/dev/null | head -20

# Read the 0000 migration file to check if extensions are created there
cat -n ./drizzle/migrations/0000_sweet_metal_master.sql

Add CREATE EXTENSION statements before migration 0001 runs.

Migration 0001_sync_and_add_indexes.sql uses PostGIS types (geometry(GEOMETRY, 4326) at lines 28 and 47) and pgvector (vector(1536) at line 60), but the required extensions are not created anywhere in the codebase. The migration will fail without them. Add CREATE EXTENSION IF NOT EXISTS postgis; and CREATE EXTENSION IF NOT EXISTS vector; to migration 0000 or a dedicated setup script that runs before migration 0001.

🤖 Prompt for AI Agents
In `@drizzle/migrations/0001_sync_and_add_indexes.sql` at line 28, migration
0001_sync_and_add_indexes.sql uses PostGIS geometry and pgvector types (e.g.,
geometry(GEOMETRY, 4326) and vector(1536)) but never creates the required
extensions; add CREATE EXTENSION IF NOT EXISTS postgis; and CREATE EXTENSION IF
NOT EXISTS vector; in a migration that runs before 0001 (either a new
0000_create_extensions migration or a dedicated setup script) so the extensions
exist before 0001_sync_and_add_indexes.sql executes.
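A minimal version of the suggested fix, as a sketch (the file name 0000_create_extensions.sql is illustrative; the file must be ordered before 0001 in the migration journal and run by a role permitted to create extensions):

```sql
-- 0000_create_extensions.sql (illustrative name)
CREATE EXTENSION IF NOT EXISTS postgis;  -- provides geometry(GEOMETRY, 4326)
CREATE EXTENSION IF NOT EXISTS vector;   -- pgvector; provides vector(1536)
```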

"name" text,
"created_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE "system_prompts" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" uuid NOT NULL,
"prompt" text NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE "visualizations" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" uuid NOT NULL,
"chat_id" uuid,
"type" text DEFAULT 'map_layer' NOT NULL,
"data" jsonb NOT NULL,
"geometry" geometry(GEOMETRY, 4326),
"created_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "chats" ALTER COLUMN "title" SET DATA TYPE text;--> statement-breakpoint
ALTER TABLE "chats" ALTER COLUMN "title" SET DEFAULT 'Untitled Chat';--> statement-breakpoint
ALTER TABLE "chats" ALTER COLUMN "visibility" SET DATA TYPE text;--> statement-breakpoint
ALTER TABLE "chats" ALTER COLUMN "visibility" SET DEFAULT 'private';--> statement-breakpoint
ALTER TABLE "messages" ALTER COLUMN "role" SET DATA TYPE text;--> statement-breakpoint
ALTER TABLE "chats" ADD COLUMN "path" text;--> statement-breakpoint
ALTER TABLE "chats" ADD COLUMN "share_path" text;--> statement-breakpoint
ALTER TABLE "chats" ADD COLUMN "shareable_link_id" uuid DEFAULT gen_random_uuid();--> statement-breakpoint
ALTER TABLE "chats" ADD COLUMN "updated_at" timestamp with time zone DEFAULT now() NOT NULL;--> statement-breakpoint

⚠️ Potential issue | 🟠 Major

updated_at columns default to now() but won't auto-update on row modification.

Line 59 (along with the calendar_notes and system_prompts table definitions at lines 11 and 38) defines updated_at with DEFAULT now(), which only sets the value at insert time. Without a database trigger or application-level logic to update this column on every UPDATE, it will remain stale.

Ensure the application layer (e.g., Drizzle queries) explicitly sets updatedAt on every update, or add a PostgreSQL trigger:

CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN NEW.updated_at = now(); RETURN NEW; END;
$$ LANGUAGE plpgsql;
🤖 Prompt for AI Agents
In `@drizzle/migrations/0001_sync_and_add_indexes.sql` at line 59, the UPDATE
behavior for the updated_at columns is incorrect: columns chats.updated_at,
calendar_notes.updated_at, and system_prompts.updated_at are set DEFAULT now()
but never auto-updated on row modification; fix by either (A) ensuring all
update queries in your application/Drizzle explicitly set updatedAt on updates,
or (B) add a PostgreSQL trigger function (e.g., update_updated_at) and attach
AFTER/BEFORE UPDATE triggers to each table (chats, calendar_notes,
system_prompts) so NEW.updated_at = now() is applied on every update; implement
one approach consistently and remove/adjust the DEFAULT only if you keep the
trigger-based approach.
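Wiring the function shown in the comment above to the three tables might look like this (trigger names are illustrative; EXECUTE FUNCTION requires PostgreSQL 11+, older versions use EXECUTE PROCEDURE):

```sql
-- Assumes the update_updated_at() function from the review comment exists.
CREATE TRIGGER "chats_set_updated_at"
  BEFORE UPDATE ON "chats"
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();

CREATE TRIGGER "calendar_notes_set_updated_at"
  BEFORE UPDATE ON "calendar_notes"
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();

CREATE TRIGGER "system_prompts_set_updated_at"
  BEFORE UPDATE ON "system_prompts"
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();
```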

ALTER TABLE "messages" ADD COLUMN "embedding" vector(1536);--> statement-breakpoint
ALTER TABLE "messages" ADD COLUMN "location_id" uuid;--> statement-breakpoint
ALTER TABLE "users" ADD COLUMN "email" text;--> statement-breakpoint
ALTER TABLE "users" ADD COLUMN "role" text DEFAULT 'viewer';--> statement-breakpoint
ALTER TABLE "users" ADD COLUMN "selected_model" text;--> statement-breakpoint
ALTER TABLE "users" ADD COLUMN "system_prompt" text;--> statement-breakpoint
ALTER TABLE "calendar_notes" ADD CONSTRAINT "calendar_notes_user_id_users_id_fk" FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "calendar_notes" ADD CONSTRAINT "calendar_notes_chat_id_chats_id_fk" FOREIGN KEY ("chat_id") REFERENCES "public"."chats"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
Comment on lines +1 to +67


The deleted 0001_add_calendar_notes.sql enabled RLS on calendar_notes and defined per-user policies for SELECT/INSERT/UPDATE/DELETE. The new consolidated migration recreates calendar_notes but does not enable RLS or recreate those policies.

That’s a concrete security regression if the application relied on DB-enforced row isolation (common in Supabase/Postgres setups). With the current migration, access control becomes purely application-side (or wide open if clients have direct DB access).

Suggestion

Re-add the RLS enablement and policies for calendar_notes (and confirm whether other new tables should have RLS as well):

  • ALTER TABLE "calendar_notes" ENABLE ROW LEVEL SECURITY;
  • CREATE POLICY ... statements matching the previous migration.

If this removal was intentional, add explicit replacement policies or justify it with an alternative DB security model.

Reply with "@CharlieHelps yes please" if you want me to add a commit that ports the prior RLS + policy statements into 0001_sync_and_add_indexes.sql.
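As an illustration only — the deleted migration's exact policy text is not visible in this diff, and auth.uid() assumes a Supabase-style setup — the ported statements could take roughly this shape:

```sql
ALTER TABLE "calendar_notes" ENABLE ROW LEVEL SECURITY;

-- Hypothetical per-user policies; replace with the definitions from the
-- deleted 0001_add_calendar_notes.sql.
CREATE POLICY "calendar_notes_select_own" ON "calendar_notes"
  FOR SELECT USING ("user_id" = auth.uid());

CREATE POLICY "calendar_notes_write_own" ON "calendar_notes"
  FOR ALL USING ("user_id" = auth.uid()) WITH CHECK ("user_id" = auth.uid());
```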

ALTER TABLE "chat_participants" ADD CONSTRAINT "chat_participants_chat_id_chats_id_fk" FOREIGN KEY ("chat_id") REFERENCES "public"."chats"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "chat_participants" ADD CONSTRAINT "chat_participants_user_id_users_id_fk" FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "locations" ADD CONSTRAINT "locations_user_id_users_id_fk" FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "locations" ADD CONSTRAINT "locations_chat_id_chats_id_fk" FOREIGN KEY ("chat_id") REFERENCES "public"."chats"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "system_prompts" ADD CONSTRAINT "system_prompts_user_id_users_id_fk" FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "visualizations" ADD CONSTRAINT "visualizations_user_id_users_id_fk" FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "visualizations" ADD CONSTRAINT "visualizations_chat_id_chats_id_fk" FOREIGN KEY ("chat_id") REFERENCES "public"."chats"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "calendar_notes_user_id_date_idx" ON "calendar_notes" USING btree ("user_id","date");--> statement-breakpoint
ALTER TABLE "messages" ADD CONSTRAINT "messages_location_id_locations_id_fk" FOREIGN KEY ("location_id") REFERENCES "public"."locations"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "chats_user_id_created_at_idx" ON "chats" USING btree ("user_id","created_at");--> statement-breakpoint
CREATE INDEX "messages_chat_id_created_at_idx" ON "messages" USING btree ("chat_id","created_at");--> statement-breakpoint
Comment on lines +75 to +78


Indexes on large tables can cause long locks and impact availability. CREATE INDEX without CONCURRENTLY will lock writes to the table for the duration of the index build.

If this runs in production on non-trivial data sizes, consider using CREATE INDEX CONCURRENTLY (not allowed inside a transaction) or scheduling maintenance windows.

Suggestion

If production uptime matters, consider building indexes concurrently:

  • Ensure the migration runner can execute statements without wrapping everything in a single transaction.
  • Use CREATE INDEX CONCURRENTLY ... for chats_user_id_created_at_idx, messages_chat_id_created_at_idx, and calendar_notes_user_id_date_idx.

If Drizzle’s migration runner always wraps in a transaction, document that these indexes should be applied during low-traffic periods, or create a separate manual/ops migration path.

Reply with "@CharlieHelps yes please" if you’d like me to add a commit that adjusts the SQL to CONCURRENTLY and splits it into non-transactional steps (if compatible with your migration tooling).

Comment on lines +75 to +78

🧹 Nitpick | 🔵 Trivial

Non-concurrent index creation will lock tables — plan for downtime on populated tables.

CREATE INDEX (without CONCURRENTLY) acquires a write lock on the table for the duration of the build. On production with populated chats and messages tables, this can cause downtime. Since drizzle-kit generated migrations typically run inside a transaction (and CONCURRENTLY can't be used in a transaction), this is a known trade-off — but ensure you run this migration during a maintenance window if these tables have significant data.

🤖 Prompt for AI Agents
In `@drizzle/migrations/0001_sync_and_add_indexes.sql` around lines 75-78, the
CREATE INDEX statements for "calendar_notes_user_id_date_idx",
"chats_user_id_created_at_idx" and "messages_chat_id_created_at_idx" will take a
write lock if run non-concurrently inside a transaction; to fix, move those
index-creation statements out of the transactional migration or convert them to
CREATE INDEX CONCURRENTLY and ensure the migration runner does not wrap them in
a transaction (or run them in a separate non-transactional migration file),
while leaving the ALTER TABLE "messages" ADD CONSTRAINT
"messages_location_id_locations_id_fk" as-is; update the migration packaging so
indexes run with CONCURRENTLY (or during a maintenance window) against "chats",
"messages", and "calendar_notes" to avoid production table locks.

ALTER TABLE "chats" ADD CONSTRAINT "chats_shareable_link_id_unique" UNIQUE("shareable_link_id");--> statement-breakpoint
ALTER TABLE "users" ADD CONSTRAINT "users_email_unique" UNIQUE("email");
Comment on lines +1 to +80


The migration creates several tables (e.g., calendar_notes, chat_participants, locations, system_prompts, visualizations) using plain CREATE TABLE (no IF NOT EXISTS) and then immediately adds constraints and indexes. This is risky if any environment has partial state (e.g., table exists but constraints don’t, or vice versa): the migration will fail hard on the first CREATE TABLE conflict.

Given the PR description mentions “fixes the migration history consistency,” this is exactly the scenario where you’re likely to encounter drift across environments.

Also, the prior deleted migration used CREATE TABLE IF NOT EXISTS and defensive DO $$ ... EXCEPTION WHEN duplicate_object ... blocks for constraints; that safety is lost here.

Suggestion

Consider making this migration idempotent/defensive where feasible:

  • Use CREATE TABLE IF NOT EXISTS ... for new tables.
  • For constraints/indexes, wrap in DO $$ BEGIN ... EXCEPTION WHEN duplicate_object THEN NULL; END $$; (or check catalog tables) so re-runs and drift don’t brick deploys.
  • Alternatively, explicitly document/ensure via tooling that this migration is only ever applied to a pristine DB (and enforce that in pipeline), but right now it looks like it’s intended to fix drift.

If you'd like, reply with "@CharlieHelps yes please" and I can add a commit that adjusts the migration to be defensive (IF NOT EXISTS + duplicate_object guards) while keeping the intended final schema.
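The duplicate_object guard the deleted migration used, sketched here for one of this migration's FK constraints (the same pattern applies to any ADD CONSTRAINT above; pair it with CREATE TABLE IF NOT EXISTS for the table statements):

```sql
DO $$
BEGIN
  ALTER TABLE "calendar_notes"
    ADD CONSTRAINT "calendar_notes_user_id_users_id_fk"
    FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade;
EXCEPTION
  WHEN duplicate_object THEN NULL;  -- constraint already exists; safe to skip
END $$;
```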

Comment on lines +62 to +80


users.email is added as nullable and then a unique constraint is added. In PostgreSQL, a regular UNIQUE constraint allows multiple NULLs, which might be fine—but if the product expects “at most one NULL” semantics or expects email uniqueness only when present, this needs to be consciously decided.

Also, if you intend case-insensitive uniqueness, a plain unique constraint on text won’t enforce that (e.g., A@x.com vs a@x.com).

Suggestion

Decide and encode the desired email uniqueness semantics:

  • If case-insensitive uniqueness is required: enable citext and change the column type, or use a unique index on lower(email).
  • If you only care about uniqueness when present, current behavior is OK, but consider adding a comment in migration/schema so it’s intentional.

Reply with "@CharlieHelps yes please" if you want me to add a commit implementing case-insensitive uniqueness via citext (or lower(email) index) based on your preference.
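Both options from the suggestion, sketched (the index name is illustrative; citext requires its extension):

```sql
-- Option A: case-insensitive uniqueness via an expression index.
-- NULL emails remain allowed, and multiple NULLs are still permitted.
CREATE UNIQUE INDEX "users_email_lower_idx" ON "users" (lower("email"));

-- Option B: switch the column to citext so comparisons themselves
-- become case-insensitive.
-- CREATE EXTENSION IF NOT EXISTS citext;
-- ALTER TABLE "users" ALTER COLUMN "email" TYPE citext;
```

If Option A is adopted, the plain users_email_unique constraint added in this migration becomes redundant.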

Comment on lines +1 to +80

⚠️ Potential issue | 🟠 Major

Bundling schema sync with index additions in a single migration increases deployment risk.

This migration combines new table creation, column additions, FK constraints, and index creation in one file. If any statement fails mid-way (e.g., a column already exists from a manual fix), the statement-breakpoint markers mean subsequent statements won't execute, potentially leaving the schema partially migrated without the indexes. Consider splitting the schema sync and index additions into separate migration files to reduce blast radius.

🤖 Prompt for AI Agents
In `@drizzle/migrations/0001_sync_and_add_indexes.sql` around lines 1-80, this
migration mixes schema changes (CREATE TABLEs: calendar_notes,
chat_participants, locations, system_prompts, visualizations; ALTER TABLE
additions like chats.path/shareable_link_id/updated_at,
messages.embedding/location_id, users.email/role/selected_model/system_prompt;
FK constraints such as calendar_notes_user_id_users_id_fk and
messages_location_id_locations_id_fk) with index/unique creations (CREATE INDEX
calendar_notes_user_id_date_idx, chats_user_id_created_at_idx,
messages_chat_id_created_at_idx and UNIQUE constraints
chats_shareable_link_id_unique, users_email_unique), which raises deployment
risk; split this into at least two migrations: one migration containing all
structural changes and FK constraints (the CREATE TABLE and ALTER TABLE
statements and FK constraints), and a subsequent migration that only creates
indexes and UNIQUE constraints (the CREATE INDEX statements and the
chats_shareable_link_id_unique/users_email_unique UNIQUEs), ensuring each runs
independently so partial failures won’t leave indexes out of sync.

57 changes: 35 additions & 22 deletions drizzle/migrations/meta/0000_snapshot.json
@@ -1,10 +1,8 @@
{
"id": "0d46923a-5423-4b73-91cb-5f46741e7ff9",
"prevId": "00000000-0000-0000-0000-000000000000",
"version": "5",
"dialect": "pg",
"version": "7",
"dialect": "postgresql",
"tables": {
"chats": {
"public.chats": {
"name": "chats",
"schema": "",
"columns": {
@@ -48,21 +46,24 @@
"chats_user_id_users_id_fk": {
"name": "chats_user_id_users_id_fk",
"tableFrom": "chats",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"tableTo": "users",
"columnsTo": [
"id"
],
"onDelete": "cascade",
"onUpdate": "no action"
"onUpdate": "no action",
"onDelete": "cascade"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {}
"uniqueConstraints": {},
"policies": {},
"isRLSEnabled": false,
"checkConstraints": {}
},
"messages": {
"public.messages": {
"name": "messages",
"schema": "",
"columns": {
@@ -110,34 +111,37 @@
"messages_chat_id_chats_id_fk": {
"name": "messages_chat_id_chats_id_fk",
"tableFrom": "messages",
"tableTo": "chats",
"columnsFrom": [
"chat_id"
],
"tableTo": "chats",
"columnsTo": [
"id"
],
"onDelete": "cascade",
"onUpdate": "no action"
"onUpdate": "no action",
"onDelete": "cascade"
},
"messages_user_id_users_id_fk": {
"name": "messages_user_id_users_id_fk",
"tableFrom": "messages",
"tableTo": "users",
"columnsFrom": [
"user_id"
],
"tableTo": "users",
"columnsTo": [
"id"
],
"onDelete": "cascade",
"onUpdate": "no action"
"onUpdate": "no action",
"onDelete": "cascade"
}
},
"compositePrimaryKeys": {},
"uniqueConstraints": {}
"uniqueConstraints": {},
"policies": {},
"isRLSEnabled": false,
"checkConstraints": {}
},
"users": {
"public.users": {
"name": "users",
"schema": "",
"columns": {
@@ -152,14 +156,23 @@
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {}
"uniqueConstraints": {},
"policies": {},
"isRLSEnabled": false,
"checkConstraints": {}
}
},
"enums": {},
"schemas": {},
"_meta": {
"columns": {},
"schemas": {},
"tables": {}
}
"tables": {},
"columns": {}
},
"id": "0d46923a-5423-4b73-91cb-5f46741e7ff9",
"prevId": "00000000-0000-0000-0000-000000000000",
"sequences": {},
"policies": {},
"views": {},
"roles": {}
}