A marketplace connecting book cover designers/illustrators with publishers and authors.
Tech stack: FastAPI · SQLAlchemy (async) · Alembic · React (Vite) · AWS ECS Fargate · Aurora PostgreSQL · Cognito · S3/CloudFront
Prerequisites:
- Docker + Docker Compose
Start the stack:

```sh
cd infrastructure/docker
docker compose up
```

Services:
| Service | URL |
|---|---|
| Frontend | http://localhost:5173 |
| Backend API | http://localhost:8000 |
| PostgreSQL | localhost:5432 |
Local dev uses header-based auth (no Cognito). The frontend automatically sends `X-Dev-Role` and `X-Dev-User-Id` headers. The default dev user is:
- Email: admin@localhost
- Role: admin
- User ID: dev-admin-001
This user is created by the seed script when the backend starts. No login required.
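The header-with-fallback behavior can be sketched as follows. This is an illustrative analogue, not the backend's actual API: `resolve_dev_identity` and `DEFAULT_DEV_USER` are assumed names.

```python
# Illustrative sketch of header-based dev auth resolution; names here are
# assumptions, not the real LocalAuthService implementation.
from dataclasses import dataclass

# Default dev user, matching the seeded admin described above.
DEFAULT_DEV_USER = {"user_id": "dev-admin-001", "role": "admin"}

@dataclass
class DevIdentity:
    user_id: str
    role: str

def resolve_dev_identity(headers: dict) -> DevIdentity:
    """Read X-Dev-* headers, falling back to the default dev user."""
    return DevIdentity(
        user_id=headers.get("X-Dev-User-Id", DEFAULT_DEV_USER["user_id"]),
        role=headers.get("X-Dev-Role", DEFAULT_DEV_USER["role"]),
    )
```

With no headers present, every request resolves to the seeded admin, which is why no login is required locally.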
If auth behaves unexpectedly (wrong role, missing user), reset with:
```sh
docker compose down -v   # drops the postgres volume
docker compose up
```

The backend runs migrations and re-seeds on every startup, so a fresh volume will have the correct state.
```
Browser → CloudFront → S3 (frontend static files)
        → ALB → ECS Fargate (backend API)
                → Aurora PostgreSQL
                → S3 (media uploads)
                → Cognito (auth, prod only)
```
- Auth (production): AWS Cognito PKCE OAuth2 → JWT validated by the backend
- Auth (local): `X-Dev-*` request headers → `LocalAuthService` in the backend
- Storage: local filesystem in dev; S3 + CloudFront in production
- Migrations: Alembic, run automatically at container startup via `entrypoint.sh`
- Seeding: `backend/app/seed.py` (idempotent, runs at every startup)
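"Idempotent" here means re-running the seed never duplicates rows. A toy sketch of that property, using a dict in place of the users table (the real `backend/app/seed.py` uses async SQLAlchemy):

```python
# Toy illustration of idempotent seeding: the dict stands in for the users
# table, and repeated runs are no-ops because the insert is conditional.
def seed_admin(users: dict) -> None:
    users.setdefault("dev-admin-001", {"email": "admin@localhost", "role": "admin"})
```

Because the insert is conditional on the key being absent, running the seed at every container startup is safe.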
Push to `main` → GitHub Actions (`.github/workflows/deploy.yml`):
- Builds the Docker image from `backend/Dockerfile`
- Pushes it to ECR
- Triggers an ECS rolling deploy
- Waits for service stability
OIDC is used for AWS auth — no long-lived credentials in secrets.
These are the things that have caused production outages. Read carefully before touching them.
```sh
# .github/workflows/deploy.yml — DO NOT CHANGE THIS
docker build \
  -f backend/Dockerfile \   # ← must be backend/Dockerfile
  ...
  ./backend
```

`backend/Dockerfile` runs `entrypoint.sh`, which:
- Assembles `DATABASE_URL` from Secrets Manager env vars
- Runs `alembic upgrade head`
- Runs `python -m app.seed`
- Starts uvicorn
`infrastructure/docker/Dockerfile` is a stale file that does none of this. If CI ever uses it, production falls back to SQLite and all data operations fail.
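For illustration, the entrypoint's first step (assembling the database URL from discrete secret values) might look like this in Python; the variable names (`DB_USER`, `DB_HOST`, ...) are assumptions, not the actual Secrets Manager keys:

```python
# Hypothetical assembly of DATABASE_URL from discrete secret components;
# the env var names are assumptions, not the real entrypoint.sh contract.
def build_database_url(env: dict) -> str:
    """Build an async SQLAlchemy PostgreSQL URL from individual parts."""
    return (
        f"postgresql+asyncpg://{env['DB_USER']}:{env['DB_PASSWORD']}"
        f"@{env['DB_HOST']}:{env['DB_PORT']}/{env['DB_NAME']}"
    )
```

If any of these values is missing at startup, URL assembly fails fast, which is preferable to silently falling back to SQLite.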
```yaml
# infrastructure/docker/docker-compose.yml — DO NOT CHANGE THIS
backend:
  build:
    context: ../../backend
    dockerfile: Dockerfile   # resolves to backend/Dockerfile
```

Same reason: `backend/Dockerfile` has the entrypoint that runs migrations and seed.
Never edit a migration file that has already been applied to any database. The migration chain is:
```
initial → a1b2c3d4e5f6 → c3d4e5f6a7b8 → c3f8a2b1d9e5 (head)
```
To add a schema change, always create a new migration:
```sh
cd backend
alembic revision --autogenerate -m "describe your change"
# Review the generated file, then:
alembic upgrade head
```

Production auth flow (Cognito):
- Frontend redirects to Cognito hosted UI
- Cognito issues JWT
- Backend validates JWT via JWKS endpoint
- `_ensure_db_user()` maps Cognito `sub` → internal `users.id` (creates a row on first login)
- DB role is authoritative: only admins can change it
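The create-on-first-login mapping can be sketched like this; the dict stands in for the `users` table, and the function is an illustrative analogue of `_ensure_db_user()`, not its actual code:

```python
# Illustrative analogue of _ensure_db_user(): map an external identity
# (Cognito sub or dev user ID) to an internal users.id, creating the row
# on first login. A dict stands in for the users table.
import uuid

def ensure_db_user(users: dict, external_id: str, role: str = "hiring_user") -> str:
    for internal_id, row in users.items():
        if row["external_id"] == external_id:
            return internal_id  # existing user: the stored DB role stays authoritative
    internal_id = str(uuid.uuid4())
    users[internal_id] = {"external_id": external_id, "role": role}
    return internal_id
```

The key property is that repeated logins with the same external identity resolve to the same internal row, so role changes made in the DB persist across logins.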
Local dev auth flow:
- Frontend sends `X-Dev-Role` / `X-Dev-User-Id` headers (from localStorage or defaults)
- `LocalAuthService` reads the headers, falling back to config defaults
- `_ensure_db_user()` maps the dev user ID → internal `users.id` (finds the seeded user)
- DB role is still authoritative; the seeded admin user has `role=admin`
To switch roles in local dev, open the browser console and run:

```js
localStorage.setItem('dev_role', 'hiring_user');
localStorage.setItem('dev_user_id', 'dev-hiring-001');
// then refresh the page
```

**Backend won't start / migration errors**
```sh
docker compose logs backend
# If schema is out of sync:
docker compose down -v && docker compose up
```

**Auth errors / wrong role in local dev**
```sh
docker compose down -v && docker compose up
```

This resets the DB so the seed creates a clean admin user.
**Production shows wrong data / SQLite errors in logs**
Check that `deploy.yml` is using `backend/Dockerfile` (not `infrastructure/docker/Dockerfile`). If it was changed, revert it and push.
**ECS service not updating after push**

Check the GitHub Actions tab for deploy status. The deploy waits for service stability; if a container is crashing, the deploy will time out with an error.