42 changes: 37 additions & 5 deletions .github/workflows/docker-publish.yml
@@ -2,16 +2,24 @@ name: Docker Publish

on:
workflow_dispatch:
inputs:
image_tag:
description: Optional extra Docker tag to push, for example 0.1.4
required: false
type: string
push:
branches:
- main
tags:
- 'docker-v*'

jobs:
publish:
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
DOCKERHUB_NAMESPACE: ${{ secrets.DOCKERHUB_NAMESPACE || secrets.DOCKERHUB_USERNAME }}
strategy:
fail-fast: false
matrix:
@@ -30,19 +38,43 @@ jobs:
uses: docker/setup-buildx-action@v3

- name: Log in to Docker Hub
-if: ${{ env.DOCKERHUB_USERNAME != '' && env.DOCKERHUB_TOKEN != '' }}
+if: ${{ env.DOCKERHUB_USERNAME != '' && env.DOCKERHUB_TOKEN != '' && env.DOCKERHUB_NAMESPACE != '' }}
uses: docker/login-action@v3
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ env.DOCKERHUB_TOKEN }}

- name: Resolve Docker tags
if: ${{ env.DOCKERHUB_USERNAME != '' && env.DOCKERHUB_TOKEN != '' && env.DOCKERHUB_NAMESPACE != '' }}
id: tags
env:
IMAGE_TAG_INPUT: ${{ inputs.image_tag || '' }}
run: |
set -eu
image="${DOCKERHUB_NAMESPACE}/${{ matrix.image }}"
short_sha="$(printf '%s' "$GITHUB_SHA" | cut -c1-12)"
{
printf 'tags<<EOF\n'
printf '%s:sha-%s\n' "$image" "$short_sha"
if [ "$GITHUB_REF_TYPE" = "branch" ] && [ "$GITHUB_REF_NAME" = "main" ]; then
printf '%s:latest\n' "$image"
fi
if [ "$GITHUB_REF_TYPE" = "tag" ]; then
case "$GITHUB_REF_NAME" in
docker-v*) printf '%s:%s\n' "$image" "${GITHUB_REF_NAME#docker-v}" ;;
esac
fi
if [ -n "$IMAGE_TAG_INPUT" ]; then
printf '%s:%s\n' "$image" "$IMAGE_TAG_INPUT"
fi
printf 'EOF\n'
} >> "$GITHUB_OUTPUT"
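
The `Resolve Docker tags` step above emits a multiline `tags` output via the `tags<<EOF` heredoc form of `GITHUB_OUTPUT`. Its tag-resolution logic can be exercised locally as plain shell; the env values below are assumptions for illustration only:

```shell
# Standalone sketch of the tag-resolution logic above.
# All values here are assumed for illustration, not taken from a real run.
DOCKERHUB_NAMESPACE=turnkeyai
MATRIX_IMAGE=involute-server
GITHUB_SHA=0123456789abcdef0123456789abcdef01234567
GITHUB_REF_TYPE=tag
GITHUB_REF_NAME=docker-v0.1.4
IMAGE_TAG_INPUT=""

image="${DOCKERHUB_NAMESPACE}/${MATRIX_IMAGE}"
# 12-character short SHA for the immutable per-commit tag
short_sha="$(printf '%s' "$GITHUB_SHA" | cut -c1-12)"
tags="${image}:sha-${short_sha}"
# latest only on pushes to main
if [ "$GITHUB_REF_TYPE" = "branch" ] && [ "$GITHUB_REF_NAME" = "main" ]; then
  tags="${tags}
${image}:latest"
fi
# version tag only for docker-v* git tags
case "$GITHUB_REF_TYPE:$GITHUB_REF_NAME" in
  tag:docker-v*) tags="${tags}
${image}:${GITHUB_REF_NAME#docker-v}" ;;
esac
# optional extra tag from workflow_dispatch input
if [ -n "$IMAGE_TAG_INPUT" ]; then
  tags="${tags}
${image}:${IMAGE_TAG_INPUT}"
fi
printf '%s\n' "$tags"
```

With these assumed values it prints the immutable `sha-` tag followed by the `0.1.4` version tag, one per line, matching what the workflow feeds to `docker/build-push-action`.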

- name: Build and push
-if: ${{ env.DOCKERHUB_USERNAME != '' && env.DOCKERHUB_TOKEN != '' }}
+if: ${{ env.DOCKERHUB_USERNAME != '' && env.DOCKERHUB_TOKEN != '' && env.DOCKERHUB_NAMESPACE != '' }}
uses: docker/build-push-action@v6
with:
context: .
push: true
target: ${{ matrix.target }}
-tags: |
-${{ env.DOCKERHUB_USERNAME }}/${{ matrix.image }}:latest
-${{ env.DOCKERHUB_USERNAME }}/${{ matrix.image }}:sha-${{ github.sha }}
+tags: ${{ steps.tags.outputs.tags }}
51 changes: 47 additions & 4 deletions README.md
@@ -1,5 +1,9 @@
# Involute

[![npm version](https://img.shields.io/npm/v/@turnkeyai/involute?label=npm)](https://www.npmjs.com/package/@turnkeyai/involute)
[![CI](https://github.com/fakechris/Involute/actions/workflows/ci.yml/badge.svg)](https://github.com/fakechris/Involute/actions/workflows/ci.yml)
[![Docker Publish](https://github.com/fakechris/Involute/actions/workflows/docker-publish.yml/badge.svg)](https://github.com/fakechris/Involute/actions/workflows/docker-publish.yml)

An open-source epic / issue / team / workspace project-management system for one-person teams.

Involute bundles a GraphQL API, a kanban web app, and a CLI that can export one team snapshot, import it into Involute, verify the result, and then let you visually accept it in the board UI.
Expand Down Expand Up @@ -135,6 +139,13 @@ Stop the stack with:
pnpm compose:down
```

If you want to run the published Docker Hub images instead of building from source, use:

```bash
INVOLUTE_IMAGE_NAMESPACE=turnkeyai INVOLUTE_IMAGE_TAG=latest pnpm compose:pull
INVOLUTE_IMAGE_NAMESPACE=turnkeyai INVOLUTE_IMAGE_TAG=latest pnpm compose:pull:up
```

## VPS deployment (fresh install)

This is the recommended first production path: one VPS, Docker Compose, Postgres, the Node API, the static web container, and Caddy terminating HTTPS on a single domain.
Expand Down Expand Up @@ -426,16 +437,48 @@ The Playwright suite verifies the core board lifecycle: create, update, comment,

## Docker images

-This repo ships one multi-target `Dockerfile` with `server`, `web-dev`, `web`, and `cli` targets. The Docker Hub publish workflow expects these secrets:
+This repo ships one multi-target `Dockerfile` with `server`, `web-dev`, `web`, and `cli` targets.

Published images:

```bash
docker pull turnkeyai/involute-server:latest
docker pull turnkeyai/involute-web:latest
docker pull turnkeyai/involute-cli:latest
```

Run the compose stack from published images:

```bash
INVOLUTE_IMAGE_NAMESPACE=turnkeyai INVOLUTE_IMAGE_TAG=latest \
docker compose -f docker-compose.images.yml up -d db server web
```

Production compose can use the same published images:

```bash
INVOLUTE_IMAGE_NAMESPACE=turnkeyai INVOLUTE_IMAGE_TAG=latest \
docker compose --env-file .env.production \
-f docker-compose.prod.images.yml up -d
```

Image tags:

- `latest` — latest successful push from `main`
- `sha-<short-sha>` — immutable commit image
- `<version>` — pushed from `docker-v<version>` tags or `workflow_dispatch` input
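
Because the `sha-<short-sha>` tags are immutable, a deployment can be pinned to one rather than tracking the mutable `latest`. A minimal sketch, assuming a hypothetical commit SHA:

```shell
# Hypothetical pin: derive the immutable image tag from a known commit SHA.
# The SHA below is an assumption for illustration only.
COMMIT_SHA=0123456789abcdef0123456789abcdef01234567
INVOLUTE_IMAGE_TAG="sha-$(printf '%s' "$COMMIT_SHA" | cut -c1-12)"
SERVER_IMAGE="turnkeyai/involute-server:${INVOLUTE_IMAGE_TAG}"
echo "$SERVER_IMAGE"
# The resolved tag can then be exported for the compose commands above, e.g.:
#   INVOLUTE_IMAGE_TAG="$INVOLUTE_IMAGE_TAG" pnpm compose:prod:pull
```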

The Docker Hub publish workflow expects these secrets:

- `DOCKERHUB_USERNAME`
- `DOCKERHUB_TOKEN`
- `DOCKERHUB_NAMESPACE` — optional; defaults to `DOCKERHUB_USERNAME`

When they are set, `.github/workflows/docker-publish.yml` pushes:

-- `${DOCKERHUB_USERNAME}/involute-server`
-- `${DOCKERHUB_USERNAME}/involute-web`
-- `${DOCKERHUB_USERNAME}/involute-cli`
+- `${DOCKERHUB_NAMESPACE}/involute-server`
+- `${DOCKERHUB_NAMESPACE}/involute-web`
+- `${DOCKERHUB_NAMESPACE}/involute-cli`

The published `involute-web` image is a static production build. It bakes `VITE_INVOLUTE_GRAPHQL_URL` at build time, but it does not bake an auth token into the image. For local development and acceptance, the compose stack remains the reference runtime path and should stay green before publishing.

110 changes: 110 additions & 0 deletions docker-compose.images.yml
@@ -0,0 +1,110 @@
services:
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: involute
POSTGRES_PASSWORD: involute
POSTGRES_USER: involute
healthcheck:
test: ["CMD-SHELL", "pg_isready -U involute -d involute"]
interval: 5s
timeout: 5s
retries: 20
ports:
- "${DB_BIND_ADDRESS:-127.0.0.1}:${DB_PORT:-5434}:5432"
volumes:
- postgres-data:/var/lib/postgresql/data

server-init:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-server:${INVOLUTE_IMAGE_TAG:-latest}
depends_on:
db:
condition: service_healthy
entrypoint:
- /bin/sh
- -lc
command:
- >
set -e;
pnpm --filter @turnkeyai/involute-server exec prisma migrate deploy;
if [ "${SEED_DATABASE:-true}" = "true" ]; then
pnpm --filter @turnkeyai/involute-server exec prisma db seed;
fi;
if [ -n "${ADMIN_EMAIL_ALLOWLIST:-}" ]; then
pnpm --filter @turnkeyai/involute-server exec tsx prisma/bootstrap-admin.ts;
fi
environment:
ADMIN_EMAIL_ALLOWLIST: ${ADMIN_EMAIL_ALLOWLIST:-}
DATABASE_URL: postgresql://involute:involute@db:5432/involute?schema=public
SEED_DATABASE: ${SEED_DATABASE:-true}
SEED_DEFAULT_ADMIN: ${SEED_DEFAULT_ADMIN:-false}
restart: "no"

server:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-server:${INVOLUTE_IMAGE_TAG:-latest}
depends_on:
db:
condition: service_healthy
server-init:
condition: service_completed_successfully
environment:
ADMIN_EMAIL_ALLOWLIST: ${ADMIN_EMAIL_ALLOWLIST:-}
ALLOW_ADMIN_FALLBACK: ${ALLOW_ADMIN_FALLBACK:-false}
DATABASE_URL: postgresql://involute:involute@db:5432/involute?schema=public
AUTH_TOKEN: ${AUTH_TOKEN:-changeme-set-your-token}
VIEWER_ASSERTION_SECRET: ${VIEWER_ASSERTION_SECRET:-compose-viewer-secret}
GOOGLE_OAUTH_ADMIN_EMAILS: ${GOOGLE_OAUTH_ADMIN_EMAILS:-}
GOOGLE_OAUTH_CLIENT_ID: ${GOOGLE_OAUTH_CLIENT_ID:-}
GOOGLE_OAUTH_CLIENT_SECRET: ${GOOGLE_OAUTH_CLIENT_SECRET:-}
GOOGLE_OAUTH_REDIRECT_URI: ${GOOGLE_OAUTH_REDIRECT_URI:-http://localhost:4200/auth/google/callback}
APP_ORIGIN: ${APP_ORIGIN:-http://localhost:4201}
PORT: 4200
healthcheck:
test:
[
"CMD",
"node",
"-e",
"fetch('http://127.0.0.1:4200/health').then((response)=>process.exit(response.ok?0:1)).catch(()=>process.exit(1))",
]
interval: 5s
timeout: 5s
retries: 20
start_period: 10s
ports:
- "${SERVER_BIND_ADDRESS:-0.0.0.0}:4200:4200"

web:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-web:${INVOLUTE_IMAGE_TAG:-latest}
depends_on:
server:
condition: service_healthy
environment:
INTERNAL_SERVER_ORIGIN: ${INTERNAL_SERVER_ORIGIN:-http://server:4200}
WEB_PROXY_AUTHORIZATION: ${WEB_PROXY_AUTHORIZATION:-}
healthcheck:
test: ["CMD-SHELL", "curl -fsS http://127.0.0.1:4201 >/dev/null || exit 1"]
interval: 5s
timeout: 5s
retries: 20
start_period: 10s
ports:
- "${WEB_BIND_ADDRESS:-0.0.0.0}:4201:4201"

cli:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-cli:${INVOLUTE_IMAGE_TAG:-latest}
depends_on:
db:
condition: service_healthy
server:
condition: service_healthy
Comment on lines +94 to +100
medium

The `cli` service is a one-off tool rather than a core part of the long-running stack. Adding a `tools` profile prevents it from starting and immediately exiting during a general `docker compose up -d`, keeping the stack status cleaner.

  cli:
    image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-cli:${INVOLUTE_IMAGE_TAG:-latest}
    profiles: ["tools"]
    depends_on:
      db:
        condition: service_healthy
      server:
        condition: service_healthy

environment:
AUTH_TOKEN: ${AUTH_TOKEN:-changeme-set-your-token}
DATABASE_URL: postgresql://involute:involute@db:5432/involute?schema=public
INVOLUTE_CONFIG_PATH: /tmp/involute-config.json
VIEWER_ASSERTION_SECRET: ${VIEWER_ASSERTION_SECRET:-compose-viewer-secret}
volumes:
- ./.tmp:/exports

volumes:
postgres-data:
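
Every image reference in this compose file leans on `${VAR:-default}` interpolation, which for this form mirrors POSIX shell parameter expansion. The fallback behavior can be checked in plain shell:

```shell
# Compose-style ${VAR:-default} fallbacks, reproduced in plain shell.
# Clear the variables first so the defaults are what we observe.
unset INVOLUTE_IMAGE_REGISTRY INVOLUTE_IMAGE_NAMESPACE INVOLUTE_IMAGE_TAG
default_image="${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-server:${INVOLUTE_IMAGE_TAG:-latest}"

# Override only the tag; registry and namespace keep their defaults.
INVOLUTE_IMAGE_TAG=sha-0123456789ab
pinned_image="${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-server:${INVOLUTE_IMAGE_TAG:-latest}"

echo "$default_image"
echo "$pinned_image"
```

Note that Compose also supports forms shell does not interpolate the same way inside YAML (such as `${VAR:?err}` used in the production file below to force required secrets), so this sketch only illustrates the `:-` default case.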
124 changes: 124 additions & 0 deletions docker-compose.prod.images.yml
@@ -0,0 +1,124 @@
services:
db:
image: postgres:16-alpine
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-involute}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?Set POSTGRES_PASSWORD in .env.production}
POSTGRES_USER: ${POSTGRES_USER:-involute}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-involute} -d ${POSTGRES_DB:-involute}"]
interval: 10s
timeout: 5s
retries: 20
volumes:
- postgres-prod-data:/var/lib/postgresql/data

server-init:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-server:${INVOLUTE_IMAGE_TAG:-latest}
restart: "no"
depends_on:
db:
condition: service_healthy
entrypoint:
- /bin/sh
- -lc
command:
- >
set -e;
pnpm --filter @turnkeyai/involute-server exec prisma migrate deploy;
if [ "${SEED_DATABASE:-false}" = "true" ]; then
pnpm --filter @turnkeyai/involute-server exec prisma db seed;
fi;
if [ -n "${ADMIN_EMAIL_ALLOWLIST:-}" ]; then
pnpm --filter @turnkeyai/involute-server exec tsx prisma/bootstrap-admin.ts;
fi
environment:
ADMIN_EMAIL_ALLOWLIST: ${ADMIN_EMAIL_ALLOWLIST:-}
GOOGLE_OAUTH_ADMIN_EMAILS: ${ADMIN_EMAIL_ALLOWLIST:-${GOOGLE_OAUTH_ADMIN_EMAILS:-}}
DATABASE_URL: postgresql://${POSTGRES_USER:-involute}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB:-involute}?schema=public
SEED_DATABASE: ${SEED_DATABASE:-false}
SEED_DEFAULT_ADMIN: "false"

server:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-server:${INVOLUTE_IMAGE_TAG:-latest}
restart: unless-stopped
depends_on:
db:
condition: service_healthy
server-init:
condition: service_completed_successfully
environment:
ADMIN_EMAIL_ALLOWLIST: ${ADMIN_EMAIL_ALLOWLIST:-}
GOOGLE_OAUTH_ADMIN_EMAILS: ${ADMIN_EMAIL_ALLOWLIST:-${GOOGLE_OAUTH_ADMIN_EMAILS:-}}
ALLOW_ADMIN_FALLBACK: "false"
APP_ORIGIN: ${APP_ORIGIN:?Set APP_ORIGIN in .env.production}
AUTH_TOKEN: ${AUTH_TOKEN:?Set AUTH_TOKEN in .env.production}
DATABASE_URL: postgresql://${POSTGRES_USER:-involute}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB:-involute}?schema=public
GOOGLE_OAUTH_CLIENT_ID: ${GOOGLE_OAUTH_CLIENT_ID:-}
GOOGLE_OAUTH_CLIENT_SECRET: ${GOOGLE_OAUTH_CLIENT_SECRET:-}
GOOGLE_OAUTH_REDIRECT_URI: ${GOOGLE_OAUTH_REDIRECT_URI:-}
PORT: 4200
SESSION_TTL_SECONDS: ${SESSION_TTL_SECONDS:-2592000}
VIEWER_ASSERTION_SECRET: ${VIEWER_ASSERTION_SECRET:?Set VIEWER_ASSERTION_SECRET in .env.production}
healthcheck:
test:
[
"CMD",
"node",
"-e",
"fetch('http://127.0.0.1:4200/health').then((response)=>process.exit(response.ok?0:1)).catch(()=>process.exit(1))",
]
interval: 10s
timeout: 5s
retries: 20
start_period: 10s

web:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-web:${INVOLUTE_IMAGE_TAG:-latest}
environment:
INTERNAL_SERVER_ORIGIN: http://server:4200
WEB_PROXY_AUTHORIZATION: ""
restart: unless-stopped
depends_on:
server:
condition: service_healthy
Comment on lines +77 to +85
medium

The `web` service is missing a healthcheck in the production compose file, which is inconsistent with the local `docker-compose.images.yml`. Adding one ensures that dependent services like Caddy only start routing traffic once Nginx is actually ready.

  web:
    image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-web:${INVOLUTE_IMAGE_TAG:-latest}
    environment:
      INTERNAL_SERVER_ORIGIN: http://server:4200
      WEB_PROXY_AUTHORIZATION: ""
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://127.0.0.1:4201 >/dev/null || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 20
      start_period: 10s
    restart: unless-stopped
    depends_on:
      server:
        condition: service_healthy


caddy:
image: caddy:2.10-alpine
restart: unless-stopped
depends_on:
server:
condition: service_healthy
web:
condition: service_started
Comment on lines +93 to +94
medium

With the addition of a healthcheck to the `web` service, Caddy should wait for the service to be healthy rather than just started.

      web:
        condition: service_healthy

environment:
APP_DOMAIN: ${APP_DOMAIN:?Set APP_DOMAIN in .env.production}
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy-data:/data
- caddy-config:/config

cli:
image: ${INVOLUTE_IMAGE_REGISTRY:-docker.io}/${INVOLUTE_IMAGE_NAMESPACE:-turnkeyai}/involute-cli:${INVOLUTE_IMAGE_TAG:-latest}
profiles: ["tools"]
depends_on:
db:
condition: service_healthy
server:
condition: service_healthy
environment:
AUTH_TOKEN: ${AUTH_TOKEN:?Set AUTH_TOKEN in .env.production}
DATABASE_URL: postgresql://${POSTGRES_USER:-involute}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB:-involute}?schema=public
INVOLUTE_CONFIG_PATH: /tmp/involute-config.json
VIEWER_ASSERTION_SECRET: ${VIEWER_ASSERTION_SECRET:?Set VIEWER_ASSERTION_SECRET in .env.production}
volumes:
- ./.tmp:/exports

volumes:
postgres-prod-data:
caddy-data:
caddy-config:
4 changes: 4 additions & 0 deletions package.json
@@ -13,6 +13,10 @@
"compose:prod:build": "docker compose --env-file .env.production -f docker-compose.prod.yml build",
"compose:prod:down": "docker compose --env-file .env.production -f docker-compose.prod.yml down --remove-orphans",
"compose:prod:up": "docker compose --env-file .env.production -f docker-compose.prod.yml up --build -d",
"compose:pull": "docker compose -f docker-compose.images.yml pull server web cli",
"compose:pull:up": "docker compose -f docker-compose.images.yml up -d db server web",
"compose:prod:pull": "docker compose --env-file .env.production -f docker-compose.prod.images.yml pull server web cli",
"compose:prod:pull:up": "docker compose --env-file .env.production -f docker-compose.prod.images.yml up -d",
Comment on lines +16 to +19
medium

The pull and up scripts can be simplified by removing explicit service names. `docker compose pull` will automatically pull all images defined in the file (including infrastructure like Postgres and Caddy), and `docker compose up -d` will start the core stack while respecting service profiles (e.g., skipping the `cli` tool if it has the `tools` profile).

Suggested change
"compose:pull": "docker compose -f docker-compose.images.yml pull server web cli",
"compose:pull:up": "docker compose -f docker-compose.images.yml up -d db server web",
"compose:prod:pull": "docker compose --env-file .env.production -f docker-compose.prod.images.yml pull server web cli",
"compose:prod:pull:up": "docker compose --env-file .env.production -f docker-compose.prod.images.yml up -d",
"compose:pull": "docker compose -f docker-compose.images.yml pull",
"compose:pull:up": "docker compose -f docker-compose.images.yml up -d",
"compose:prod:pull": "docker compose --env-file .env.production -f docker-compose.prod.images.yml pull",
"compose:prod:pull:up": "docker compose --env-file .env.production -f docker-compose.prod.images.yml up -d",

"docker:build": "docker compose build",
"e2e": "playwright test",
"e2e:headed": "playwright test --headed",