Durable workflow runtime and memory system for AI agents.
ctxledger is a remote MCP server for teams that want agent work to be:
- resumable across sessions
- durable across process restarts
- recorded in PostgreSQL as canonical state
- searchable and inspectable later
- observable through CLI and Grafana
It provides:
- workflow lifecycle control
- automatic and explicit memory capture
- bounded historical recall
- file-work metadata capture
- searchable file-work records linked to work loops
- PostgreSQL-backed persistence
- HTTPS-friendly local deployment
- operator-facing observability
- optional derived Apache AGE graph support in the default local stack
The default local setup gives you:
- MCP endpoint: `https://localhost:8443/mcp`
- Grafana: `http://localhost:3000`
- authenticated HTTPS access
- PostgreSQL 17 with the repository-owned local image path
- Docker Compose startup for the full local stack
The Azure large deployment path gives you:
- MCP endpoint: Azure Container Apps HTTPS endpoint
- Azure Database for PostgreSQL Flexible Server with Azure OpenAI-backed `azure_ai` bootstrap
- Azure Container Registry remote image build
- Azure Developer CLI (`azd`) one-command deployment flow
- generated MCP client snippets under `.azure/mcp-snippets`
```shell
git clone https://github.com/rioriost/ctxledger.git
cd ctxledger
```
ctxledger expects local certificates for localhost.
A practical setup with mkcert:
```shell
mkdir -p docker/traefik/certs
mkcert -install
mkcert -cert-file docker/traefik/certs/localhost.crt -key-file docker/traefik/certs/localhost.key localhost 127.0.0.1 ::1
```
```shell
cp .env.example .env
```
The fastest way to get a usable local setup is:
- copy `.env.example` to `.env`
- populate these placeholders:
  - `CTXLEDGER_SMALL_AUTH_TOKEN`
  - `CTXLEDGER_GRAFANA_ADMIN_PASSWORD`
  - `CTXLEDGER_GRAFANA_POSTGRES_PASSWORD`
- then add `OPENAI_API_KEY` in your editor
Run the helper script once:
```shell
python scripts/populate_env_placeholders.py .env --mode local
```
The generated Grafana admin password is intentional: it reliably includes upper-case, lower-case, digits, and punctuation so it satisfies Grafana password policy.
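The guarantee described above can be sketched as follows. This is an illustrative Python snippet, not the actual logic of `scripts/populate_env_placeholders.py`: it shows one way to generate a password that always contains at least one character from each required class.

```python
import secrets
import string

def generate_grafana_password(length: int = 20) -> str:
    """Generate a password guaranteed to contain an upper-case letter,
    a lower-case letter, a digit, and a punctuation character."""
    classes = [
        string.ascii_uppercase,
        string.ascii_lowercase,
        string.digits,
        "!@#$%^&*-_",  # punctuation subset; an assumption, not the script's exact set
    ]
    # Pick one guaranteed character from each class, then fill the rest
    # from the combined pool and shuffle so positions are unpredictable.
    chars = [secrets.choice(c) for c in classes]
    pool = "".join(classes)
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```

Sampling one character per class before filling the remainder is what makes the policy guarantee hold regardless of random draws.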
If you use envrcctl, use the shell helper script to store the local ctxledger secrets first:
```shell
sh scripts/bootstrap_envrcctl_secrets.sh
```
OPENAI_API_KEY is required for the default local stack because embeddings are enabled.
Open .env and add your key:
```dotenv
OPENAI_API_KEY=replace-with-your-openai-api-key
CTXLEDGER_SMALL_AUTH_TOKEN=generated-value
CTXLEDGER_GRAFANA_ADMIN_USER=admin
CTXLEDGER_GRAFANA_ADMIN_PASSWORD=generated-value
CTXLEDGER_GRAFANA_POSTGRES_USER=ctxledger_grafana
CTXLEDGER_GRAFANA_POSTGRES_PASSWORD=generated-value
```
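A quick way to catch a half-populated `.env` before starting the stack is a small validation pass. This is a hypothetical helper, not part of the repository; the key names come from the snippet above, and the `replace-with` placeholder check mirrors the example value.

```python
REQUIRED_KEYS = {
    "OPENAI_API_KEY",
    "CTXLEDGER_SMALL_AUTH_TOKEN",
    "CTXLEDGER_GRAFANA_ADMIN_PASSWORD",
    "CTXLEDGER_GRAFANA_POSTGRES_PASSWORD",
}

def missing_env_keys(text: str) -> set:
    """Return required keys that are absent, empty, or still a placeholder."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return {
        k for k in REQUIRED_KEYS
        if not values.get(k) or values[k].startswith("replace-with")
    }
```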
If you use envrcctl, store your real OPENAI_API_KEY in envrcctl too.
```shell
envrcctl secret set --account 'ctxledger_openai_api_key' OPENAI_API_KEY
```
The `.rules` file is required to use ctxledger effectively.
Copy it into the project directory where you use your AI agent for development, and use it there as-is.
```shell
docker compose --env-file .env -f docker/docker-compose.yml -f docker/docker-compose.small-auth.yml up -d --build
```
If you use envrcctl, run:
```shell
envrcctl exec -- docker compose -f docker/docker-compose.yml -f docker/docker-compose.small-auth.yml up -d --build
```
Without auth, the endpoint should reject the request:
```shell
python scripts/mcp_http_smoke.py --base-url https://localhost:8443 --expect-http-status 401 --expect-auth-failure --insecure
```
With auth, the workflow scenario should succeed:
```shell
python scripts/mcp_http_smoke.py --base-url https://localhost:8443 --bearer-token YOUR_TOKEN_HERE --scenario workflow --workflow-resource-read --insecure
```
Replace YOUR_TOKEN_HERE with the value of CTXLEDGER_SMALL_AUTH_TOKEN.
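For clients that cannot run the smoke script, the same probe can be assembled by hand. The sketch below builds the URL, headers, and a minimal JSON-RPC `initialize` body; the `Accept` header and `initialize` method follow the MCP streamable HTTP transport, while the `protocolVersion` value and client name are illustrative assumptions.

```python
import json

def build_mcp_probe(base_url: str, token: str):
    """Build the URL, headers, and JSON-RPC body for a minimal MCP initialize probe."""
    url = f"{base_url.rstrip('/')}/mcp"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        # MCP streamable HTTP responses may arrive as JSON or as an event stream.
        "Accept": "application/json, text/event-stream",
    }
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumption: pick the version your client supports
            "capabilities": {},
            "clientInfo": {"name": "smoke-probe", "version": "0.0.1"},
        },
    }).encode()
    return url, headers, body
```

POSTing this body with these headers (for example via `urllib.request` or `curl`) should exercise the same auth path as the smoke script.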
Example Zed configuration:
```json
{
  "ctxledger": {
    "url": "https://localhost:8443/mcp",
    "headers": {
      "Authorization": "Bearer YOUR_TOKEN_HERE"
    }
  }
}
```
Example VS Code configuration:
```jsonc
"servers": {
  "ctxledger": {
    "url": "https://localhost:8443/mcp",
    "type": "http",
    "headers": {
      "Authorization": "Bearer YOUR_TOKEN_HERE"
    }
  }
},
```
This troubleshooting applies only to the local localhost:8443 Traefik/TLS setup used by the small pattern. It does not apply to the Azure Container Apps endpoint used by the Azure large deployment path.
If your AI agent or other client reports a certificate trust error, first verify which certificate Traefik is serving.
```shell
openssl s_client -connect localhost:8443 -servername localhost < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
```
Expected output:
```
subject=CN=localhost
issuer=CN=localhost
```
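If you want to check this programmatically rather than by eye, the two-line output parses trivially. This is a hypothetical helper for scripting the check:

```python
def parse_cert_identity(output: str) -> dict:
    """Parse `openssl x509 -noout -subject -issuer` output into a dict,
    e.g. {"subject": "CN=localhost", "issuer": "CN=localhost"}."""
    fields = {}
    for line in output.splitlines():
        if "=" in line:
            # Split only on the first '=': the rest is the distinguished name.
            key, _, rest = line.partition("=")
            fields[key.strip()] = rest.strip()
    return fields
```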
If you see TRAEFIK DEFAULT CERT, the local certificate is not being selected correctly.
The generated certificate file is:
`docker/traefik/certs/dev.crt`
On macOS, open this certificate in Keychain Access and mark it as trusted.
Typical flow:
- open `docker/traefik/certs/dev.crt`
- add it to Keychain Access
- open the certificate details
- under Trust, set the certificate to "Always Trust"
After trusting the certificate, reconnect your AI agent to:
`https://localhost:8443/mcp`
If the endpoint is reachable but your client uses a method that the MCP endpoint does not accept for that probe, you might see an HTTP 405 Method Not Allowed. That indicates method handling differences, not a TLS trust failure.
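The distinction between trust failures and method handling can be captured in a small triage helper. This is an illustrative sketch, not part of the repository's tooling; the messages are assumptions about likely causes.

```python
def classify_probe(status=None, tls_error: bool = False) -> str:
    """Map a smoke-probe result onto a likely cause (heuristic sketch)."""
    if tls_error:
        return "TLS trust problem: verify which certificate Traefik is serving"
    if status is None:
        return "endpoint unreachable: check that the Compose stack is up"
    if status == 401:
        return "reachable: authentication required or token rejected"
    if status == 405:
        return "reachable: probe method not accepted by the MCP endpoint"
    return f"reachable: HTTP {status}"
```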
Use this path when you want to deploy ctxledger to Azure Container Apps with Azure Database for PostgreSQL Flexible Server and Azure OpenAI.
```shell
git clone https://github.com/rioriost/ctxledger.git
cd ctxledger
```
Make sure the Azure CLI and Azure Developer CLI are installed, then sign in and select the subscription you want to use.
```shell
az login
azd auth login
az account set --subscription YOUR_SUBSCRIPTION_ID_OR_NAME
```
The intended happy path is a single command:
```shell
azd up
```
This flow provisions the Azure infrastructure, builds and deploys the container image, bootstraps PostgreSQL / azure_ai, applies the schema, and runs a bounded postdeploy smoke test.
After a successful deployment, azd writes deployment environment values and MCP client snippets to the local workspace.
Important generated paths:
- environment values: `.azure/ctxledger/.env`
- MCP snippet README: `.azure/mcp-snippets/README.md`
- MCP snippet summary: `.azure/mcp-snippets/summary.json`
Use the generated MCP endpoint shown by azd up, or open the snippet README and copy the client configuration that matches your tool.
The deployed endpoint has the form:
`https://<your-container-app-fqdn>/mcp`
If you are using the current Azure large default flow, a basic HTTP smoke probe might return HTTP 405 Method Not Allowed. That still indicates that the endpoint is reachable; it reflects method handling rather than endpoint unavailability.
An MCP client or agent can:
- register a workspace
- start a workflow
- checkpoint progress with bounded auto-memory capture
- resume work from durable state
- complete a workflow with verification status
- record file-touching work in durable ctxledger state with the `file_work_record` MCP tool
- search later for file-linked work context during resume, continue, and debugging
- record explicit high-signal episodes
- search memory with bounded canonical retrieval
- read grouped context optimized for hierarchy-aware clients
- inspect workflow, memory, and failure state
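As an illustration of what an agent-side call to one of these tools looks like on the wire, the sketch below builds a JSON-RPC `tools/call` request for `file_work_record`. The argument names (`workflow_id`, `path`, `note`) are illustrative assumptions; consult the MCP API documentation for the real schema.

```python
import json

def file_work_record_call(workflow_id: str, path: str, note: str) -> str:
    """Build a JSON-RPC tools/call request body for the file_work_record MCP tool.

    The argument names here are hypothetical; the actual tool schema is
    defined by the server (see docs/project/product/mcp-api.md).
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "file_work_record",
            "arguments": {
                "workflow_id": workflow_id,
                "path": path,
                "note": note,
            },
        },
    })
```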
Useful CLI commands:
```shell
ctxledger stats
ctxledger workflows
ctxledger memory-stats
ctxledger failures
```
```shell
docker compose -f docker/docker-compose.yml -f docker/docker-compose.small-auth.yml ps
```
Grafana is available at `http://localhost:3000`.
Log in with:
- username: `CTXLEDGER_GRAFANA_ADMIN_USER`
- password: `CTXLEDGER_GRAFANA_ADMIN_PASSWORD`
```shell
python -m ctxledger.__init__ build-episode-summary \
  --episode-id <episode-uuid> \
  --summary-kind episode_summary \
  --format json
```
Readiness:
```shell
ctxledger age-graph-readiness
```
Refresh derived summary graph:
```shell
ctxledger refresh-age-summary-graph
```
Bootstrap the constrained graph explicitly:
```shell
ctxledger bootstrap-age-graph
```
The supported local deployment mode in this repository is `small`:
- HTTPS
- proxy-layer authentication
- Grafana enabled
- Apache AGE enabled
- repository-owned PostgreSQL image path
If you need the current system shape, start with:
- product overview: `docs/project/product/specification.md`, `docs/project/product/architecture.md`, `docs/project/product/mcp-api.md`, `docs/project/product/memory-model.md`
- operations: `docs/operations/README.md`
- memory docs: `docs/memory/README.md`
- release state: `docs/project/releases/CHANGELOG.md`, `docs/project/releases/0.9.0_acceptance_review.md`, `docs/project/releases/0.9.0_closeout.md`
Useful repository scripts:
- `scripts/apply_schema.py`
- `scripts/ensure_age_extension.py`
- `scripts/mcp_http_smoke.py`
- `scripts/setup_grafana_observability.py`
Core local startup files:
- `docker/docker-compose.yml`
- `docker/docker-compose.small-auth.yml`
Current development posture:
- PostgreSQL state is canonical
- workflow, checkpoint, and projection state remain canonical-first
- summaries, rankings, and graph-backed structures are derived support layers
- file-work metadata is stored without broad file-content indexing
- the default runtime exposes a bounded `file_work_record` MCP tool so agents can record file-touching work in the active work loop
- normal file-touching runtime flows should also naturally leave a durable file-work trail when bounded workflow context is available, with explicit `file_work_record` still available for deliberate higher-signal notes or gap-filling
- the README is intentionally brief; use the docs above for details
Licensed under the Apache License, Version 2.0.
See LICENSE.
