TriageAI is a complaint management system for financial complaints. It includes:
- admin and team dashboards
- conversational-style intake for lodging complaints (chat and optional voice)
- document upload, OCR, and document-aware complaint processing
- an agentic AI workflow for processing complaints
- live workflow traces
- benchmark and production evaluation dashboards
The app is server-rendered with Jinja templates and stores its operational state in PostgreSQL.
Core flow:
- A user lodges a complaint through the intake chat.
- Supporting documents can be uploaded during intake.
- Documents are stored and processed locally.
- The complaint is registered immediately.
- The backend workflow runs:
  - document gate
  - document consistency check
  - classification
  - risk
  - root cause
  - resolution
  - compliance / routing
- Admins can review:
  - live traces
  - complaint analytics
  - production evaluation reports
  - benchmark evaluation datasets and runs
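The backend stages run in the order listed above. As a minimal plain-Python sketch of that ordering (the real implementation is a LangGraph graph with LLM-backed nodes; the stage identifiers and state shape here are illustrative, not the repo's actual names):

```python
# Illustrative sketch only: the real workflow is a LangGraph graph whose
# nodes call LLMs and retrieval; stage names mirror the list above.
STAGES = [
    "document_gate",
    "document_consistency",
    "classification",
    "risk",
    "root_cause",
    "resolution",
    "compliance_routing",
]

def run_workflow(complaint: dict) -> dict:
    """Run each stage in order, recording one trace entry per step."""
    state = {"complaint": complaint, "trace": []}
    for stage in STAGES:
        # A real stage would update state with LLM / retrieval output here.
        state["trace"].append(stage)
    return state

result = run_workflow({"text": "Unexpected fee charged on my savings account"})
```

Each step's trace entry is what the live trace page renders, so persisting the per-stage record is the key design point.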
Key features:
- LangGraph-based complaint workflow
- OpenAI or DeepSeek chat model support
- PostgreSQL + pgvector retrieval
- OCR pipeline for:
  - digital PDFs
  - scanned PDFs
  - PNG / JPG / JPEG
- session history and past complaints for end users
- production complaint evaluation with:
  - system evaluation
  - LLM judge report
- benchmark evaluation against DB-backed evaluation datasets
- live trace page backed by persisted workflow runs and steps
- website-friendly case IDs like `CASE00001`
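IDs of that shape can be derived from a numeric sequence by zero-padding; a small sketch (the function name is hypothetical, the actual scheme lives in the backend):

```python
def format_case_id(n: int) -> str:
    # Zero-pad the sequence number to five digits: 1 -> "CASE00001".
    return f"CASE{n:05d}"
```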
Tech stack:
- Python 3.11+
- FastAPI
- Jinja2 templates
- SQLAlchemy
- PostgreSQL
- pgvector
- LangGraph / LangChain
- OpenTelemetry-based local workflow tracing
- optional LangSmith tracing for LangChain / LangGraph runs
Required:
- Python 3.11 or newer
- PostgreSQL with pgvector
- one LLM provider configured:
- OpenAI, or
- DeepSeek
Recommended local tools:
- `uv` for dependency management
- Docker for local Postgres
For OCR:
- `tesseract`
- `poppler` / `poppler-utils`
Copy the example file:
```
cp .env.example .env
```

Minimum variables to set:

```
DATABASE_URL=postgresql+psycopg2://postgres:postgres@localhost:5432/complaints
LLM_PROVIDER=openai
OPENAI_API_KEY=...
```

If using DeepSeek instead:

```
LLM_PROVIDER=deepseek
DEEPSEEK_API_KEY=...
```

Common optional variables:

- `OPENAI_CHAT_MODEL`
- `DEEPSEEK_CHAT_MODEL`
- `EMBEDDING_PROVIDER` (`huggingface` or `openai`)
- `HF_EMBEDDING_MODEL`
- `HF_DEVICE`
- `LOG_LEVEL`
- `SQL_ECHO`
- `TRACE_INTAKE_TO_LANGSMITH`
- `LANGCHAIN_TRACING_V2`
- `LANGCHAIN_API_KEY`
- `LANGCHAIN_PROJECT`
- `ELEVENLABS_API_KEY`
- `ELEVENLABS_VOICE_ID`
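A sketch of how the provider variables above are typically resolved at startup (the defaults and the error message here are assumptions, not the app's exact behaviour; variable names match `.env.example`):

```python
import os

def load_llm_config(env=None) -> dict:
    """Resolve the active LLM provider and its API key from the environment."""
    env = os.environ if env is None else env
    provider = env.get("LLM_PROVIDER", "openai").lower()
    key_var = {"openai": "OPENAI_API_KEY", "deepseek": "DEEPSEEK_API_KEY"}[provider]
    api_key = env.get(key_var)
    if not api_key:
        raise RuntimeError(f"{key_var} must be set when LLM_PROVIDER={provider}")
    return {
        "provider": provider,
        "api_key": api_key,
        # e.g. OPENAI_CHAT_MODEL / DEEPSEEK_CHAT_MODEL; None means "use default"
        "model": env.get(f"{provider.upper()}_CHAT_MODEL"),
    }

cfg = load_llm_config({"LLM_PROVIDER": "deepseek", "DEEPSEEK_API_KEY": "sk-test"})
```

Failing fast at startup when the key for the selected provider is missing gives a clearer error than a failed LLM call mid-workflow.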
Install and run with `uv`:

```
uv sync
uv run python -m uvicorn main:app --reload
```

Or with a standard virtual environment:

```
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Install OCR dependencies (macOS):

```
brew install tesseract poppler
```

On Debian/Ubuntu:

```
sudo apt-get update
sudo apt-get install -y tesseract-ocr poppler-utils
```

Start only the database with Docker:

```
docker compose up db -d
```

For server deployment, run both app and db:

```
docker compose up --build -d
docker compose logs -f app
```

Start the server:

```
python -m uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

Historical workflow cost aggregates are backfilled into the cost ledger automatically at app startup. To run the backfill manually:

```
python3 scripts/backfill_cost_ledger.py
```

Open the Lodge complaint page from the app navigation (or the route your UI exposes for lodging a complaint). There is no separate voice server process: voice uses the same FastAPI app.
The Lodge page supports chat and an optional Voice mode (toggle on the page). Voice uses the browser’s Web Speech API for speech-to-text and, if configured, ElevenLabs for text-to-speech replies.
- Start a session on the Lodge page (same as chat).
- Enable Voice in the UI, then use the microphone control to start voice turns (pause after speaking to send; the agent can reply aloud when TTS is configured).
Browser / security context
- The microphone is only available in a secure context. That usually means:
  - `http://localhost:<port>` or `http://127.0.0.1:<port>` is fine for development.
  - Opening the app via another hostname, another machine on the LAN, or plain `http://` on a non-loopback IP may block the mic until you use HTTPS.
- If speech recognition fails with a message about HTTPS or localhost, use the local HTTPS setup below or access the app only via `localhost` / `127.0.0.1`.
Local HTTPS (recommended when the mic is blocked on HTTP)
From the repo root, generate trusted certs once (see comments in the script), then:
```
./scripts/dev_https.sh
```

By default this serves https://127.0.0.1:8001 (override `PORT` / `HOST` if needed). Prerequisites: `mkcert` and certificate files under `.certs/`; see the header comments in `scripts/dev_https.sh`.
Spoken agent replies (optional)
Set in .env (see .env.example):
- `ELEVENLABS_API_KEY`
- `ELEVENLABS_VOICE_ID` (or related env names listed in `.env.example`)
Without these, Voice mode can still use the browser for recognition; spoken replies are disabled until ElevenLabs is configured.
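For orientation, a TTS call against the public ElevenLabs text-to-speech endpoint looks roughly like the sketch below. It only builds the request and does not send it; the payload fields beyond `text` are omitted here, and how this repo actually wraps the call is not shown in this README, so check the ElevenLabs API docs for the full schema:

```python
def build_tts_request(text: str, api_key: str, voice_id: str) -> dict:
    """Assemble (but do not send) an ElevenLabs text-to-speech request."""
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "json": {"text": text},
    }

req = build_tts_request("Your complaint has been registered.", "demo-key", "demo-voice")
```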
ElevenLabs Conversational AI (optional, external agent)
To connect an ElevenLabs agent with Custom LLM to this backend, point its base URL at your public HTTPS API, for example:
https://<your-host>/api/v1/integrations/elevenlabs
Details and optional Bearer protection are documented in .env.example (ELEVENLABS_CUSTOM_LLM_SECRET, ELEVENLABS_INTAKE_REQUIRE_USER, etc.).
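When `ELEVENLABS_CUSTOM_LLM_SECRET` is set, the integration endpoint can require a matching Bearer token. A minimal sketch of such a check (the helper name is hypothetical; the repo's actual validation may differ):

```python
import hmac
from typing import Optional

def bearer_token_ok(authorization: Optional[str], secret: str) -> bool:
    """Check an 'Authorization: Bearer <token>' header against a shared secret."""
    prefix = "Bearer "
    if not authorization or not authorization.startswith(prefix):
        return False
    # Constant-time comparison avoids leaking the secret via timing.
    return hmac.compare_digest(authorization[len(prefix):], secret)
```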
- email: `admin@triage.ai`, password: `admin123`
- email: `user@triage.ai`, password: `user123`
Multiple team accounts are seeded automatically; the full list is on the [Team Credentials](https://github.com/ayman-tech/Multi-Agent-Complaint-System/wiki/Team-Credentials) wiki page. Passwords follow the pattern `<local-part>123` (the part of the email before the `@`, plus `123`).
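Applied to the seeded accounts, the pattern works out like this (illustrative helper, not code from the repo):

```python
def seeded_password(email: str) -> str:
    # Seeded accounts use the email local-part followed by "123".
    local_part = email.split("@", 1)[0]
    return f"{local_part}123"
```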
No license file is currently included in this repository. Treat usage and redistribution as private unless you add an explicit license.