Try it: Ask “How often should I train each muscle group?” and switch between Beginner and Intermediate modes
InForm is an evidence-based AI assistant for training and nutrition.
It combines a modern conversational UI with a retrieval-augmented backend grounded in peer-reviewed research. The project is built as a production-style, full-stack system to demonstrate engineering fundamentals, clean API design, and product decisions.
- 115-study on-disk corpus; hybrid TF-IDF + dense retrieval (0.4/0.6 weighting) with citation validation (see the sketch after this list)
- 99-answer eval: 620 chars avg, 1.63 citations/answer; 56.6% high-confidence
- Production: gpt-3.5-turbo, ~3-8s typical latency
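A minimal sketch of the hybrid scoring idea, assuming the 0.4/0.6 split applies to the TF-IDF and dense channels in that order and using the MiniLM model listed under the tech stack. Names and structure are illustrative, not the project's actual code:

```python
# Illustrative sketch of 0.4/0.6 TF-IDF + dense hybrid scoring.
# Function and variable names are assumptions, not InForm's actual API.
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

TFIDF_WEIGHT, DENSE_WEIGHT = 0.4, 0.6

def hybrid_scores(query: str, passages: list[str]) -> list[float]:
    # Sparse channel: cosine similarity over TF-IDF vectors.
    vectorizer = TfidfVectorizer()
    passage_matrix = vectorizer.fit_transform(passages)
    query_vec = vectorizer.transform([query])
    sparse = cosine_similarity(query_vec, passage_matrix)[0]

    # Dense channel: cosine similarity over normalized MiniLM embeddings.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    passage_emb = model.encode(passages, normalize_embeddings=True)
    query_emb = model.encode([query], normalize_embeddings=True)
    dense = (query_emb @ passage_emb.T)[0]

    # Weighted combination used to rank passages before generation.
    return [TFIDF_WEIGHT * s + DENSE_WEIGHT * d for s, d in zip(sparse, dense)]
```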
Most fitness and nutrition apps rely on generic advice or opaque AI outputs.
InForm addresses this by:
- Grounding answers in retrieved research passages
- Adapting explanation depth via Beginner / Intermediate modes
- Exposing citations and confidence to the user
- Evidence-based Q&A for training, nutrition, recovery, and supplements
- Retrieval-Augmented Generation (RAG) over curated studies
- Mode-aware responses (beginner vs. intermediate)
- Confidence scoring and explicit citations
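As an illustration of how citations, confidence, and mode could surface to the client, here is a hypothetical Pydantic response shape; the model and field names are assumptions, not the actual API contract:

```python
# Sketch of a FastAPI/Pydantic response shape for a mode-aware, cited answer.
# Model and field names are illustrative assumptions, not the real schema.
from enum import Enum
from pydantic import BaseModel

class Mode(str, Enum):
    beginner = "beginner"
    intermediate = "intermediate"

class Citation(BaseModel):
    index: int        # citation number as it appears inline, e.g. [1]
    study_id: str     # identifier of the study in the curated corpus
    passage: str      # retrieved passage the claim is grounded in

class AnswerResponse(BaseModel):
    answer: str               # generated answer with inline [n] citations
    mode: Mode                # beginner or intermediate explanation depth
    confidence: str           # "high" | "medium" | "low"
    citations: list[Citation]
```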
- Architecture: docs/architecture.md
- Retrieval: docs/retrieval.md
- Retrieval scripts: docs/retrieval_scripts.md
- Evaluation: docs/evaluation.md
- Corpus updates: docs/corpus_update.md
- Deploy commands: docs/deploy.md
- Corpus: 115 curated studies (CSV + JSON passages)
- Batch eval: 99 answers; avg 620 chars, 1.63 citations/answer (see the aggregation sketch after this list)
- Confidence: 56.6% high / 23.2% medium / 20.2% low
- Production model: gpt-3.5-turbo (~3-8s typical)
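The numbers above are the kind of summary a small aggregation script over the batch-eval output can produce. The file name and record fields below are assumptions about that output, not the project's actual eval format:

```python
# Sketch: aggregating batch-eval results into the summary stats above
# (average answer length, citations per answer, confidence distribution).
# The file name and record fields are assumptions about the eval output.
import json
from collections import Counter

def summarize(path: str = "eval_results.json") -> dict:
    with open(path) as f:
        answers = json.load(f)  # assumed: list of {"answer", "citations", "confidence"}

    n = len(answers)
    avg_chars = sum(len(a["answer"]) for a in answers) / n
    avg_citations = sum(len(a["citations"]) for a in answers) / n
    confidence = Counter(a["confidence"] for a in answers)

    return {
        "answers": n,
        "avg_chars": round(avg_chars),
        "avg_citations_per_answer": round(avg_citations, 2),
        "confidence_pct": {k: round(100 * v / n, 1) for k, v in confidence.items()},
    }
```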
- Frontend: Vite + React + TypeScript, Tailwind, shadcn/ui
- Backend: FastAPI (Python)
- Retrieval: Hybrid TF-IDF + dense embeddings (SentenceTransformer all-MiniLM-L6-v2)
- Grounding: Inline citations + post-generation citation validation/renumbering
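A minimal sketch of what post-generation citation validation and renumbering can look like: invalid [n] references are dropped and the remaining ones are renumbered in order of first appearance. This is an illustrative regex-based approach, not necessarily the project's implementation:

```python
# Sketch: validate inline [n] citations against the retrieved passages and
# renumber them consecutively in order of first appearance. Illustrative only.
import re

def validate_and_renumber(answer: str, retrieved_ids: list[str]) -> tuple[str, list[str]]:
    """Drop citations pointing outside the retrieved set, renumber the rest."""
    mapping: dict[int, int] = {}   # old citation number -> new consecutive number
    kept: list[str] = []           # retrieved ids actually cited, in citation order

    def replace(match: re.Match) -> str:
        old = int(match.group(1))
        if not (1 <= old <= len(retrieved_ids)):
            return ""              # invalid citation: remove it
        if old not in mapping:
            mapping[old] = len(mapping) + 1
            kept.append(retrieved_ids[old - 1])
        return f"[{mapping[old]}]"

    return re.sub(r"\[(\d+)\]", replace, answer), kept
```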
MIT License