NileshArnaiya/Code-Context-Graph-Memory

🧠 AI Code Context Graph

A plug-and-play persistent knowledge graph that gives AI coding tools memory, trust scores, and team awareness.

Track AI-generated code across tools (Cursor, Copilot, Antigravity, Claude), detect anti-patterns, enforce quality gates in CI/CD, and monitor everything from a real-time dashboard.


✨ Key Features

| Feature | Description |
|---|---|
| Knowledge Graph | SQLite-backed graph of your codebase: files, functions, classes, imports, dependencies |
| AI Attribution | Automatically detects which AI tool generated each piece of code (via git history) |
| Trust Scores | 0–100 trust score per file with letter grades (A–F) |
| Anti-Pattern Detection | 7 built-in checks: hardcoded secrets, SQL injection, eval(), missing error handling, etc. |
| Team Patterns | Learns your team's conventions (type hints, testing frameworks, logging practices) |
| REST API | Local FastAPI server on localhost:7878; any tool can query context |
| GitHub Actions | Drop-in CI workflow: scan PRs, post quality reports, block low-trust code |
| Web Dashboard | Next.js dashboard with real-time trust scores, anti-pattern charts, AI tool comparison |
| Git Hooks | Auto-installed post-commit and pre-push hooks for continuous tracking |
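
The knowledge-graph idea can be illustrated with plain `sqlite3`: code entities become nodes, and relationships like "imports" become edges. This is a toy schema for illustration only; the real schema lives in `ai_code_context/graph/engine.py` and will differ.

```python
import sqlite3

# Toy node/edge schema illustrating the SQLite-backed code graph idea.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, kind TEXT, name TEXT);
    CREATE TABLE edges (src INTEGER, dst INTEGER, relation TEXT);
""")
conn.executemany("INSERT INTO nodes (id, kind, name) VALUES (?, ?, ?)", [
    (1, "file", "src/auth/login.py"),
    (2, "file", "src/auth/utils.py"),
    (3, "function", "login"),
])
conn.executemany("INSERT INTO edges VALUES (?, ?, ?)", [
    (1, 2, "imports"),   # login.py imports utils.py
    (1, 3, "defines"),   # login.py defines login()
])

def related_files(conn, name):
    """Files reachable from the named file via one 'imports' edge."""
    rows = conn.execute(
        """SELECT n2.name FROM nodes n1
           JOIN edges e ON e.src = n1.id AND e.relation = 'imports'
           JOIN nodes n2 ON n2.id = e.dst
           WHERE n1.name = ?""",
        (name,),
    )
    return [r[0] for r in rows]
```

With the sample rows above, `related_files(conn, "src/auth/login.py")` returns `["src/auth/utils.py"]`, which is the kind of answer the `/context` endpoint serves.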

🚀 Quick Start

```bash
# Install
pip install ai-code-context  # or: pip install -e .

# Initialize in your repo
cd your-project/
ai-context init

# Start the API daemon
ai-context start

# Run analysis
ai-context analyze

# Scan for anti-patterns
ai-context scan
```

After ai-context init, the tool will:

  1. Create .ai-context/ directory with graph.db and config.yml
  2. Install git hooks (post-commit, pre-push)
  3. Scan your codebase and build the knowledge graph
  4. Detect AI tools and scan git history for AI-generated commits
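
The AI-attribution step works by matching commit metadata against per-tool marker strings like those configured in `.ai-context/config.yml`. A minimal sketch of that matching logic, assuming the marker format shown in the Configuration section (the function name and dict are illustrative, not the package's actual API):

```python
# Illustrative commit-message attribution: return the first AI tool whose
# marker string appears in the commit message or its trailers.
# Marker lists mirror the ai_tool_markers section of config.yml.
AI_TOOL_MARKERS = {
    "cursor": ["cursor", "Cursor"],
    "copilot": ["Co-authored-by: GitHub Copilot"],
}

def detect_ai_tool(commit_message: str, markers=AI_TOOL_MARKERS):
    """Return the tool name whose marker appears in the message, else None."""
    for tool, needles in markers.items():
        if any(needle in commit_message for needle in needles):
            return tool
    return None
```

For example, a commit message ending in a `Co-authored-by: GitHub Copilot` trailer would be attributed to `copilot`, while a plain human-written message yields `None`.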

📋 CLI Commands

| Command | Description |
|---|---|
| `ai-context init` | Initialize AI Context in your repository |
| `ai-context start` | Start the local API daemon (port 7878) |
| `ai-context stop` | Stop the daemon |
| `ai-context analyze` | Analyze the codebase and build/update the knowledge graph |
| `ai-context scan` | Scan for anti-patterns and show the trust score report |
| `ai-context score [file]` | Show the trust score for a specific file or all files |
| `ai-context log-decision` | Log an architectural decision to the graph |
| `ai-context status` | Show current status (graph stats, daemon, detected tools) |

Example: Scan output

```
🤖 AI Code Quality Report

  Repository Trust Score: 74/100 ⚠️
  Grade: C | Files: 42 | High Risk: 3

  ┌──────────┬────────────────────────┬──────────────────┬──────┬─────────────────────────────┐
  │ Severity │ Type                   │ File             │ Line │ Message                     │
  ├──────────┼────────────────────────┼──────────────────┼──────┼─────────────────────────────┤
  │ HIGH     │ HARDCODED_SECRET       │ src/config.py    │ 12   │ Hardcoded secret detected   │
  │ HIGH     │ SECURITY_VULNERABILITY │ src/utils.py     │ 45   │ eval() usage detected       │
  │ MEDIUM   │ NO_ERROR_HANDLING      │ src/api/handler  │ 23   │ Async function without try  │
  └──────────┴────────────────────────┴──────────────────┴──────┴─────────────────────────────┘
```

🌐 REST API

Start the daemon and query context from any tool:

```bash
ai-context start
```

| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check with graph stats |
| `/context?file=path` | GET | Full context bundle for a file |
| `/scores?file=path` | GET | Trust scores |
| `/scan` | GET | Full anti-pattern scan |
| `/commit` | POST | Track a commit (called by git hook) |
| `/track` | POST | Track an AI-generated change |
| `/decision` | POST | Log an architectural decision |
| `/dashboard` | GET | Full dashboard data |
| `/activity` | GET | Recent activity feed |
| `/tools` | GET | Detected AI tools |
| `/analyze` | POST | Trigger full codebase analysis |

Example API call

```bash
curl "http://127.0.0.1:7878/context?file=src/auth/login.py"
```

```json
{
  "file": "src/auth/login.py",
  "trust_score": 72,
  "ai_sessions": [
    {"tool": "cursor", "timestamp": "2024-01-15T10:30:00Z"}
  ],
  "related_files": ["src/auth/utils.py", "src/models/user.py"],
  "patterns": ["Type annotations", "Structured logging"]
}
```
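
From Python, the same endpoint can be queried with just the standard library. A small hedged sketch (assumes the daemon from `ai-context start` is listening on the default host and port; `context_url` and `get_context` are illustrative helpers, not part of the package):

```python
import json
import urllib.parse
import urllib.request

def context_url(path, host="127.0.0.1", port=7878):
    """Build the /context URL for a file, escaping the path for the query string."""
    query = urllib.parse.urlencode({"file": path})
    return f"http://{host}:{port}/context?{query}"

def get_context(path, **kwargs):
    """Fetch the context bundle for one file from the local daemon."""
    with urllib.request.urlopen(context_url(path, **kwargs), timeout=5) as resp:
        return json.load(resp)
```

`get_context("src/auth/login.py")` would then return the JSON bundle shown above as a Python dict.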

🎯 Trust Scoring

Score = 100 – Deductions + Boosts

Deductions (anti-patterns found)

| Check | Severity | Points |
|---|---|---|
| Hardcoded secrets | HIGH | -30 |
| SQL injection | HIGH | -30 |
| Security vulnerabilities (eval, pickle) | HIGH | -30 |
| Missing error handling | MEDIUM | -15 |
| TODO/FIXME placeholders | MEDIUM | -15 |
| Missing edge case handling | MEDIUM | -15 |
| Generic variable names | LOW | -5 |
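
Checks like the first three can be approximated with line-oriented regexes. The sketch below is a simplified stand-in for two of the HIGH-severity checks, not the project's actual detection rules (which live in `ai_code_context/scoring/anti_patterns.py`):

```python
import re

# Simplified stand-ins for two HIGH-severity checks; the real rules differ.
CHECKS = [
    ("HARDCODED_SECRET", "HIGH",
     re.compile(r"""(?i)(api_key|secret|password)\s*=\s*["'][^"']+["']""")),
    ("SECURITY_VULNERABILITY", "HIGH",
     re.compile(r"\beval\s*\(")),
]

def scan_source(source: str):
    """Return a (type, severity, line_number) tuple for each match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, severity, pattern in CHECKS:
            if pattern.search(line):
                findings.append((name, severity, lineno))
    return findings
```

Scanning `API_KEY = "sk-123"` followed by a line calling `eval(...)` would flag both as HIGH findings, each deducting 30 points under the table above.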

Boosts (good practices found)

| Practice | Points |
|---|---|
| Has tests | +10 |
| Type annotations | +5 |
| Error handling (try/except) | +10 |
| Follows team patterns | +15 |

Grades

| Score | Grade |
|---|---|
| 90–100 | A |
| 80–89 | B |
| 70–79 | C |
| 60–69 | D |
| 0–59 | F |
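
Putting the tables together, the scoring arithmetic can be sketched as follows. This is a simplified reading of the published point values, clamped to 0–100; the function and key names are illustrative, not the actual API of `trust_calculator.py`:

```python
# Point values taken from the deduction and boost tables above.
DEDUCTIONS = {"HIGH": 30, "MEDIUM": 15, "LOW": 5}
BOOSTS = {"has_tests": 10, "type_annotations": 5,
          "error_handling": 10, "team_patterns": 15}

def trust_score(finding_severities, practices):
    """Score = 100 - deductions + boosts, clamped to the 0-100 range."""
    score = 100
    score -= sum(DEDUCTIONS[s] for s in finding_severities)
    score += sum(BOOSTS[p] for p in practices)
    return max(0, min(100, score))

def grade(score):
    """Map a 0-100 score to the letter grades above."""
    for threshold, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= threshold:
            return letter
    return "F"
```

For example, one HIGH finding plus a test suite gives 100 - 30 + 10 = 80, a B; the repository score of 74 in the scan output above maps to a C.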

🔄 GitHub Actions

Option 1: Reusable workflow

```yaml
# .github/workflows/ai-quality.yml
name: AI Quality Check
on:
  pull_request:
    branches: [main]

jobs:
  quality:
    uses: ./.github/workflows/ai-quality-check.yml
    with:
      min_trust_score: 60
      block_on_failure: true
```

Option 2: Composite action

```yaml
- uses: ai-code-context/action@v1
  with:
    min-trust-score: 60
    block-on-failure: true
```

PR comments will include:

  • Overall trust score with letter grade
  • Critical issues with fix suggestions
  • Low-trust file list
  • AI tool attribution

📊 Web Dashboard

```bash
cd dashboard/
npm install
npm run dev
```

Visit http://localhost:3001 for the live dashboard.

The dashboard includes:

  • Trust Score Overview β€” repo-wide score with trend
  • High-Risk Code β€” files needing immediate attention
  • Anti-Patterns β€” horizontal bar chart of detected issues
  • AI Tool Comparison β€” side-by-side scores per tool
  • Activity Feed β€” recent commits and AI sessions
  • Team Patterns β€” learned coding conventions

The dashboard auto-connects to the API daemon and falls back to demo data if the daemon isn't running.


πŸ“ Project Structure

```
ai-code-context-graph/
├── ai_code_context/           # Core Python package
│   ├── graph/
│   │   ├── engine.py          # SQLite knowledge graph
│   │   └── queries.py         # High-level graph queries
│   ├── analyzers/
│   │   ├── parser.py          # Code analysis engine
│   │   └── patterns.py        # Team pattern detection
│   ├── tracking/
│   │   ├── git_tracker.py     # Git commit tracking + AI attribution
│   │   └── ai_detector.py     # AI tool detection
│   ├── scoring/
│   │   ├── trust_calculator.py # Trust score algorithm
│   │   └── anti_patterns.py   # Anti-pattern detection engine
│   ├── api/
│   │   └── server.py          # FastAPI REST API
│   ├── cli.py                 # Click CLI
│   ├── config.py              # Configuration management
│   └── hooks.py               # Git hook installer
├── dashboard/                 # Next.js web dashboard
│   ├── app/
│   │   ├── page.js            # Main dashboard page
│   │   ├── layout.js          # Root layout
│   │   └── globals.css        # Dark theme CSS
│   └── components/            # React components
├── github-actions/            # GitHub Actions integration
│   ├── ai-quality-check.yml   # Reusable workflow
│   └── action.yml             # Composite action
├── tests/
│   └── test_core.py           # Test suite
└── pyproject.toml             # Python package config
```

βš™οΈ Configuration

After ai-context init, edit .ai-context/config.yml:

```yaml
# Scanning
exclude_patterns:
  - "*.min.js"
  - "vendor/*"

# Trust thresholds
min_trust_score: 60
block_below_score: true

# AI tool markers (customize per team)
ai_tool_markers:
  cursor: ["cursor", "Cursor"]
  copilot: ["Co-authored-by: GitHub Copilot"]

# API
api_port: 7878
api_host: "127.0.0.1"
```

🧪 Testing

```bash
pip install -e ".[dev]"
pytest tests/ -v
```

📄 License

MIT
