| Version | Supported |
|---|---|
| 0.18.x | ✅ |
| 0.13.x | ✅ |
| < 0.13 | ❌ |
We take security seriously. If you discover a security vulnerability in Codi, please report it responsibly.
Please do NOT open a public GitHub issue for security vulnerabilities.
Instead, please email security concerns to: codi@layne.pro
Include the following information in your report:
- Description: A clear description of the vulnerability
- Steps to Reproduce: Detailed steps to reproduce the issue
- Impact: What an attacker could achieve by exploiting this vulnerability
- Affected Versions: Which versions of Codi are affected
- Suggested Fix: If you have one (optional)
What to expect after you report:
- Acknowledgment: We will acknowledge receipt of your report within 48 hours
- Updates: We will provide updates on our progress as we investigate
- Resolution: We aim to resolve critical vulnerabilities within 7 days
- Credit: We will credit you in the release notes (unless you prefer anonymity)
Codi is a CLI tool that gives AI models access to your local filesystem and shell. This document describes the security architecture and known risks.
```
┌─────────────────────────────────────────────────────────────────┐
│ USER'S MACHINE │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ CODI PROCESS │ │
│ │ ┌──────────────────┐ ┌─────────────────────────┐ │ │
│ │ │ AI Provider │◄───│ Agent Loop │ │ │
│ │ │ (API calls) │ │ (orchestrates tools) │ │ │
│ │ └──────────────────┘ └───────────┬─────────────┘ │ │
│ │ │ │ │
│ │ ┌──────────────────────────────────▼──────────────┐ │ │
│ │ │ TOOL REGISTRY │ │ │
│ │ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌────────┐ │ │ │
│ │ │ │read_file│ │write_fil│ │ bash │ │ glob │ │ │ │
│ │ │ └────┬────┘ └────┬────┘ └────┬────┘ └───┬────┘ │ │ │
│ │ └───────┼───────────┼───────────┼──────────┼──────┘ │ │
│ └──────────┼───────────┼───────────┼──────────┼───────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ LOCAL FILESYSTEM / SHELL │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
Codi operates with the same permissions as the user running it. There is no sandboxing or privilege separation. The security model relies on:
- User Approval - Dangerous operations require user confirmation
- Pattern Blocking - Known dangerous commands are blocked
- Path Validation - File operations are restricted to the project directory
- Audit Logging - All tool calls can be logged for review
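The interaction between these layers can be sketched as a simple decision gate. This is an illustrative sketch, not Codi's actual implementation; the pattern list, tool names, and `ask_user` callback are hypothetical:

```python
import re

# Illustrative patterns only; a real block list is far more extensive.
DANGEROUS_PATTERNS = [r"\brm\s+-rf\s+/", r"\bsudo\b", r"\bmkfs\b"]
AUTO_APPROVED_TOOLS = {"read_file", "glob", "grep"}

def gate_tool_call(tool: str, command: str, ask_user) -> bool:
    """Return True if the tool call may proceed."""
    # 1. Pattern blocking: known-dangerous commands are refused outright.
    for pattern in DANGEROUS_PATTERNS:
        if re.search(pattern, command):
            return False
    # 2. Read-only tools can be auto-approved.
    if tool in AUTO_APPROVED_TOOLS:
        return True
    # 3. Everything else requires explicit user confirmation.
    return ask_user(f"Allow {tool}: {command!r}?")
```

Note that user approval is the final gate, not the first: blocked patterns never even reach the confirmation prompt.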
| Asset | Protection Mechanism |
|---|---|
| Files outside project | Path traversal validation |
| System files | Dangerous pattern detection |
| Credentials | Pattern detection for secrets |
| Destructive operations | User confirmation required |
Risk: Malicious content in files could manipulate the AI to perform unintended actions.
Mitigations:
- User approval for all file modifications and shell commands
- Dangerous pattern detection before execution
- Audit logging for forensic analysis
Residual Risk: Medium - A sophisticated attack could potentially bypass pattern detection.
Risk: AI could be tricked into reading/writing files outside the project directory.
Mitigations:
- Path resolution and validation before file operations
- Rejection of paths containing `..` that escape the project root
- Symlink following with boundary checks
Residual Risk: Low - Validation is performed at the tool level.
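The core of path validation is resolving the requested path and checking it stays under the project root. A minimal sketch (function name hypothetical, not Codi's API):

```python
from pathlib import Path

def validate_path(project_root: str, requested: str) -> Path:
    """Resolve `requested` and reject anything escaping project_root.

    Path.resolve() follows symlinks, so a symlink that points outside
    the project is also rejected by the same boundary check.
    """
    root = Path(project_root).resolve()
    target = (root / requested).resolve()
    if not target.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"path escapes project root: {requested}")
    return target
```

Resolving both the root and the target before comparing is what defeats `..` sequences and symlink tricks alike; comparing raw strings would not.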
Risk: Malicious input could lead to execution of unintended shell commands.
Mitigations:
- Blocking patterns for dangerous commands (`rm -rf /`, `sudo`, etc.)
- User confirmation for all bash commands
- Commands executed via shell (allows user to see full command)
Residual Risk: Medium - Complex command chains could bypass pattern detection.
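The command-chain weakness mentioned above is why a detector should check each segment of a chained command, not just the start of the string. A sketch of that idea (patterns and names illustrative, not Codi's implementation):

```python
import re

# Illustrative block list; anchored patterns alone would miss chained commands.
BLOCKED = [r"^\s*rm\s+-rf\s+/", r"^\s*sudo\b", r"curl\b.*\|\s*sh\b"]

def check_command(cmd: str) -> list[str]:
    """Check each segment of a chained command (&&, ||, ;) separately,
    so `echo ok && sudo reboot` is not hidden by its harmless prefix."""
    findings = []
    for segment in re.split(r"&&|\|\||;", cmd):
        for pattern in BLOCKED:
            if re.search(pattern, segment):
                findings.append(segment.strip())
    return findings
```

Even with segment splitting, obfuscation (variables, base64, subshells) can slip past regexes, which is exactly why user confirmation remains mandatory for bash.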
Risk: AI could accidentally include API keys or secrets in outputs/commits.
Mitigations:
- Pattern detection for common secret formats
- Warning when `.env` files are involved
- Git pre-commit hooks recommended (external)
Residual Risk: Medium - Novel secret formats may not be detected.
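Pattern detection for secrets typically means a small set of named regexes run against tool output before it is shown or committed. A hedged sketch, with deliberately simplified example patterns (real detectors use much larger rule sets plus entropy checks):

```python
import re

# Hypothetical examples of common secret shapes, not Codi's actual rules.
SECRET_PATTERNS = {
    "AWS access key": r"AKIA[0-9A-Z]{16}",
    "Generic API key": r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]",
    "Private key header": r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----",
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in `text`."""
    return [name for name, pat in SECRET_PATTERNS.items()
            if re.search(pat, text)]
```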
Risk: AI could consume excessive resources (infinite loops, large files).
Mitigations:
- Maximum iterations limit (50 by default)
- Wall-clock timeout (1 hour)
- Output truncation for large results
- Rate limiting on API calls
- Message array bounds (500 max)
Residual Risk: Low - Bounded by hard limits.
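The three hard limits (iteration cap, wall-clock timeout, message bound) compose naturally into one loop. A sketch under the default limits listed above; the `step` callback and function name are hypothetical:

```python
import time

MAX_ITERATIONS = 50        # default iteration cap
WALL_CLOCK_LIMIT = 3600.0  # one hour, in seconds
MAX_MESSAGES = 500         # bound on the message history

def run_agent_loop(step, messages: list) -> str:
    """Run `step` until it reports completion, enforcing hard limits."""
    deadline = time.monotonic() + WALL_CLOCK_LIMIT
    for _ in range(MAX_ITERATIONS):
        if time.monotonic() > deadline:
            return "timeout"
        done = step(messages)
        # Trim the oldest messages to keep the array bounded.
        del messages[:-MAX_MESSAGES]
        if done:
            return "done"
    return "iteration-limit"
```

Because every limit is checked inside the loop rather than by an external watchdog, a misbehaving model cannot exceed any of them by more than one iteration.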
Risk: AI could send sensitive data to external services.
Mitigations:
- Web search is read-only (no POST capability)
- User can review all AI actions
- Network access is through approved tools only
Residual Risk: Low - Requires user approval for most operations.
The following patterns are blocked or require confirmation:
| Category | Examples |
|---|---|
| Destructive | rm -rf, mkfs, dd if= |
| Privilege Escalation | sudo, su -, chmod 777 |
| System Modification | systemctl, service stop |
| Remote Execution | curl \| sh, wget \| bash |
| Git Force Operations | git push --force, git reset --hard |
| Container Escape | docker run --privileged |
Users can configure security settings in `.codi.json`:

```json
{
  "autoApprove": ["read_file", "glob", "grep"],
  "dangerousPatterns": ["custom-pattern-.*"],
  "approvedCategories": ["read-only"]
}
```

Recommendations:
- Keep `autoApprove` minimal
- Add project-specific dangerous patterns
- Enable audit logging for security-sensitive work
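A loader for such a config would typically pre-compile the custom patterns so invalid regexes fail fast at startup rather than at approval time. A minimal sketch (loader name and defaults are assumptions, not Codi's API):

```python
import json
import re

def load_security_config(text: str) -> dict:
    """Parse a .codi.json fragment, compiling custom dangerous patterns."""
    cfg = json.loads(text)
    # re.compile raises re.error immediately on an invalid pattern.
    cfg["dangerousPatterns"] = [re.compile(p)
                                for p in cfg.get("dangerousPatterns", [])]
    cfg.setdefault("autoApprove", [])
    return cfg
```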
Enable with the `--audit` flag or `CODI_AUDIT=true`:

```shell
codi --audit
```

Logs are written to `~/.codi/audit/<session>.jsonl` and include:
- All tool calls with inputs and outputs
- API requests and responses
- User confirmations and denials
- Errors and aborts
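JSONL simply means one JSON object per line, which makes the audit log appendable and greppable. A sketch of writing one event (field names illustrative, not Codi's actual schema):

```python
import json
import time

def audit_event(log_file, kind: str, payload: dict) -> None:
    """Append one event as a single JSON line to an open log file."""
    entry = {"ts": time.time(), "kind": kind, **payload}
    log_file.write(json.dumps(entry) + "\n")
```

Each line stands alone, so a crashed session still leaves a valid, parseable log up to the last completed event.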
When using multi-agent orchestration (/delegate):
- Each worker runs in an isolated git worktree
- Permission requests are routed to the parent process
- All workers share the same user approval flow
- IPC uses Unix domain sockets (not network-exposed)
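Routing a worker's permission request to the parent over a Unix domain socket can be sketched as a tiny request/reply exchange. This is an illustrative sketch of the mechanism, not Codi's actual protocol; function names and the JSON message shape are assumptions:

```python
import json
import socket
import threading

def serve_approvals(sock_path: str, decide):
    """Parent side: answer one worker permission request over a Unix
    domain socket (filesystem-based, never network-exposed)."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)

    def handle():
        conn, _ = server.accept()
        with conn:
            request = json.loads(conn.recv(4096).decode())
            reply = {"approved": bool(decide(request))}
            conn.sendall(json.dumps(reply).encode())
        server.close()

    thread = threading.Thread(target=handle)
    thread.start()
    return thread

def ask_parent(sock_path: str, request: dict) -> bool:
    """Worker side: route a permission request to the parent and wait."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(sock_path)
        client.sendall(json.dumps(request).encode())
        return json.loads(client.recv(4096).decode())["approved"]
```

Because the socket lives on the filesystem, access is governed by ordinary file permissions rather than network ACLs, which is what keeps the approval channel off the network entirely.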
- Never commit API keys to version control
- Use environment variables for sensitive credentials
- Rotate API keys periodically
- Review tool operations before approving
- Be cautious with bash commands from untrusted sources
- Use the diff preview feature before file modifications
- Keep `.codi.local.json` in `.gitignore` (it contains your approval patterns)
- Don't share configuration files containing sensitive paths
- Review auto-approve settings carefully
- Review before approval - Always read the command/file before confirming
- Use version control - Codi works best in git repos for easy rollback
- Enable audit logging - For sensitive work, use `--audit`
- Minimal auto-approve - Only auto-approve read-only operations
- Regular updates - Keep Codi updated for security patches
- Isolated environments - Consider running in containers for untrusted projects
- Tool Approval System: Dangerous operations require explicit user approval
- Diff Preview: See exactly what changes will be made before confirming
- Dangerous Pattern Detection: Warns about potentially harmful bash commands
- Path Validation: Prevents access to files outside project directory
- Undo History: Recover from unintended file modifications
- Memory Bounds: Limits on message history prevent resource exhaustion
- Rate Limiting: Prevents API abuse and runaway loops
- Audit Logging: Complete session recording for forensic analysis
- No sandboxing - Codi has full user permissions
- Shell command visibility - Complex shell commands may be hard to audit
- Pattern-based detection - Can be bypassed with obfuscation
- Trust in AI provider - API responses are generally trusted
The following are in scope for security reports:
- Command injection vulnerabilities
- Path traversal attacks
- Credential/API key exposure
- Arbitrary code execution
- Authentication/authorization bypasses
- Memory exhaustion or DoS vectors
The following are out of scope:
- Issues in third-party dependencies (report these upstream)
- Social engineering attacks
- Physical security issues
- Issues requiring physical access to a user's machine
Security updates are released as patch versions. Subscribe to GitHub releases for notifications.
| Version | Date | Security Changes |
|---|---|---|
| 0.14.1 | 2026-01 | Path traversal protection, memory bounds |
| 0.14.0 | 2026-01 | Database cleanup on exit, rate limiter bounds |
| 0.13.0 | 2025-12 | Initial comprehensive security model |