The terminal-native AI coding assistant that actually ships.
An AI pair programmer that lives in your terminal. Supports Claude (Anthropic) and GPT-5 (OpenAI). No IDE lock-in, no subscription tiers, no bloat—just you, the model of your choice, and your codebase.
Example landing page created from a one-line prompt.
99.3% edit success rate. North's deterministic edit tools with exact-match verification mean edits land correctly the first time. No fuzzy matching, no silent failures.
One-shots production-ready code. Complex React components, full API endpoints, beautiful landing pages—North builds them in a single pass. The kind of output that takes other tools 10+ iterations.
Direct API access. You bring your own API key (Anthropic or OpenAI). No middleman pricing, no usage caps, no "you've hit your daily limit." Pay only for what you use at provider rates.
200K context that manages itself. Real-time context tracking with visual indicators (🟢 green < 60%, 🟡 yellow 60-85%, 🔴 red > 85%). Auto-summarization kicks in at 92% context usage, compressing conversation history into structured summaries while preserving recent messages. No manual context pruning, no "start a new chat" interruptions.
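The thresholds above can be sketched as a simple mapping from usage ratio to status-line indicator. This is an illustrative sketch, not North's actual implementation; the function names are invented:

```typescript
// Map context usage (0–1) to the status-line indicator color.
// Thresholds mirror the ones described above; illustrative only.
type Indicator = "green" | "yellow" | "red";

function contextIndicator(usage: number): Indicator {
    if (usage < 0.60) return "green";
    if (usage <= 0.85) return "yellow";
    return "red";
}

// Auto-summarization fires once usage crosses 92%.
function shouldAutoSummarize(usage: number): boolean {
    return usage >= 0.92;
}
```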
Terminal-native speed. No Electron overhead, no browser tabs, no VS Code plugin lifecycle. North launches instantly and runs lean.
- **Ask Mode** (`Tab` to toggle): Read-only exploration. The model can search, read files, and analyze—but can't modify anything. Perfect for understanding unfamiliar codebases.
- **Agent Mode**: Full access to edit and shell tools. The model proposes, you approve.
Attach files to your messages with @ mentions—just like Cursor and Claude Code:
```
@src/components/Button.tsx Can you add an icon prop?
```
Start typing @ and North shows a fuzzy-matched list of project files (respecting .gitignore). Use Tab or Enter to attach, or Space/Esc to cancel and type a literal @. Attached files are automatically injected as context with a preview (first 30 lines) and symbol outline.
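A minimal sketch of the kind of fuzzy matching such a picker can use — a plain subsequence match; North's actual ranking algorithm may differ:

```typescript
// Return true if every character of `query` appears in `path`
// in order (classic subsequence fuzzy match). Illustrative only.
function fuzzyMatch(query: string, path: string): boolean {
    let i = 0;
    const q = query.toLowerCase();
    const p = path.toLowerCase();
    for (const ch of p) {
        if (i < q.length && ch === q[i]) i++;
    }
    return i === q.length;
}

const files = ["src/components/Button.tsx", "src/index.ts", "README.md"];
const matches = files.filter((f) => fuzzyMatch("btn", f));
// matches: ["src/components/Button.tsx"]
```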
The status line shows your current mode with a color-coded badge: [ASK] in blue, [AGENT] in green. Context usage appears on the right with a real-time percentage meter.
Every file edit shows an inline diff before writing. Every shell command requires explicit permission. You stay in control.
```
┌─ Editing src/components/Button.tsx ────────────────┐
│ - export const Button = ({ label }) => (           │
│ + export const Button = ({ label, icon }) => (     │
│       <button className={styles.button}>           │
│ +         {icon && <Icon name={icon} />}           │
│           {label}                                  │
│       </button>                                    │
│   );                                               │
├────────────────────────────────────────────────────┤
│ [a] Accept   [y] Always   [r] Reject               │
└────────────────────────────────────────────────────┘
```
Press y once to auto-accept all future edits in a session. Or build a shell command allowlist so trusted operations (bun test, npm run build) run without prompts.
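A sketch of how such an allowlist gate might work, assuming commands are matched exactly against pre-approved entries (North's real matching rules may be richer):

```typescript
// Hypothetical allowlist gate: a command runs without a prompt
// only if it exactly matches a pre-approved entry.
const allowlist = new Set(["bun test", "npm run build"]);

function needsApproval(command: string): boolean {
    return !allowlist.has(command.trim());
}
```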
Switch between models on the fly—even across providers:
```
/model opus-4.5      # Switch to Claude Opus 4.5
/model sonnet-4      # Switch to Claude Sonnet 4 (default)
/model gpt-5.1-codex # Switch to GPT-5.1 Codex
/model gpt-5-mini    # Switch to GPT-5 Mini for speed
```
Anthropic models: Sonnet 4, Opus 4, Opus 4.1, Sonnet 4.5, Haiku 4.5, Opus 4.5
OpenAI models: GPT-5.1, GPT-5.1 Codex, GPT-5.1 Codex Mini, GPT-5.1 Codex Max, GPT-5, GPT-5 Mini, GPT-5 Nano
Drop your .cursor/rules/*.mdc files in and North automatically loads them. Same project context, different interface.
On first run in a new project, North offers to learn your codebase. It runs 10 discovery passes covering architecture, conventions, domain vocabulary, data flow, dependencies, build workflows, hotspots, common tasks, and safety rails. The resulting profile is stored at `~/.north/projects/<hash>/profile.md` and automatically injected into every conversation.
Use `/learn` anytime to re-learn the project after major changes.
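The `<hash>` in the profile path suggests a stable digest of the project location. A sketch under that assumption — the actual hash algorithm and length are not documented here, so both are invented for illustration:

```typescript
import { createHash } from "node:crypto";

// Hypothetical: derive a stable per-project directory name from
// the absolute repo path. North's real scheme may differ.
function profileDir(projectPath: string): string {
    const hash = createHash("sha256").update(projectPath).digest("hex").slice(0, 12);
    return `~/.north/projects/${hash}/profile.md`;
}
```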
| Command | Usage | Description |
|---|---|---|
| `/model [name]` | `/model opus-4.5` or `/model` | Switch model (shows picker if no argument). Supports all Anthropic and OpenAI models. Selection persists across sessions. |
| `/mode [ask\|agent]` | `/mode ask` or `/mode` | Switch conversation mode (shows picker if no argument). Also toggleable via the `Tab` key. |
| `/thinking [on\|off]` | `/thinking on` or `/thinking` | Toggle extended thinking for Claude models that support it. |
| `/costs` | `/costs` | Show a cost breakdown dialog by model and provider (session + all-time). |
| `/learn` | `/learn` | Learn or relearn the project codebase. Overwrites the existing profile. |
| `/summarize [--keep-last N]` | `/summarize --keep-last 10` | Compress conversation history into a structured summary, keeping the last N messages verbatim (default: 10). |
| `/new` | `/new` | Start a fresh conversation (clears transcript and summary, preserves the shell session). |
| `/conversations` | `/conversations` | Open a picker to switch between saved conversations. |
| `/resume <id>` | `/resume abc123` | Switch to a specific conversation by ID. |
| `/help` | `/help` | List all available commands with descriptions. |
| `/quit` | `/quit` | Exit North cleanly. |
Commands can be mixed with regular messages: `/model sonnet-4 Can you help me refactor this?`
Every conversation is automatically persisted, so you can pick up where you left off. North stores conversations at `~/.north/conversations/` using append-only event logs for crash safety.
CLI subcommands:

```
north                  # Start a new conversation
north resume           # Open picker to select from recent conversations
north resume abc123    # Resume a specific conversation by ID
north conversations    # List all saved conversations with metadata
north list             # Alias for north conversations
```

In-session commands:
Use /conversations to open a picker and switch conversations, or /resume <id> to jump directly to a specific conversation. Your current conversation is saved automatically—switching doesn't lose progress.
Each conversation remembers its transcript, rolling summary, model selection, and the project it was started in.
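An append-only event log makes crash recovery a fold over events: state is rebuilt by replaying every line of the log. A minimal sketch of that idea — the event names and state shape here are invented for illustration, not North's actual schema:

```typescript
// Hypothetical event shapes; North's real log schema is not shown here.
type Event =
    | { type: "user_message"; text: string }
    | { type: "assistant_message"; text: string }
    | { type: "summary"; text: string };

interface State {
    transcript: string[];
    summary: string | null;
}

// Rebuild conversation state by replaying each line of a .jsonl log.
function replay(lines: string[]): State {
    const state: State = { transcript: [], summary: null };
    for (const line of lines) {
        const ev = JSON.parse(line) as Event;
        if (ev.type === "summary") state.summary = ev.text;
        else state.transcript.push(ev.text);
    }
    return state;
}

const log = [
    '{"type":"user_message","text":"hi"}',
    '{"type":"assistant_message","text":"hello"}',
    '{"type":"summary","text":"greeting exchange"}',
];
const state = replay(log);
```

Because writes only ever append, a crash mid-write can at worst truncate the final line; every earlier event replays cleanly.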
Download the latest release from GitHub Releases:
| Platform | Binary |
|---|---|
| macOS (Apple Silicon) | north-darwin-arm64 |
| macOS (Intel) | north-darwin-x64 |
| Linux (x64) | north-linux-x64 |
```
chmod +x north-darwin-arm64
mv north-darwin-arm64 /usr/local/bin/north
```

Set your API key(s):

```
export ANTHROPIC_API_KEY="sk-ant-..."  # For Claude models
export OPENAI_API_KEY="sk-..."         # For GPT models
```

Run:

```
north
```

Point at any repo:

```
north --path /path/to/repo
```

Requires Bun:
```
git clone https://github.com/timanthonyalexander/north.git
cd north
bun install
bun run dev
```

Build standalone binaries:
```
bun run build:binary           # current platform
bun run build:binary:mac-arm   # Apple Silicon
bun run build:binary:mac-x64   # Intel Mac
bun run build:binary:linux     # Linux x64
```

- `Enter` — Send message
- `Shift+Enter` or `Ctrl+J` — Add newline
- `@` — Start file mention (fuzzy search project files)
- `Tab` — Cycle modes (ask → agent) or accept autocomplete suggestion
- `Up`/`Down` — Navigate autocomplete suggestions
- `Space` (during file autocomplete) — Cancel file mention, type a literal `@`
- `Esc` — Close autocomplete
- `Ctrl+C` — Cancel operation (when processing) or exit (when idle)
Diff Review (file edits):

- `a` — Accept this edit only
- `y` — Always (enable auto-accept for all future edits)
- `r` — Reject this edit

Shell Review (commands):

- `r` — Run this command once
- `a` — Always (add to allowlist, no future prompts for this command)
- `y` — Auto all (auto-approve all future shell commands in this project)
- `d` — Deny this command

Command Review (e.g., model selection):

- `Up`/`Down` — Navigate options
- `Enter` — Select
- `Esc` — Cancel

Learning Prompt (first run):

- `y` — Accept (learn project)
- `n` — Decline (skip learning)
| Tool | Purpose |
|---|---|
| `list_root` | List repository root entries |
| `find_files` | Search files by glob pattern (e.g., `*.tsx`, `**/*.test.ts`) |
| `search_text` | Text/regex search with ripgrep acceleration. Supports file + line-range scoping |
| `read_file` | Read file content with line ranges and smart context modes (imports/full) |
| `get_line_count` | Quick file-size check before reading large files |
| `get_file_symbols` | Extract symbols (functions, classes, types) without reading the full file |
| `get_file_outline` | Hierarchical structure outline with line ranges |
| `read_readme` | Auto-detect and read README files |
| `detect_languages` | Analyze language composition by extension and size |
| `hotfiles` | Find frequently modified files via git history |
| `expand_output` | Retrieve full output from cached, digested tool results |
| `find_code_block` | Find code blocks (functions, classes) containing specific text |
| `read_around` | Read a context window around an anchor string |
| `find_blocks` | Get a structural map (coordinates) without content for HTML/CSS/JS files |
North includes specialized tools for efficiently navigating large files without reading their entire contents:

- Check size first with `get_line_count` to determine if special handling is needed
- Understand structure using `get_file_symbols` (functions, classes) or `get_file_outline` (hierarchical view)
- Find targets with `search_text` scoped to specific files and line ranges
- Read strategically using `read_file` with targeted line ranges and optional context modes
- Jump to place with `find_code_block` to locate functions/classes containing text
- Get context with `read_around` for focused windows around anchor strings
This approach reduces token usage by 60-80% when working with large files.
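The anchor-window idea behind `read_around` can be sketched like this — an illustration of the technique, not North's implementation:

```typescript
// Return `radius` lines of context on either side of the first
// line containing `anchor`. Illustrative sketch only.
function readAround(source: string, anchor: string, radius: number): string[] {
    const lines = source.split("\n");
    const hit = lines.findIndex((l) => l.includes(anchor));
    if (hit === -1) return [];
    const start = Math.max(0, hit - radius);
    const end = Math.min(lines.length, hit + radius + 1);
    return lines.slice(start, end);
}

const file = ["a", "b", "target line", "d", "e"].join("\n");
const window = readAround(file, "target", 1);
// window: ["b", "target line", "d"]
```

Returning only a few lines around the anchor instead of the whole file is where the token savings come from.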
| Tool | Purpose |
|---|---|
| `edit_replace_exact` | Replace exact text matches (deterministic, no fuzzy matching) |
| `edit_insert_at_line` | Insert content at a specific line number |
| `edit_apply_batch` | Apply multiple edits atomically (all-or-nothing) |
| `edit_after_anchor` | Insert content after a line containing anchor text |
| `edit_before_anchor` | Insert content before a line containing anchor text |
| `edit_replace_block` | Replace content between two anchor markers |
| `edit_by_anchor` | Unified anchor-based editing (insert before/after, replace line/block) |
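The determinism comes from exact matching with verification: if the target text does not occur exactly once, the edit is refused rather than guessed at. A sketch of that contract — the function shape is hypothetical, not North's actual code:

```typescript
// Hypothetical exact-match edit: fail loudly unless `target`
// occurs exactly once, so the edit can never land in the wrong place.
function replaceExact(content: string, target: string, replacement: string): string {
    const count = content.split(target).length - 1;
    if (count === 0) throw new Error("target not found (whitespace must match exactly)");
    if (count > 1) throw new Error(`target is ambiguous (${count} matches)`);
    return content.replace(target, replacement);
}
```

Refusing ambiguous or missing matches is what turns a silent mis-edit into an immediate, retryable error.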
| Tool | Purpose |
|---|---|
| `shell_run` | Execute shell commands with a 60s timeout. Build an allowlist to skip approval prompts for trusted commands |
New files are created using a streaming-to-disk protocol rather than tool calls. The model outputs file contents directly in its response:
```
<NORTH_FILE path="src/components/Button.tsx">
export const Button = ({ label }) => (
    <button>{label}</button>
);
</NORTH_FILE>
```
Why streaming? Provider timeouts (~90 seconds) can interrupt large file generation. Tool calls buffer in memory and lose all content on timeout. Direct-to-disk streaming preserves partial content and supports auto-continuation.
How it works:

- Model outputs a `<NORTH_FILE path="...">` tag in its response
- Content is written directly to disk as it streams (no memory buffering)
- On the `</NORTH_FILE>` close, a diff review is triggered
- Accept: the file is already written, nothing more to do
- Reject: the file is deleted from disk
Auto-continuation: If the provider times out mid-file, North detects the incomplete block, sends a continuation prompt with context (last 30 lines), and the model resumes with <NORTH_FILE mode="append">. This repeats until complete or max retries (3) exceeded.
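Detecting an interrupted file block reduces to checking whether the last opened tag was ever closed. A sketch of that check under the tag format shown above (the function itself is an illustration, not North's code):

```typescript
// Returns true when a <NORTH_FILE ...> tag was opened but the
// stream ended before its </NORTH_FILE> close — the signal that
// would trigger a continuation prompt. Illustrative sketch only.
function hasIncompleteFileBlock(stream: string): boolean {
    const lastOpen = stream.lastIndexOf("<NORTH_FILE");
    if (lastOpen === -1) return false;
    return stream.indexOf("</NORTH_FILE>", lastOpen) === -1;
}
```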
All tools respect .gitignore. Output is automatically truncated to prevent context overflow.
Global config (`~/.config/north/config.json`):

- `selectedModel` — persisted model selection across sessions

Conversations (`~/.north/conversations/`):

- `<id>.jsonl` — append-only event log per conversation (crash-safe)
- `<id>.snapshot.json` — optional snapshot for fast resume
- `index.json` — conversation metadata for listing

Project profiles (`~/.north/projects/<hash>/`):

- `profile.md` — learned project context (architecture, conventions, workflows)
- `declined.json` — marker if learning was declined

Project config (`.north/` at your repo root):

- `allowlist.json` — pre-approved shell commands
- `autoaccept.json` — auto-accept edit settings (press `y` in any diff review to enable)

Logs: `~/.local/state/north/north.log` (JSON-lines format)
Search is slow? Install ripgrep for 10-100x faster searches: `brew install ripgrep` or `apt install ripgrep`. North falls back to a pure-JS implementation if ripgrep isn't available.
Edit tool fails? North's edit tools require exact text matches, including whitespace. The model will re-read the file and retry—it usually self-corrects within 1-2 attempts.
Shell command times out? Commands have a 60-second timeout by default. Each command runs in a fresh bash process via Bun's `Bun.spawn()` API.
Context overflow? Auto-summarization triggers at 92% context usage. You can also run `/summarize` manually to compress the conversation history at any time.
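The keep-last-N behavior of `/summarize` can be sketched as splitting the transcript and replacing the older portion with a single summary entry. This is a simplified illustration — in North the summary text is model-generated, not a placeholder:

```typescript
// Simplified: collapse all but the last `keepLast` messages into a
// single summary placeholder. Real summaries are model-generated.
function compress(messages: string[], keepLast: number = 10): string[] {
    if (messages.length <= keepLast) return messages;
    const older = messages.slice(0, messages.length - keepLast);
    const summary = `[summary of ${older.length} earlier messages]`;
    return [summary, ...messages.slice(-keepLast)];
}

const history = Array.from({ length: 15 }, (_, i) => `msg ${i}`);
const compressed = compress(history, 10);
// compressed: 11 entries — one summary plus the last 10 messages
```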
Model not available? Ensure you've set the correct API key:

- Claude models require `ANTHROPIC_API_KEY`
- GPT models require `OPENAI_API_KEY`
```
bun run dev                    # run North in development
bun run dev --log-level debug  # verbose logging
bun run build                  # build JS bundle
bun run typecheck              # TypeScript type checking
bun run lint                   # ESLint linting
bun run lint:fix               # ESLint with auto-fix
bun run format                 # format code with Prettier
bun run format:check           # check Prettier formatting
bun run check                  # all checks (typecheck + lint + format:check)
bun test                       # run test suite
bun test --watch               # run tests in watch mode
```

Code Quality:
- ESLint with TypeScript, React, and React Hooks plugins
- Prettier with 4-space indentation, double quotes, semicolons
- Pre-commit hooks (type check + lint + format verification)
- Enable hooks: `bun run prepare` or `git config core.hooksPath .githooks`
Architecture: docs/implementation.md
| Category | North | Claude Code | Cursor | Aider | OpenAI Codex CLI | Gemini CLI | GitHub Copilot | Cline | Windsurf |
|---|---|---|---|---|---|---|---|---|---|
| Pricing | Direct API keys (you pay provider) | Claude Pro/Max subscription (and other auth options exist) | Subscription tiers; Pro includes usage and "unlimited Auto" routing | Open-source; you pay model/provider usage | Uses OpenAI; CLI is open-source | Free tier quotas + paid tiers (Google account based) | Subscription plans (Free/Pro/Pro+/Business/Enterprise); CLI included in paid | Extension is free; pay inference via your provider (or Cline provider) | Subscription credit plans (Pro/Teams/Enterprise) |
| Environment | Terminal | Terminal CLI | Desktop IDE | Terminal | Terminal | Terminal | IDE + terminal CLI tool | VS Code extension | Desktop IDE |
| Context | 200K + auto-summary | Model-dependent (Claude) | Model-dependent; plan mentions "maximum context windows" | Model-dependent; you choose provider/model | Model-dependent; agent runs locally and uses chosen model | Up to 1M token context (Gemini 2.5 Pro) | Model-dependent; plan-gated premium requests | Model-dependent; depends on chosen provider/model | "Fast Context" marketed; model-dependent |
| Control | Approve every edit/command | Permission rules are configurable and can be remembered | Agent Review exists; can manage multi-file diffs | Git-based workflow and diffs are core | Supports approval modes and review flows | Built-in tools (file ops, shell); user-driven CLI flow | Chat/agent features with plan request allowances; CLI is available | Agentic edits inside VS Code; depends on configuration | IDE agent workflow; plan-based usage |
| Transparency | Full diff review | Permission + settings model; CLI-first visibility | Review UI for diffs | Very transparent via patches/commits | Review + diff-oriented workflows | CLI output + open-source; tool actions visible | Mixed (suggestions, chat, agent features) | Visible edits in-editor; still an IDE extension | IDE-based; visibility depends on workflow |
Legend: ✓ = yes, ✗ = no, ~ = partial / limited / depends on plan or model
| Capability | North | Claude Code | Codex CLI | Gemini CLI | Aider | Cline | Cursor | Windsurf | Copilot CLI |
|---|---|---|---|---|---|---|---|---|---|
| Terminal-native interactive UI | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ |
| BYOK (bring your own API key) | ✓ | ✓ | ✓ | ~ | ✓ | ✓ | ✓ | ~ | ✗ |
| Multi-provider switching | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ~ | ✗ |
| Explicit approvals for writes/shell | ✓ | ✓ | ✓ | ✓ | ~ | ✓ | ~ | ~ | ~ |
| Fine-grained allowlist controls | ✓ | ✓ | ✓ | ~ | ~ | ✓ | ~ | ~ | ✗ |
| Deterministic edit primitives | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Inline diff review (first-class UX) | ✓ | ~ | ~ | ~ | ✓ | ✓ | ✓ | ✓ | ✗ |
| Cursor rules ingestion | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| @ file mentions | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ~ | ✗ |
| No vendor subscription required | ✓ | ~ | ~ | ✓ | ✓ | ✓ | ~ | ~ | ✗ |
| Open-source core | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
| 1M-token context option | ✗ | ✗ | ✗ | ✓ | ~ | ✗ | ✗ | ✗ | ✗ |
| MCP / external tool servers | ✗ | ✗ | ✓ | ✓ | ~ | ✓ | ~ | ~ | ✗ |
| GitHub PR agent workflows | ✗ | ✗ | ✓ | ~ | ✗ | ~ | ✗ | ✗ | ✗ |
Where North stands out:
North is the only tool here that combines a terminal-native UI, multi-provider BYOK, and deterministic edit primitives with exact-match verification. The safety model doesn't require trust: every risky operation shows an inline diff with explicit approval. It also carries familiar UX features like @ file mentions for quick context attachment. You're not locked to one editor, one vendor, or one AI provider.
North's roadmap opportunities:
- MCP plugin ecosystem (Codex CLI, Gemini CLI, and Cline have mature plugin support)
- PR automation workflows (Codex CLI leads here with tag bots and automated reviews)
- Ultra-large context (Gemini CLI offers 1M-token context windows with Gemini 2.5 Pro)
Logs record events and metadata (tool names, durations, token counts) but not file contents or prompts. Your messages go directly to the provider's API (Anthropic or OpenAI)—no intermediary servers, no data collection.
North. Vibe coding peak.
