# PenguinCode
AI-powered coding assistant CLI and VS Code extension using Ollama
Penguin Tech Inc © 2025
- 🤖 Multi-Agent System - ChatAgent orchestrates specialized Explorer/Executor agents
- 🔍 Multi-Engine Research - 5 search engines with MCP protocol support
- 🧠 Persistent Memory - mem0 integration for context across sessions
- 📚 Documentation RAG - Auto-detects your project's languages and libraries, fetches official documentation, and uses it for accurate, syntax-correct answers
- 🔌 MCP Integration - Extend with N8N, Flowise, and custom MCP servers
- 🌐 Client-Server Mode - gRPC server for remote Ollama and team deployments
- ⚡ GPU Optimized - Smart model switching for RTX 4060 Ti (8GB VRAM) or higher
- 🐧 Cross-Platform - Works on Linux, macOS, and Windows
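The ChatAgent/Explorer/Executor split above can be sketched roughly as follows. All class and method names here (`ChatAgent`, `route`, `handle`) are illustrative assumptions, and simple keyword matching stands in for the LLM's actual tool selection — this is not PenguinCode's real API:

```python
# Hypothetical sketch of an orchestrator routing to specialized sub-agents.

class ExplorerAgent:
    """Read-only specialist: searches and summarizes the codebase."""
    def handle(self, request: str) -> str:
        return f"explored: {request}"

class ExecutorAgent:
    """Write-capable specialist: edits files and runs commands."""
    def handle(self, request: str) -> str:
        return f"executed: {request}"

class ChatAgent:
    """Orchestrator: picks a specialist per request."""
    def __init__(self):
        self.explorer = ExplorerAgent()
        self.executor = ExecutorAgent()

    def route(self, request: str) -> str:
        # Naive keyword routing stands in for model-driven tool selection.
        if any(w in request.lower() for w in ("run", "edit", "write", "fix")):
            return self.executor.handle(request)
        return self.explorer.handle(request)
```

The point of the split is isolation: the Explorer never mutates state, so exploratory queries are safe to run in parallel, while the Executor is the single place to enforce confirmation or sandboxing.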
| Language | Detection | Doc Sources |
|---|---|---|
| Python | pyproject.toml, requirements.txt, *.py | Official docs + PyPI libraries |
| JavaScript/TypeScript | package.json, tsconfig.json | MDN, npm packages |
| Go | go.mod, *.go | go.dev, pkg.go.dev |
| Rust | Cargo.toml, *.rs | docs.rs, crates.io |
| OpenTofu/Terraform | *.tf, *.tofu, .terraform.lock.hcl | OpenTofu docs, provider registries |
| Ansible | ansible.cfg, playbook.yml, requirements.yml | Ansible docs, Galaxy collections |
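The detection column boils down to checking for marker files in the project root. A minimal sketch, using a subset of the marker files from the table above (the `detect_languages` helper is an assumption, not PenguinCode's internal function):

```python
import os

# Marker files per language, taken from the detection table above (subset).
MARKERS = {
    "Python": ["pyproject.toml", "requirements.txt"],
    "JavaScript/TypeScript": ["package.json", "tsconfig.json"],
    "Go": ["go.mod"],
    "Rust": ["Cargo.toml"],
    "OpenTofu/Terraform": [".terraform.lock.hcl"],
    "Ansible": ["ansible.cfg"],
}

def detect_languages(root: str) -> list[str]:
    """Return languages whose marker files exist directly under `root`."""
    present = set(os.listdir(root))
    return [lang for lang, files in MARKERS.items()
            if any(f in present for f in files)]
```

A fuller version would also glob for extension patterns like `*.py` and `*.tf`, but marker files alone already disambiguate most real projects.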
```bash
# Install
pip install -e .
penguincode setup

# Pull the required models (ollama pull takes one model per invocation)
ollama pull llama3.2:3b
ollama pull qwen2.5-coder:7b
ollama pull nomic-embed-text

# Run
penguincode chat
```

VS Code Extension: Download the VSIX from Releases.
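To confirm the pulled models are actually available, you can query Ollama's local HTTP API: `GET http://localhost:11434/api/tags` returns the installed models as `{"models": [{"name": ...}, ...]}`. This sketch checks such a response against the quick-start models; the HTTP call itself is omitted, and the helper name is our own, not part of PenguinCode:

```python
# Models the quick start pulls.
REQUIRED = ["llama3.2:3b", "qwen2.5-coder:7b", "nomic-embed-text"]

def missing_models(tags_response: dict, required=REQUIRED) -> list[str]:
    """Return required model names absent from an Ollama /api/tags payload."""
    installed = {m["name"] for m in tags_response.get("models", [])}
    # Ollama reports untagged pulls as "name:latest"; also match bare names.
    installed |= {name.split(":")[0] for name in installed}
    return [m for m in required if m not in installed]
```

For example, if only `llama3.2:3b` and `nomic-embed-text:latest` are installed, the helper reports `qwen2.5-coder:7b` as missing.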
```bash
# Start gRPC server (connects to local Ollama)
python -m penguincode.server.main

# Or use Docker
docker compose up -d

# Connect from client
penguincode chat --server localhost:50051
```

See the Architecture documentation for remote deployment with TLS and authentication.
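On the client side, the `--server` value is just a `host:port` pair. A small sketch of parsing it, where the default port 50051 comes from the example above and the helper function is hypothetical:

```python
def parse_server(address: str, default_port: int = 50051) -> tuple[str, int]:
    """Split a 'host:port' address into (host, port), defaulting both parts."""
    host, _, port = address.partition(":")
    return host or "localhost", int(port) if port else default_port
```

Accepting a bare hostname and filling in the default port keeps the CLI forgiving: `--server gpu-box` and `--server gpu-box:50051` mean the same thing.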
- Usage Guide - Installation, configuration, and usage
- Configuration Reference - Complete config.yaml reference
- Architecture - Client-server architecture and deployment modes
- Agent Architecture - ChatAgent, Explorer, Executor, Planner
- Tool Support - Ollama models with native tool calling
- MCP Integration - Extend with N8N, Flowise, and custom servers
- Memory - Persistent memory with mem0 integration
- Documentation RAG - Project-aware documentation indexing
- Security - Authentication, TLS, and secure code generation
- Contributing - How to contribute
AGPL-3.0 - See LICENSE for details
Support: support.penguintech.io | Homepage: www.penguintech.io
