Your personal AI mascot that runs 100% locally. No cloud, no data leaving your machine.
- Auto-detects your hardware (GPU, VRAM, RAM, CPU)
- Picks the best AI model your system can run
- Downloads and runs it locally via Ollama
- Can execute system commands (with your permission)
- Fully private — everything stays on your PC
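Under the hood, "runs it locally via Ollama" means talking to the Ollama server on `localhost:11434`. A minimal sketch of such a call, using Ollama's standard `/api/generate` endpoint (the function names here are illustrative, not zbot's actual internals):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request for the local Ollama server."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text.
    Requires `ollama serve` to be running."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the request never leaves `localhost`, nothing is sent to a remote service.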
| Your Hardware | Model Selected |
|---|---|
| 16GB+ VRAM, 32GB+ RAM | Llama 3.1 70B |
| 8GB+ VRAM, 16GB+ RAM | Llama 3.1 8B |
| 4GB+ VRAM, 8GB+ RAM | Llama 3.2 3B |
| 2GB+ VRAM, 4GB+ RAM | Llama 3.2 1B |
| Anything else | TinyLlama |
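The table above boils down to a few threshold checks. A minimal sketch of that selection logic, assuming VRAM and RAM are already detected in GB (the function and Ollama-style model tags are illustrative, not zbot's actual code):

```python
def pick_model(vram_gb: float, ram_gb: float) -> str:
    """Return the largest model tier the detected hardware can run,
    mirroring the table above; falls back to TinyLlama."""
    tiers = [
        (16, 32, "llama3.1:70b"),
        (8, 16, "llama3.1:8b"),
        (4, 8, "llama3.2:3b"),
        (2, 4, "llama3.2:1b"),
    ]
    for min_vram, min_ram, model in tiers:
        # Both the VRAM and RAM floors must be met for a tier to qualify
        if vram_gb >= min_vram and ram_gb >= min_ram:
            return model
    return "tinyllama"  # anything else
```

Tiers are checked largest-first, so a machine that clears several rows gets the biggest model it can hold.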
```sh
# Install Ollama first
curl -fsSL https://ollama.com/install.sh | sh

# Clone and run
git clone https://github.com/zanicool/zbot.git
cd zbot
python3 zbot.py
```

```
You > what's my disk usage?
ZBOT > Let me check that for you.
⚡ Run 'df -h'? [y/n] y
```

Type `exit` to quit.
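The `[y/n]` gate in the session above can be sketched as a confirm-then-run wrapper: nothing executes unless you explicitly approve (a hypothetical helper, not zbot's real implementation):

```python
import subprocess
from typing import Optional

def run_with_permission(cmd: str, ask=input) -> Optional[str]:
    """Prompt before executing a shell command; run it only on an explicit 'y'."""
    answer = ask(f"⚡ Run '{cmd}'? [y/n] ").strip().lower()
    if answer != "y":
        return None  # user declined; the command is never executed
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout
```

Passing `ask` as a parameter keeps the prompt mockable in tests while defaulting to an interactive `input()` at the terminal.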
- Linux
- Python 3.6+
- Ollama
- NVIDIA/AMD GPU recommended (CPU-only works but slower)
MIT