Status: Alpha
A minimal, hackable agentic framework engineered to run entirely locally with Ollama or BitNet.
Inspired by the architecture of OpenClaw, rebuilt from scratch for local-first operation.
| Document | Description |
|---|---|
| ARCH.md | Technical documentation for developers (directory structure, core design, orchestrator modes) |
| CHANGELOG.md | Version history and release notes (includes LocalClaw history) |
| TESTS.md | Benchmark results, model recommendations, and testing guide |
| CREDITS.md | Acknowledges every project, inspiration, API, model creator, and specification that makes AgentNova possible |
- Zero dependencies — Uses Python stdlib only (urllib for HTTP)
- Ollama + BitNet backends — Switch with the `--backend` flag
- Dual API support — OpenResponses (`--api openre`) and OpenAI Chat-Completions (`--api openai`)
- Three-tier tool support — Native, ReAct, or none (auto-detected)
- Small model optimized — Fuzzy matching, argument normalization
- Built-in security — Path validation, command blocklist, SSRF protection
- Multi-agent orchestration — Router, pipeline, and parallel modes
- Soul Spec v0.5 — Persona packages with progressive disclosure
- ACP v1.0.5 integration — Agent Control Panel for monitoring and control
- AgentSkills spec — Skill loading with SPDX license validation
- Thinking models support — Automatic handling of qwen3, deepseek-r1 thinking mode
```shell
# Latest Development Release
pip install git+https://github.com/VTSTech/AgentNova.git --force-reinstall

# Last Stable (as stable as Alpha can be) Release
pip install agentnova
```

```shell
# Run a single prompt
agentnova run "What is 15 * 8?" --tools calculator

# Interactive chat
agentnova chat -m qwen2.5:0.5b --tools calculator,shell

# Autonomous agent mode
agentnova agent -m qwen2.5:7b --tools calculator,shell,write_file

# Use OpenAI Chat-Completions API
agentnova chat -m qwen2.5:0.5b --api openai

# List available models
agentnova models

# List available tools
agentnova tools
```

```python
from agentnova import Agent
from agentnova.tools import make_builtin_registry

# Create tools
tools = make_builtin_registry().subset(["calculator", "shell"])

# Create agent
agent = Agent(
    model="qwen2.5:0.5b",
    tools=tools,
    backend="ollama",
)

# Run
result = agent.run("What is 15 * 8?")
print(result.final_answer)
print(f"Completed in {result.total_ms:.0f}ms")
```

```python
from agentnova.backends import get_backend
from agentnova.core.types import ApiMode

# Use Chat-Completions mode with streaming
backend = get_backend("ollama", api_mode=ApiMode.OPENAI)
for chunk in backend.generate_completions_stream(
    model="qwen2.5:0.5b",
    messages=[{"role": "user", "content": "Hello!"}],
    response_format={"type": "json_object"},
):
    print(chunk["delta"], end="", flush=True)
```

```python
from agentnova.skills import validate_spdx_license, parse_compatibility

# Validate SPDX license identifier
valid, msg = validate_spdx_license("MIT")     # (True, "Valid SPDX identifier: MIT")
valid, msg = validate_spdx_license("Custom")  # (False, "Unknown license...")

# Parse compatibility requirements
compat = parse_compatibility("python>=3.8, ollama")
# Returns: {"python": ">=3.8", "runtimes": ["ollama"], "frameworks": []}
```

```python
from agentnova import Agent, Orchestrator, AgentCard

orchestrator = Orchestrator(mode="router")

# Register specialized agents
orchestrator.register(AgentCard(
    name="math_agent",
    description="Handles mathematical calculations",
    capabilities=["calculate", "math", "compute"],
    tools=["calculator"],
))
orchestrator.register(AgentCard(
    name="file_agent",
    description="Handles file operations",
    capabilities=["read", "write", "file"],
    tools=["read_file", "write_file"],
))

# Route tasks to the appropriate agent
result = orchestrator.run("Calculate 15 * 8 and save to file")
```

AgentNova supports three levels of tool use:
- Native — Models with built-in function calling (qwen2.5, llama3.1+, mistral, granite, functiongemma)
- ReAct — Text-based tool use via reasoning prompts (qwen2.5-coder, qwen3)
- None — Pure reasoning without tools
Tool support is auto-detected by running `agentnova models --tool-support`. Results are cached in `~/.cache/agentnova/tool_support.json`.
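As a rough sketch, the three tiers can be pictured as a simple fallback chain. The function name and exact precedence below are illustrative assumptions, not AgentNova's actual selection code:

```python
def select_tool_mode(native_support: bool, force_react: bool, tools_requested: bool) -> str:
    """Pick a tool tier: no tools requested -> none; otherwise prefer
    native function calling, falling back to text-based ReAct."""
    if not tools_requested:
        return "none"
    if force_react or not native_support:
        return "react"
    return "native"

print(select_tool_mode(True, False, True))   # native
print(select_tool_mode(False, False, True))  # react
print(select_tool_mode(True, False, False))  # none
```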
```shell
# Test and cache tool support for all models
agentnova models --tool-support

# Re-test (ignore cache)
agentnova models --tool-support --no-cache
```

You can also force ReAct mode:

```python
agent = Agent(model="qwen2.5:0.5b", force_react=True)
```

Configured model families with optimized prompts:
- qwen2.5 — Native tool support, excellent performance
- llama3.1/3.2/3.3 — Native tool support
- mistral/mixtral — Native tool support
- gemma2/gemma3 — ReAct mode, special prompting
- granite/granitemoe — Native tool support
- phi3 — Native tool support
- deepseek — Native with `<think>` tag handling
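Thinking-model handling can be pictured as stripping the reasoning block before the final answer is surfaced. A minimal sketch, assuming `<think>...</think>` delimiters; this is not AgentNova's actual parser:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> reasoning blocks, keeping only the answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_thinking("<think>15*8... carry the 4</think>The answer is 120."))
# The answer is 120.
```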
Built-in security for safe operation:
- Command blocklist — Blocks dangerous shell commands (rm, sudo, etc.)
- Path validation — Prevents access to sensitive directories
- SSRF protection — Blocks requests to local/internal URLs
- Injection detection — Detects shell injection patterns
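The kinds of checks listed above can be sketched as follows. The blocklist contents, private-range handling, and function names are illustrative assumptions; the real AgentNova rules may differ:

```python
import shlex
from urllib.parse import urlparse

# Hypothetical blocklist; the source names rm and sudo as examples.
BLOCKED_COMMANDS = {"rm", "sudo", "mkfs", "shutdown", "reboot"}

def command_blocked(command: str) -> bool:
    """Reject a shell command whose executable is on the blocklist."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in BLOCKED_COMMANDS

def url_blocked(url: str) -> bool:
    """SSRF-style check: reject local/internal hosts."""
    host = urlparse(url).hostname or ""
    return host in {"localhost", "127.0.0.1", "0.0.0.0"} or host.startswith("192.168.")

print(command_blocked("rm -rf /"))            # True
print(command_blocked("ls -la"))              # False
print(url_blocked("http://localhost:11434"))  # True
print(url_blocked("https://example.com"))     # False
```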
Environment variables:
```shell
# Backend URLs
OLLAMA_BASE_URL=https://your-ollama-server.com   # Default: http://localhost:11434
BITNET_BASE_URL=http://localhost:8765            # BitNet server URL
BITNET_TUNNEL=https://your-tunnel.com            # Alternative BitNet URL
ACP_BASE_URL=http://localhost:8766               # ACP server URL

# Agent settings
AGENTNOVA_BACKEND=ollama       # Default backend: ollama or bitnet
AGENTNOVA_MODEL=qwen2.5:0.5b   # Default model
AGENTNOVA_MAX_STEPS=10         # Maximum reasoning steps
AGENTNOVA_DEBUG=false          # Enable debug output
```

Check current configuration:
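These variables and their documented defaults can be resolved roughly like this; `resolve_config` is an illustrative helper, not AgentNova's actual config loader:

```python
import os

def resolve_config() -> dict:
    """Read AgentNova-style settings from the environment with fallbacks."""
    return {
        "backend": os.environ.get("AGENTNOVA_BACKEND", "ollama"),
        "model": os.environ.get("AGENTNOVA_MODEL", "qwen2.5:0.5b"),
        "max_steps": int(os.environ.get("AGENTNOVA_MAX_STEPS", "10")),
        "ollama_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    }

os.environ["AGENTNOVA_MAX_STEPS"] = "5"
print(resolve_config()["max_steps"])  # 5
```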
```shell
agentnova config
agentnova config --urls   # Show only URLs
```

| Option | Description |
|---|---|
| `--api openre\|openai` | API mode: OpenResponses (default) or OpenAI Chat-Completions |
| `--response-format text\|json` | Response format (Chat-Completions mode) |
| `--truncation auto\|disabled` | Truncation behavior for long responses |
| `--soul <path>` | Load Soul Spec persona package |
| `--soul-level 1-3` | Progressive disclosure level |
| `--num-ctx <tokens>` | Context window size (default: 4096) |
| `--timeout <seconds>` | Request timeout (default: 120) |
| `--acp` | Enable ACP (Agent Control Panel) logging |
| `--acp-url <url>` | ACP server URL |
The `localclaw` command is provided for backward compatibility:

```shell
# Both work identically
localclaw run "What is 2+2?"
agentnova run "What is 2+2?"
```

AgentNova includes a comprehensive suite of tests for validating agent capabilities across reasoning, knowledge, and tool usage:
```shell
# Basic agent test (no tools)
python -m agentnova.examples.00_basic_agent

# Quick 5-question diagnostic
python -m agentnova.examples.01_quick_diagnostic

# Tool usage tests (calculator, shell, datetime, file, python_repl)
python -m agentnova.examples.02_tool_test

# Logic and reasoning tests (BBH-style)
python -m agentnova.examples.03_reasoning_test

# GSM8K math benchmark (50 questions)
python -m agentnova.examples.04_gsm8k_benchmark

# Common sense reasoning (BIG-bench)
python -m agentnova.examples.05_common_sense

# Causal reasoning (BIG-bench)
python -m agentnova.examples.06_causal_reasoning

# Logical deduction (BIG-bench)
python -m agentnova.examples.07_logical_deduction

# Reading comprehension
python -m agentnova.examples.08_reading_comprehension

# General knowledge (BIG-bench)
python -m agentnova.examples.09_general_knowledge

# Implicit reasoning
python -m agentnova.examples.10_implicit_reasoning

# Analogical reasoning
python -m agentnova.examples.11_analogical_reasoning
```

| Test | Questions | Focus |
|---|---|---|
| Basic Agent | 1 | Single prompt, no tools |
| Quick Diagnostic | 5 | Calculator tool, multi-step reasoning |
| Tool Test | 10 | Calculator, shell, datetime, file, python_repl tools |
| Reasoning Test | 14 | Logic, deduction, patterns, spatial |
| GSM8K Benchmark | 50 | Math word problems |
| Common Sense | 25 | Physical properties, everyday reasoning |
| Causal Reasoning | 25 | Cause and effect relationships |
| Logical Deduction | 25 | Formal logic puzzles |
| Reading Comprehension | 25 | Passage-based Q&A |
| General Knowledge | 25 | Science, history, geography |
| Implicit Reasoning | 25 | Unstated assumptions and inference |
| Analogical Reasoning | 25 | Pattern matching and analogies |
| Model | Score | Time | Tool Support |
|---|---|---|---|
| functiongemma:270m | 5/5 (100%) | ~20s | native |
| granite4:350m | 5/5 (100%) | ~50s | native |
| qwen2.5:0.5b | 5/5 (100%) | 38s | native |
| qwen2.5-coder:0.5b | 5/5 (100%) | 93s | native |
| qwen3:0.6b | 5/5 (100%) | 70s | react |
| deepseek-r1:1.5b | 5/5 (100%) | ~305s | native |
All tested models achieve 100% on the Quick Diagnostic. Native models are ~2x faster than ReAct models due to direct API tool calling.
```shell
# Install dev dependencies
pip install -e ".[dev]"

# Run unit tests
pytest

# Format code
black agentnova
ruff check agentnova
```

MIT License - See LICENSE file for details.
VTSTech — https://www.vts-tech.org
Contributions welcome!