This is a proof of concept for an AI-powered hedge fund. The goal of this project is to explore the use of AI to make trading decisions. This project is for educational purposes only and is not intended for real trading or investment.
This system employs several agents working together:
- Aswath Damodaran Agent - The Dean of Valuation, focuses on story, numbers, and disciplined valuation
- Ben Graham Agent - The godfather of value investing, only buys hidden gems with a margin of safety
- Bill Ackman Agent - An activist investor, takes bold positions and pushes for change
- Cathie Wood Agent - The queen of growth investing, believes in the power of innovation and disruption
- Charlie Munger Agent - Warren Buffett's partner, only buys wonderful businesses at fair prices
- Michael Burry Agent - The Big Short contrarian who hunts for deep value
- Mohnish Pabrai Agent - The Dhandho investor, who looks for doubles at low risk
- Peter Lynch Agent - Practical investor who seeks "ten-baggers" in everyday businesses
- Phil Fisher Agent - Meticulous growth investor who uses deep "scuttlebutt" research
- Rakesh Jhunjhunwala Agent - The Big Bull of India
- Stanley Druckenmiller Agent - Macro legend who hunts for asymmetric opportunities with growth potential
- Warren Buffett Agent - The Oracle of Omaha, seeks wonderful companies at a fair price
- Valuation Agent - Calculates the intrinsic value of a stock and generates trading signals
- Sentiment Agent - Analyzes market sentiment and generates trading signals
- Fundamentals Agent - Analyzes fundamental data and generates trading signals
- Technicals Agent - Analyzes technical indicators and generates trading signals
- Risk Manager - Calculates risk metrics and sets position limits
- Portfolio Manager - Makes final trading decisions and generates orders
Note: the system does not actually make any trades.
This project is for educational and research purposes only.
- Not intended for real trading or investment
- No investment advice or guarantees provided
- Creator assumes no liability for financial losses
- Consult a financial advisor for investment decisions
- Past performance does not indicate future results
By using this software, you agree to use it solely for learning purposes.
- How to Install
- OpenClaw Skill Install
- Environment Configuration
- How to Run
- How to Contribute
- Feature Requests
- License
Before running the AI Hedge Fund, install the Python and frontend dependencies.

```bash
git clone https://github.com/AceInAndroid/ai-hedge-fund.git
cd ai-hedge-fund
```

This project uses Poetry and supports Python 3.11 and 3.12. The recommended version is 3.11.
If Python is not installed yet, one practical way is to install it with `pyenv`.

Install `pyenv` first (on macOS, via Homebrew):

```bash
brew install pyenv
```

Install Python 3.11:

```bash
pyenv install 3.11.11
pyenv local 3.11.11
python3 --version
```

Install Python 3.12:
```bash
pyenv install 3.12.9
pyenv local 3.12.9
python3 --version
```

If Poetry is not installed yet, install it first:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```

If the shell still cannot find `poetry`, add Poetry to your PATH and reopen the terminal:
```bash
export PATH="$HOME/.local/bin:$PATH"
```

Then install the project dependencies:
```bash
poetry env use python3
poetry install
```

If `poetry install` fails, check these common issues first:

- `poetry: command not found`: install Poetry with the command above, then reopen the terminal.
- Python version mismatch: make sure `python3 --version` is compatible with this project.
- Broken environment from an earlier install: try `poetry env remove --all` and then run `poetry install` again.
The web UI lives in `app/frontend`.

```bash
cd app/frontend
npm install
cd ../..
```

This repository can also be packaged as an OpenClaw-installable skill.
```bash
bash scripts/install-openclaw-skill.sh
```

This installs the skill to `~/.agents/skills/ai-hedge-fund`.

To build a distributable package instead:

```bash
bash scripts/package-openclaw-skill.sh
```

The packaged skill directory will be created at `dist/openclaw-skill/ai-hedge-fund`.
You can then copy or symlink it manually:

```bash
mkdir -p ~/.agents/skills
cp -R dist/openclaw-skill/ai-hedge-fund ~/.agents/skills/
```

- The packaged skill contains `SKILL.md`, `SKILL.toon`, `agents/openai.yaml`, and the required `references/` files.
- The installed skill assumes the active workspace is still the `ai-hedge-fund` repository root.
- Runtime execution still uses this repository's own `scripts/` entrypoints.
- To expose the skill contract to external controllers, run:

```bash
bash scripts/export-skill-manifest.sh
```

To create your environment file, copy the example:

```bash
cp .env.example .env
```

The `.env` file must be created in the project root directory, at the same level as `README.md`, `pyproject.toml`, and `src/`.
Example root path:

```
ai-hedge-fund/
├── .env
├── .env.example
├── README.md
├── pyproject.toml
├── src/
└── app/
```
You can confirm you are in the project root with:
```bash
pwd
ls -la .env .env.example
```

How to open `.env`:
- VS Code: `code .env`
- macOS terminal editor: `nano .env`
- Linux terminal editor: `nano .env`
- Windows PowerShell: `notepad .env`
If you use nano, edit the file, then press Ctrl+O to save and Ctrl+X to exit.
At least one LLM provider must be configured; otherwise, the agents cannot run.
You can use any one of these methods:
- Official OpenAI
- Official Anthropic
- OpenAI-compatible API
- Anthropic-compatible API
- LM Studio local server
All environment variable examples below go into the root `.env` file you just created.

Each line uses this format:

```
KEY=value
```

Do not wrap values in JSON, and do not add commas at the end of lines.
Example:

```
OPENAI_API_KEY=sk-xxxxx
OPENAI_API_BASE=https://api.openai.com/v1
```

Official OpenAI:

```
OPENAI_API_KEY=your-openai-api-key
OPENAI_API_BASE=https://api.openai.com/v1
```

Official Anthropic (a DashScope Anthropic-compatible endpoint is shown here):

```
ANTHROPIC_AUTH_TOKEN=your-dashscope-api-key
ANTHROPIC_BASE_URL=https://coding.dashscope.aliyuncs.com/apps/anthropic
ANTHROPIC_MODEL=qwen3.5-plus
```

OpenAI-compatible API:

```
OPENAI_COMPATIBLE_API_KEY=your-compatible-api-key
OPENAI_COMPATIBLE_BASE_URL=https://your-openai-compatible-host/v1
OPENAI_COMPATIBLE_MODEL=your-model-name
```

Anthropic-compatible API:

```
ANTHROPIC_COMPATIBLE_API_KEY=your-compatible-api-key
ANTHROPIC_COMPATIBLE_BASE_URL=https://your-anthropic-compatible-host/v1
ANTHROPIC_COMPATIBLE_MODEL=your-model-name
```

LM Studio local server:

```
LM_STUDIO_BASE_URL=http://127.0.0.1:1234/v1
LM_STUDIO_API_KEY=lm-studio
LM_STUDIO_MODEL=your-local-model-name
```

This is a valid example of what the root `.env` file can look like:
```
# LLM provider: DashScope Anthropic-compatible
ANTHROPIC_AUTH_TOKEN=your-dashscope-api-key
ANTHROPIC_BASE_URL=https://coding.dashscope.aliyuncs.com/apps/anthropic
ANTHROPIC_MODEL=qwen3.5-plus

# Financial data
FINANCIAL_DATASETS_API_KEY=your-financial-datasets-api-key

# A-share price fallback order
PRICE_DATA_SOURCES=financial_datasets,akshare,baostock,tushare,tencent,xueqiu,baidu
TUSHARE_TOKEN=your-tushare-token
```

After editing `.env`, save the file and restart the CLI command or web backend so the new values are loaded.
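To illustrate the expected `KEY=value` format, here is a minimal sketch of how such lines are parsed. This is not the project's actual loader; it only shows why JSON wrapping and trailing commas would break the file:

```python
# Minimal sketch of KEY=value parsing; the project uses its own .env loader,
# so this exists only to illustrate the expected file format.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

example = "# LLM provider\nANTHROPIC_MODEL=qwen3.5-plus\n"
print(parse_env(example))  # {'ANTHROPIC_MODEL': 'qwen3.5-plus'}
```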
If another system already prepared detailed market or fundamental data, you can pass it directly to the scripts and skip repeated fetching.
Use:

```
--data-file ./sample-data.json
```

Add `--data-only` when you want to disable all external fetching and use only the supplied file.

Example:

```bash
bash scripts/run-analysis.sh 600519.SH,000001.SZ --data-file ./sample-data.json --data-only
```

The JSON file can contain per-ticker `prices`, `financial_metrics`, `line_items`, `company_news`, `insider_trades`, and `market_cap`. See the skill reference file `references/preloaded-data.md` for the full schema.
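As an illustration only, a minimal file with hypothetical values might be built like this; the field names echo the keys listed above, but the authoritative shapes are defined in `references/preloaded-data.md`, not by this sketch:

```python
import json

# Hypothetical per-ticker payload for illustration; the exact shapes of each
# field are defined by references/preloaded-data.md, not this sketch.
sample = {
    "600519.SH": {
        "prices": [{"date": "2024-01-02", "close": 1685.0}],
        "financial_metrics": {},
        "line_items": {},
        "company_news": [],
        "insider_trades": [],
        "market_cap": 2.1e12,
    }
}

with open("sample-data.json", "w") as f:
    json.dump(sample, f, indent=2)
```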
The CLI no longer requires you to choose from a built-in model catalog first.
Current behavior:

- CLI reads the configured provider from `.env`.
- CLI reads the configured model name from the matching `*_MODEL` variable.
- If only one provider is configured, it is selected automatically.
- If multiple providers are configured, CLI asks you to choose among the configured providers only.
- If a provider is configured but its `*_MODEL` value is missing, CLI asks you to type the model name manually.
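The rules above can be sketched roughly as follows. The variable catalog and the prompt helper are simplified stand-ins, not the CLI's actual code:

```python
# Simplified stand-in for the CLI's provider selection; env-var names mirror
# the configuration section above, but the real catalog may differ.
CATALOG = {
    "anthropic": ("ANTHROPIC_AUTH_TOKEN", "ANTHROPIC_MODEL"),
    "openai_compatible": ("OPENAI_COMPATIBLE_API_KEY", "OPENAI_COMPATIBLE_MODEL"),
    "lm_studio": ("LM_STUDIO_API_KEY", "LM_STUDIO_MODEL"),
}

def ask(prompt: str) -> str:
    # Placeholder for an interactive prompt.
    return input(prompt)

def pick_provider(env: dict):
    configured = [p for p, (key_var, _) in CATALOG.items() if env.get(key_var)]
    if not configured:
        raise RuntimeError("No LLM provider configured in .env")
    if len(configured) == 1:
        provider = configured[0]                      # selected automatically
    else:
        provider = ask(f"Choose one of {configured}: ")  # ask only among configured
    model = env.get(CATALOG[provider][1]) or ask("Model name: ")
    return provider, model
```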
For example, if your `.env` contains:

```
ANTHROPIC_AUTH_TOKEN=YOUR_API_KEY
ANTHROPIC_BASE_URL=https://coding.dashscope.aliyuncs.com/apps/anthropic
ANTHROPIC_MODEL=qwen3.5-plus
```

then running:

```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA
```

will use:

- provider: `Anthropic`
- model: `qwen3.5-plus`

without asking you to pick from built-in Claude or GPT model lists.
For US tickers, the existing `FINANCIAL_DATASETS_API_KEY` flow remains the main source.

```
FINANCIAL_DATASETS_API_KEY=your-financial-datasets-api-key
```

For mainland China tickers, the project now supports price fallback across `financial_datasets`, `akshare`, `baostock`, `tushare`, `tencent`, `xueqiu`, and `baidu`.

Use this configuration in `.env` if you want to control the fallback order:

```
PRICE_DATA_SOURCES=financial_datasets,akshare,baostock,tushare,tencent,xueqiu,baidu
TUSHARE_TOKEN=your-tushare-token
```

Notes:

- `TUSHARE_TOKEN` is only required if `tushare` is included in `PRICE_DATA_SOURCES`.
- `akshare`, `baostock`, and `tushare` are declared in `pyproject.toml`; run `poetry install` after pulling the latest changes.
- The A-share fallback currently applies to price data. Financial metrics, company news, and insider trades still use the original financial data flow.
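Conceptually, the fallback is a first-success chain over the configured order. A minimal sketch, with a fake fetcher standing in for the real data-source clients:

```python
import os

def fetch_prices(source: str, ticker: str):
    # Fake fetcher for illustration: pretend only akshare has data.
    return {"ticker": ticker, "closes": [100.0, 101.5]} if source == "akshare" else None

def get_prices(ticker: str):
    """Try each source in PRICE_DATA_SOURCES order; return the first hit."""
    order = os.environ.get("PRICE_DATA_SOURCES", "financial_datasets").split(",")
    for source in (s.strip() for s in order):
        data = fetch_prices(source, ticker)
        if data:
            return source, data
    raise RuntimeError(f"No configured source returned prices for {ticker}")

os.environ["PRICE_DATA_SOURCES"] = "financial_datasets,akshare,baostock"
print(get_prices("600519.SH"))  # financial_datasets returns nothing, so the chain falls through to akshare
```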
If you use the web application:
- `.env` is the easiest way to set default configuration for local development.
- The Settings page in the web UI can also save provider keys, URLs, `PRICE_DATA_SOURCES`, and `TUSHARE_TOKEN`.
- Web-saved settings are passed into the backend request and can override missing environment values.

Where to configure what:

- Want defaults for local development: edit the root `.env` file.
- Want to change configuration from the browser: open the web app, then go to `Settings`.
- Want both: keep stable defaults in `.env`, and use the Settings page for temporary or per-environment adjustments.
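The override behavior can be pictured as a simple merge in which non-empty web-saved values win. This is a sketch of the described precedence, not the backend's actual code:

```python
def effective_config(env_defaults: dict, web_settings: dict) -> dict:
    """Start from .env defaults; let non-empty web-saved values fill gaps or override."""
    merged = dict(env_defaults)
    merged.update({k: v for k, v in web_settings.items() if v})  # empty values are ignored
    return merged

cfg = effective_config(
    {"PRICE_DATA_SOURCES": "financial_datasets,akshare", "TUSHARE_TOKEN": ""},
    {"TUSHARE_TOKEN": "token-from-settings-page", "PRICE_DATA_SOURCES": ""},
)
print(cfg)  # TUSHARE_TOKEN comes from the web; PRICE_DATA_SOURCES keeps the .env default
```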
You can run the AI Hedge Fund directly from the terminal for research, scripting, or debugging.
Before selecting analysts, you can inspect the built-in capability catalog:

```bash
bash scripts/list-agents.sh
```

For JSON output that another controller or script can consume:

```bash
bash scripts/list-agents.sh --format json
```

For a full external-skill manifest that another controller can consume directly:

```bash
bash scripts/export-skill-manifest.sh
```

This catalog includes:

- selectable analyst agents
- always-on system agents such as `risk_manager` and `portfolio_manager`
- each agent's strategy family, analysis method, execution mode, best-fit use case, A-share readiness, and data requirements
- a repo-managed external interface where callers do not need to supply LLM keys or model names by default
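For orientation, a single catalog entry could look roughly like the following. Every field name and value here is hypothetical; the real output of `scripts/list-agents.sh --format json` is authoritative:

```python
import json

# Hypothetical catalog entry; field names echo the attributes listed above,
# but the actual JSON shape comes from scripts/list-agents.sh.
entry = {
    "id": "warren_buffett",
    "kind": "selectable_analyst",
    "strategy_family": "value",
    "analysis_method": "fundamental, LLM-assisted",
    "execution_mode": "llm",
    "best_fit_use_case": "quality businesses at fair prices",
    "a_share_ready": True,
    "data_requirements": ["financial_metrics", "line_items", "market_cap"],
}
print(json.dumps(entry, indent=2))
```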
To run the analysis:

```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA
```

If your `.env` already contains model settings, CLI uses them directly.

Example with DashScope Anthropic-compatible:

```
ANTHROPIC_AUTH_TOKEN=YOUR_API_KEY
ANTHROPIC_BASE_URL=https://coding.dashscope.aliyuncs.com/apps/anthropic
ANTHROPIC_MODEL=qwen3.5-plus
```

Run:

```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA
```

CLI will use `qwen3.5-plus` automatically.

With external preloaded data:

```bash
bash scripts/run-analysis.sh 600519.SH,000001.SZ --data-file ./sample-data.json --data-only
```

For A-share tickers:

```bash
poetry run python src/main.py --ticker 600519.SH,000001.SZ
```

You can optionally pass a time range:

```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA --start-date 2024-01-01 --end-date 2024-03-01
```

If you want to use local models through Ollama:

```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA --ollama
```

To run the backtester:

```bash
poetry run python src/backtester.py --ticker AAPL,MSFT,NVDA
```

The backtester also supports `--start-date`, `--end-date`, and `--ollama`.
- `--ticker` uses a comma-separated list.
- A-share tickers can be written as `600519.SH`, `000001.SZ`, or six-digit codes such as `600519`.
- Make sure your `.env` is configured before running CLI commands.
- `--model-provider` and `--model` are optional overrides for CLI runs.
- If `.env` already defines `ANTHROPIC_MODEL`, `OPENAI_COMPATIBLE_MODEL`, `LM_STUDIO_MODEL`, and similar variables, CLI uses those values directly.

Example override:

```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA --model-provider anthropic --model qwen3.5-plus
```

Example with directly injected data:

```bash
bash scripts/run-analysis.sh 600519.SH,000001.SZ --data-file ./sample-data.json --data-only
```

Example with explicit analyst selection:

```bash
bash scripts/run-analysis.sh KO,AXP --analysts ben_graham,warren_buffett,valuation_analyst
```

The web application is the easiest way to switch models, edit flows, and manage saved settings.
macOS / Linux:

```bash
bash app/run.sh
```

Windows:

```bat
app\run.bat
```

Alternatively, run the backend and frontend separately.

Backend:

```bash
poetry run uvicorn app.backend.main:app --reload --host 127.0.0.1 --port 8000
```

Frontend:

```bash
cd app/frontend
npm run dev
```

After startup:

- Frontend: `http://localhost:5173`
- Backend API: `http://localhost:8000`
- Swagger docs: `http://localhost:8000/docs`
- Open the Settings page and fill in your LLM/data-source configuration.
- Choose model provider and model for each node if needed.
- Enter tickers and date range.
- Run the analysis or the backtest from the UI.
You can still find app-specific details in `app/README.md`.
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
Important: Please keep your pull requests small and focused. This will make it easier to review and merge.
If you have a feature request, please open an issue and make sure it is tagged with enhancement.
This project is licensed under the MIT License - see the LICENSE file for details.