This project showcases various 'modes' of modern LLMs, including chat, structured output, function calling, and MCP, and shows how to incorporate them into larger workflows.
- Chat Interface: A basic conversational chat interface.
- Data Analysis: Demonstrates structured output (see the sketch after this list).
- Exchange Rate Agent (OpenAI Native): Demonstrates function calling using OpenAI's native API (`/agent-openai`).
- Exchange Rate Agent (MCP): Demonstrates function calling using the Model Context Protocol pattern (`/agent-mcp`).
- Exchange Rate Agent (Agent-to-Agent): Demonstrates a Manager Agent delegating tasks to a Sub-agent (`/agent-a2a`).
- Reasoning Mode: Uses structured output to implement an explicit reasoning mode.
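As a sketch of the structured-output modes (Data Analysis and Reasoning Mode), the snippet below uses the OpenAI Python SDK's `parse` helper with a Pydantic model. The schema fields and the deployment name are illustrative assumptions, not the project's actual schema:

```python
# Minimal structured-output sketch (illustrative; the project's actual
# schema and deployment name will differ).
from typing import List
from pydantic import BaseModel
from openai import AzureOpenAI  # reads AZURE_OPENAI_* env vars

class Analysis(BaseModel):
    summary: str
    sentiment: str        # e.g. "positive" / "negative" / "neutral"
    key_points: List[str]

client = AzureOpenAI()

# parse() constrains the model to return JSON matching the Pydantic schema
completion = client.beta.chat.completions.parse(
    model="gpt-4o",  # assumption: your Azure deployment name
    messages=[{"role": "user", "content": "Analyze: sales rose 12% in Q3."}],
    response_format=Analysis,
)
analysis = completion.choices[0].message.parsed  # an Analysis instance
print(analysis.key_points)
```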
This project includes three versions of the "Exchange Rate Agent" to demonstrate different architectural patterns:
This implementation uses OpenAI's specific Function Calling API directly.
- Tool Definition: Tools are defined manually in the OpenAI JSON schema format within the router file.
- Execution: The backend server calls the LLM, receives a response indicating the LLM wants to call a tool, executes the tool itself, and then calls the LLM again with the tool output (see the sketch after this list).
- Pros: Simple for small projects, direct control.
- Cons: Tightly coupled to OpenAI's API and the specific codebase.
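A minimal sketch of that loop, assuming a hypothetical `get_exchange_rate` tool (the real router defines its own schema) and the OpenAI Python SDK's tool-calling interface:

```python
# Sketch of the native function-calling loop. Tool name, schema, and the
# hard-coded result are illustrative placeholders.
import json
from openai import AzureOpenAI

client = AzureOpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_exchange_rate",
        "description": "Get the exchange rate between two currencies.",
        "parameters": {
            "type": "object",
            "properties": {
                "base": {"type": "string"},
                "target": {"type": "string"},
            },
            "required": ["base", "target"],
        },
    },
}]

messages = [{"role": "user", "content": "What is USD to EUR today?"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = response.choices[0].message

if msg.tool_calls:  # the LLM asked us to run a tool
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = {"rate": 0.92}  # placeholder: the backend executes the real tool here
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}]
    # second LLM call with the tool output appended
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```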
This implementation demonstrates the Model Context Protocol (MCP) architecture.
- Architecture: Uses a client-server model where the "MCP Client" (the router) connects to an external "MCP Server" (running on port 8001).
- Tool Discovery: The client asks the server "what tools do you have?" (dynamic discovery).
- Execution: The backend only routes and doesn't execute tools. When it receives output indicating the LLM wants to call a tool, it forwards the call to the MCP server, which runs the tool and sends the output back to the backend; the backend then calls the LLM again with the tool output (see the sketch after this list).
- Pros: Decouples the LLM logic from the tool logic. The same "MCP Server" could be used by Claude, ChatGPT, or any other MCP-compliant client without code changes.
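A minimal sketch of the client side, assuming the demo's MCP server exposes plain HTTP endpoints for discovery and execution (the paths `/tools` and `/call_tool` are hypothetical; check them against `mcp_server/main.py`. The official MCP SDK uses JSON-RPC sessions instead):

```python
# Sketch of an MCP-style client flow against the demo server on port 8001.
# Endpoint names are hypothetical; adapt them to mcp_server/main.py.
import httpx

MCP_SERVER = "http://localhost:8001"

def discover_tools() -> list:
    """Ask the server 'what tools do you have?' (dynamic discovery)."""
    return httpx.get(f"{MCP_SERVER}/tools").json()

def call_tool(name: str, arguments: dict) -> dict:
    """Forward a tool call to the MCP server and return its output."""
    resp = httpx.post(f"{MCP_SERVER}/call_tool", json={"name": name, "arguments": arguments})
    resp.raise_for_status()
    return resp.json()

tools = discover_tools()  # passed to the LLM as the available tools
result = call_tool("get_exchange_rate", {"base": "USD", "target": "EUR"})
# ...the backend then calls the LLM again with `result` as the tool output.
```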
This demonstrates the Swarm Pattern popularized by OpenAI Swarm.
- Concept: "Handoff" vs "Delegation". In delegation, Manager waits for Worker. In Handoff, Manager transiers the user to the Worker and leaves the loop.
- Implementation: The "Tool" function returns an
Agentobject instead of a string. The router loop detects this object and switches the active System Prompt and Tools to the new Agent for the next turn. - Why: Allows for fluid, multi-turn conversations where the active agent changes based on context (e.g., Triage -> Support -> Sales).
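A minimal sketch of the handoff mechanism, assuming a simple `Agent` dataclass (all names are illustrative):

```python
# Sketch of the Swarm-style handoff: a tool returns an Agent, and the
# router loop swaps the active agent. Names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    name: str
    system_prompt: str
    tools: List[Callable] = field(default_factory=list)

support = Agent("Support", "You handle technical issues.")

def transfer_to_support() -> Agent:
    """Tool that hands the conversation off instead of returning a string."""
    return support

triage = Agent("Triage", "Route the user to the right team.", tools=[transfer_to_support])

active = triage
result = active.tools[0]()      # in the real loop, the LLM picks the tool
if isinstance(result, Agent):   # handoff detected:
    active = result             # next turn uses the new prompt and tools
print(active.system_prompt)     # -> "You handle technical issues."
```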
This demonstrates compliance with the open standard Agent Protocol.
- Concept: Standardization of how agents talk to each other over HTTP.
- Implementation: The router acts as a Client creating a `Task` (`POST /ap/v1/agent/tasks`) and executing `Steps`. The backend simulates a compliant Protocol Server (see the sketch after this list).
- Why: Interoperability. An agent built this way can talk to any other agent (AutoGPT, generic solvers) that implements the standard spec.
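A minimal client-side sketch against the Agent Protocol routes mentioned above. The base URL and payload fields follow the public spec, but treat them as assumptions to verify against the demo backend:

```python
# Sketch of an Agent Protocol client: create a Task, then execute Steps
# until the server marks the last one. Routes/fields per the public spec;
# verify against the demo backend.
import httpx

BASE = "http://localhost:8000/ap/v1"  # assumption: backend runs on port 8000

# 1. Create a task
task = httpx.post(f"{BASE}/agent/tasks", json={"input": "Convert 100 USD to EUR"}).json()
task_id = task["task_id"]

# 2. Execute steps until the agent reports the final one
while True:
    step = httpx.post(f"{BASE}/agent/tasks/{task_id}/steps", json={}).json()
    print(step.get("output"))
    if step.get("is_last"):
        break
```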
This demonstrates the architecture used by AutoGen (Microsoft) or CrewAI.
- Concept: Agents are Objects (classes) that "Send" and "Receive" messages.
- Implementation: Defines `UserProxy` and `AssistantAgent` classes. The conversation is driven by an `initiate_chat` method that loops until a termination condition is met (see the sketch after this list).
- Why: Logic verification. This structure is best for simulations where agents need to debate or iterate on code without user intervention.
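A minimal sketch of that message-passing structure. The classes here are simplified stand-ins, not AutoGen's actual API:

```python
# Sketch of AutoGen-style agents as objects that send/receive messages.
# Simplified stand-ins; AutoGen's real classes have richer APIs.
class AssistantAgent:
    def __init__(self, name: str):
        self.name = name

    def receive(self, message: str) -> str:
        # In the real pattern this would call the LLM with the message.
        if "done" in message:
            return "TERMINATE"
        return f"{self.name}: working on '{message}' ... done"

class UserProxy:
    def __init__(self, name: str):
        self.name = name

    def initiate_chat(self, other: AssistantAgent, message: str, max_turns: int = 5):
        """Drive the conversation until a termination condition is met."""
        for _ in range(max_turns):
            reply = other.receive(message)
            print(reply)
            if reply == "TERMINATE":  # termination condition
                break
            message = reply           # feed the reply back in

UserProxy("user").initiate_chat(AssistantAgent("assistant"), "summarize the report")
```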
The project is divided into three main components:
- Backend: Built with Python and FastAPI, handling API requests and LLM interactions.
- Frontend: Built with React and Vite, providing the user interface.
- MCP Server: A standalone FastAPI server that hosts the tools.
```
.
├── backend/
│   ├── routers/          # API routes for different agents/modes
│   ├── main.py           # Application entry point
│   ├── dependencies.py   # Dependency injection
│   └── requirements.txt  # Python dependencies
├── frontend/
│   ├── src/
│   │   ├── components/   # React components
│   │   ├── App.jsx       # Main application component
│   │   └── App.css       # Global styles
│   └── package.json      # Node.js dependencies
└── mcp_server/
    ├── main.py           # MCP Server entry point
    └── requirements.txt  # MCP Server dependencies
```
- Python 3.8+
- Node.js and npm
- Azure OpenAI API access
1. Navigate to the backend directory:

   ```
   cd backend
   ```

2. Install the required Python packages:

   ```
   pip install -r requirements.txt
   ```

3. Configure your environment variables for Azure OpenAI access. You need to set up a `.env` file containing the variables to connect with the OpenAI API (see the sketch after this list).

4. Run the backend server:

   ```
   uvicorn main:app --reload
   ```
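For reference, a typical `.env` for the official `openai` SDK's `AzureOpenAI` client looks like the sketch below. The exact variable names and values depend on the backend code, so treat them as assumptions:

```
# Assumed variable names: the standard ones read by the openai SDK's
# AzureOpenAI client; verify against the backend code.
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
OPENAI_API_VERSION=2024-06-01
```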
1. Open a new terminal and navigate to the mcp_server directory:

   ```
   cd mcp_server
   ```

2. Install the required Python packages:

   ```
   pip install -r requirements.txt
   ```

3. Run the MCP server (runs on port 8001):

   ```
   python main.py
   ```
1. Navigate to the frontend directory:

   ```
   cd frontend
   ```

2. Install the dependencies:

   ```
   npm install
   ```

3. Start the development server:

   ```
   npm run dev
   ```
Remember to paste the environment variables for connecting to the OpenAI API (used in this demo) into the `.env` file in the `/backend` folder.
Once the backend, MCP server, and frontend are running, open your browser and navigate to the URL provided by Vite (usually http://localhost:5173). You can then interact with the different modes available in the application.