Agentic LLM Demo

This project showcases various 'modes' of modern LLMs, including chat, structured output, function calling, and MCP, and shows how to incorporate them into larger workflows.

Features

  • Chat Interface: Basic chat interface.
  • Data Analysis: Demonstrates structured output.
  • Exchange Rate Agent (OpenAI Native): Demonstrates function calling using OpenAI's native API (/agent-openai).
  • Exchange Rate Agent (MCP): Demonstrates function calling using the Model Context Protocol pattern (/agent-mcp).
  • Exchange Rate Agent (Agent-to-Agent): Demonstrates a Manager Agent delegating tasks to a Sub-agent (/agent-a2a).
  • Reasoning Mode: Uses structured output to run a reasoning mode.
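
Two of these modes lean on structured output, which constrains the model to return a typed object instead of free text. A minimal sketch of the idea, using the OpenAI Python SDK's parse helper with a hypothetical Analysis schema (illustrative, not this repo's code):

    from openai import OpenAI
    from pydantic import BaseModel

    # Hypothetical schema; the repo's actual schemas live in the backend routers
    class Analysis(BaseModel):
        summary: str
        sentiment: str

    client = OpenAI()
    completion = client.beta.chat.completions.parse(
        model="gpt-4o",  # assumption; the project itself targets an Azure OpenAI deployment
        messages=[{"role": "user", "content": "Analyse: 'Great quarter, revenue up 12%.'"}],
        response_format=Analysis,
    )
    print(completion.choices[0].message.parsed)  # an Analysis instance, not raw text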

Agent Implementations

This project includes five agent implementations, each demonstrating a different architectural pattern:

1. OpenAI Native (backend/routers/agent_openai.py)

This implementation uses OpenAI's specific Function Calling API directly.

  • Tool Definition: Tools are defined manually in the OpenAI JSON schema format within the router file.
  • Execution: The backend server calls the LLM and receives a response indicating that the LLM wants to call a tool. The backend executes the tool itself, then calls the LLM again with the tool output.
  • Pros: Simple for small projects, direct control.
  • Cons: Tightly coupled to OpenAI's API and the specific codebase.
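
A condensed sketch of that loop is shown below. The tool schema, stubbed rate lookup, and plain OpenAI client are assumptions for illustration; the project itself connects to Azure OpenAI.

    import json

    from openai import OpenAI

    client = OpenAI()

    # Tool defined manually in OpenAI's JSON schema format
    tools = [{
        "type": "function",
        "function": {
            "name": "get_exchange_rate",
            "description": "Get the exchange rate between two currencies.",
            "parameters": {
                "type": "object",
                "properties": {
                    "base": {"type": "string"},
                    "target": {"type": "string"},
                },
                "required": ["base", "target"],
            },
        },
    }]

    def get_exchange_rate(base: str, target: str) -> str:
        return json.dumps({"base": base, "target": target, "rate": 1.08})  # stub

    messages = [{"role": "user", "content": "What is 100 EUR in USD?"}]
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = response.choices[0].message

    if msg.tool_calls:
        messages.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            # The backend itself executes the tool...
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": get_exchange_rate(**args),
            })
        # ...and calls the LLM again with the tool output
        response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

    print(response.choices[0].message.content)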

2. Model Context Protocol (MCP) (backend/routers/agent_mcp.py)

This implementation demonstrates the Model Context Protocol (MCP) architecture.

  • Architecture: Uses a client-server model where the "MCP Client" (the router) connects to an external "MCP Server" (running on port 8001).
  • Tool Discovery: The client asks the server "what tools do you have?" (dynamic discovery).
  • Execution: The backend only routes and does not execute. When the LLM's response indicates a tool call, the backend forwards the call to the MCP server, which runs the tool and sends the output back; the backend then calls the LLM again with the tool output.
  • Pros: Decouples the LLM logic from the tool logic. The same "MCP Server" could be used by Claude, ChatGPT, or any other MCP-compliant client without code changes.
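
A sketch of the routing side of this pattern; the endpoint paths and payload shapes are assumptions, not the repo's actual contract with its MCP server on port 8001.

    import httpx

    MCP_SERVER = "http://localhost:8001"

    def discover_tools() -> list:
        # Dynamic discovery: ask the server which tools it exposes
        return httpx.get(f"{MCP_SERVER}/tools").json()

    def call_tool(name: str, arguments: dict) -> str:
        # The backend never runs tools itself; it forwards the call to the MCP server
        response = httpx.post(f"{MCP_SERVER}/call", json={"name": name, "arguments": arguments})
        return response.json()["result"]

The discovered tool list is handed to the LLM; when the LLM requests a tool, the backend relays the call and feeds the result back into the conversation, exactly as in the native loop above.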

3. Agent-to-Agent: Handoff Pattern (backend/routers/agent_handoff.py)

This demonstrates the Swarm Pattern popularized by OpenAI Swarm.

  • Concept: "Handoff" vs "Delegation". In delegation, the Manager waits for the Worker. In a handoff, the Manager transfers the user to the Worker and leaves the loop.
  • Implementation: The "Tool" function returns an Agent object instead of a string. The router loop detects this object and switches the active System Prompt and Tools to the new Agent for the next turn.
  • Why: Allows for fluid, multi-turn conversations where the active agent changes based on context (e.g., Triage -> Support -> Sales).
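
A sketch of the mechanism (class and function names are illustrative, not the repo's code):

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        system_prompt: str
        tools: list = field(default_factory=list)

    support_agent = Agent("Support", "You are a support specialist.")

    def transfer_to_support() -> Agent:
        # A tool that returns an Agent (not a string) signals a handoff
        return support_agent

    triage_agent = Agent("Triage", "Route the user to the right specialist.",
                         tools=[transfer_to_support])

    active = triage_agent
    result = transfer_to_support()  # pretend the LLM chose this tool
    if isinstance(result, Agent):   # the router loop detects the Agent object...
        active = result             # ...and swaps the system prompt and tools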

4. Agent-to-Agent: Agent Protocol (backend/routers/agent_protocol.py)

This demonstrates compliance with the open standard Agent Protocol.

  • Concept: Standardization of how agents talk to each other over HTTP.
  • Implementation: The router acts as a Client creating a Task (POST /ap/v1/agent/tasks) and executing Steps. The backend simulates a compliant Protocol Server.
  • Why: Interoperability. An agent built this way can talk to any other agent (AutoGPT, generic solvers) that implements the standard spec.
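
A sketch of the client side, using the Task and Step endpoints the spec defines (the base URL and field names follow the public Agent Protocol spec and are an approximation of this repo's simulated server):

    import httpx

    BASE = "http://localhost:8000/ap/v1/agent"

    # Create a Task
    task = httpx.post(f"{BASE}/tasks", json={"input": "Convert 100 EUR to USD"}).json()

    # Execute Steps until the server marks the last one
    while True:
        step = httpx.post(f"{BASE}/tasks/{task['task_id']}/steps", json={}).json()
        print(step.get("output"))
        if step.get("is_last"):
            break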

5. Agent-to-Agent: Framework pattern (backend/routers/agent_framework.py)

This demonstrates the architecture used by AutoGen (Microsoft) or CrewAI.

  • Concept: Agents are Objects (classes) that "Send" and "Receive" messages.
  • Implementation: Defines UserProxy and AssistantAgent classes. The conversation is driven by an initiate_chat method that loops until a termination condition is met.
  • Why: Logic verification. This structure is best for simulations where agents need to debate or iterate on code without user intervention.
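
A toy reduction of that send/receive structure (the class names mirror AutoGen's conventions, and the stubbed reply stands in for a real LLM call):

    class AssistantAgent:
        def generate_reply(self, message: str) -> str:
            return "Looks correct. TERMINATE"  # stub; a real agent calls the LLM here

        def receive(self, message: str, sender: "UserProxy") -> None:
            sender.receive(self.generate_reply(message), self)

    class UserProxy:
        def get_human_input(self) -> str:
            return input("You: ")  # or an auto-reply, depending on configuration

        def receive(self, message: str, sender: AssistantAgent) -> None:
            if "TERMINATE" in message:  # termination condition ends the loop
                return
            sender.receive(self.get_human_input(), self)

        def initiate_chat(self, other: AssistantAgent, message: str) -> None:
            other.receive(message, self)

    UserProxy().initiate_chat(AssistantAgent(), "Review this code for bugs.")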

Project Structure

The project is divided into three main components:

  • Backend: Built with Python and FastAPI, handling API requests and LLM interactions.
  • Frontend: Built with React and Vite, providing the user interface.
  • MCP Server: A standalone FastAPI server that hosts the tools.

Directory Structure

.
├── backend/
│   ├── routers/          # API routes for different agents/modes
│   ├── main.py           # Application entry point
│   ├── dependencies.py   # Dependency injection
│   └── requirements.txt  # Python dependencies
├── frontend/
│   ├── src/
│   │   ├── components/   # React components
│   │   ├── App.jsx       # Main application component
│   │   └── App.css       # Global styles
│   └── package.json      # Node.js dependencies
└── mcp_server/
    ├── main.py           # MCP Server entry point
    └── requirements.txt  # MCP Server dependencies

Getting Started

Prerequisites

  • Python 3.8+
  • Node.js and npm
  • Azure OpenAI API access

Backend Setup

  1. Navigate to the backend directory:

    cd backend
  2. Install the required Python packages:

    pip install -r requirements.txt
  3. Configure your environment variables for Azure OpenAI access: create a .env file in the backend directory with the variables needed to connect to the OpenAI API (an example is sketched after these steps).

  4. Run the backend server:

    uvicorn main:app --reload
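
An example .env is sketched below. The variable names are assumptions based on common Azure OpenAI setups; verify them against the names the backend actually reads (e.g. in backend/dependencies.py).

    # Example .env -- variable names are assumptions, check the backend code
    AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
    AZURE_OPENAI_API_KEY=<your-api-key>
    AZURE_OPENAI_API_VERSION=2024-02-01
    AZURE_OPENAI_DEPLOYMENT=<your-deployment-name>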

MCP Server Setup

  1. Open a new terminal and navigate to the mcp_server directory:

    cd mcp_server
  2. Install the required Python packages:

    pip install -r requirements.txt
  3. Run the MCP server (runs on port 8001):

    python main.py

Frontend Setup

  1. Navigate to the frontend directory:

    cd frontend
  2. Install the dependencies:

    npm install
  3. Start the development server:

    npm run dev

Usage

Before using the app, make sure the .env file in the /backend folder contains the environment variables needed to connect to the OpenAI API (see Backend Setup above).

Once both the backend and frontend are running, open your browser and navigate to the URL provided by Vite (usually http://localhost:5173). You can then interact with the different modes available in the application.
