A Rails agent framework for RubyLLM — define AI agents with prompts, schemas, caching, logging, cost tracking, and a built-in dashboard for monitoring LLM usage in production.

RubyLLM::Agents

Production-ready Rails engine for building, managing, and monitoring LLM-powered AI agents

Build intelligent AI agents in Ruby with a clean DSL, automatic execution tracking, cost analytics, budget controls, and a beautiful real-time dashboard. Supports OpenAI GPT-4, Anthropic Claude, Google Gemini, and more through RubyLLM.

Why RubyLLM::Agents?

  • Rails-Native - Seamlessly integrates with your Rails app: models, jobs, caching, and Hotwire
  • Production-Ready - Built-in retries, model fallbacks, circuit breakers, and budget limits
  • Full Observability - Track every execution with costs, tokens, duration, and errors
  • Workflow Orchestration - Compose agents into pipelines, parallel tasks, and conditional routers
  • Zero Lock-in - Works with any LLM provider supported by RubyLLM

Show Me the Code

```ruby
# app/agents/search_intent_agent.rb
class SearchIntentAgent < ApplicationAgent
  model "gpt-4o"
  temperature 0.0

  param :query, required: true

  def user_prompt
    "Extract search intent from: #{query}"
  end

  schema do
    string :refined_query, description: "Cleaned search query"
    array :filters, of: :string, description: "Extracted filters"
  end
end

result = SearchIntentAgent.call(query: "red summer dress under $50")

result.content        # => { refined_query: "red dress", filters: ["color:red", "price:<50"] }
result.total_cost     # => 0.00025
result.total_tokens   # => 150
result.duration_ms    # => 850
```

```ruby
# Multi-turn conversations
result = ChatAgent.call(
  query: "What's my name?",
  messages: [
    { role: :user, content: "My name is Alice" },
    { role: :assistant, content: "Nice to meet you, Alice!" }
  ]
)
# => "Your name is Alice!"
```

```ruby
# Resilient agents with automatic retries and fallbacks
class ReliableAgent < ApplicationAgent
  model "gpt-4o"

  reliability do
    retries max: 3, backoff: :exponential
    fallback_models "gpt-4o-mini", "claude-3-5-sonnet"
    circuit_breaker errors: 10, within: 60, cooldown: 300
    total_timeout 30
  end

  param :query, required: true

  def user_prompt
    query
  end
end
```

```ruby
# Vector embeddings for semantic search and RAG
# app/agents/embedders/document_embedder.rb
module Embedders
  class DocumentEmbedder < ApplicationEmbedder
    model "text-embedding-3-small"
    dimensions 512
    cache_for 1.week
  end
end

result = Embedders::DocumentEmbedder.call(text: "Hello world")
result.vector       # => [0.123, -0.456, ...]
result.dimensions   # => 512

# Batch embedding
result = Embedders::DocumentEmbedder.call(texts: ["Hello", "World", "Ruby"])
result.vectors      # => [[...], [...], [...]]
```

```ruby
# Image generation, analysis, and pipelines
# app/agents/images/logo_generator.rb
module Images
  class LogoGenerator < ApplicationImageGenerator
    model "gpt-image-1"
    size "1024x1024"
    quality "hd"
    style "vivid"
    template "Professional logo design: {prompt}. Minimalist, scalable."
  end
end

result = Images::LogoGenerator.call(prompt: "tech startup logo")
result.url          # => "https://..."
result.save("logo.png")
```

```ruby
# Workflow orchestration - sequential, parallel, routing in one DSL
class OrderWorkflow < RubyLLM::Agents::Workflow
  description "End-to-end order processing"
  timeout 60.seconds
  max_cost 1.50

  input do
    required :order_id, String
    optional :priority, String, default: "normal"
  end

  step :validate, ValidatorAgent
  step :enrich,   EnricherAgent, input: -> { { data: validate.content } }

  parallel :analysis do
    step :sentiment, SentimentAgent, optional: true
    step :classify,  ClassifierAgent
  end

  step :handle, on: -> { classify.category } do |route|
    route.billing    BillingAgent
    route.technical  TechnicalAgent
    route.default    GeneralAgent
  end

  step :format, FormatterAgent, optional: true
end

result = OrderWorkflow.call(order_id: "123")
result.steps[:classify].content  # Individual step result
result.total_cost                # Sum of all steps
result.success?                  # true/false
```

Features

| Feature | Description | Docs |
|---|---|---|
| Agent DSL | Declarative configuration with model, temperature, parameters, description | Agent DSL |
| Execution Tracking | Automatic logging with token usage, cost analytics, and fallback tracking | Tracking |
| Cost Analytics | Track spending by agent, model, tenant, and time period | Analytics |
| Reliability | Automatic retries, model fallbacks, circuit breakers with block DSL | Reliability |
| Budget Controls | Daily/monthly limits with hard and soft enforcement | Budgets |
| Multi-Tenancy | Per-tenant API keys, budgets, circuit breakers, and execution isolation | Multi-Tenancy |
| Workflows | Pipelines, parallel execution, conditional routers | Workflows |
| Async/Fiber | Concurrent execution with Ruby fibers for high-throughput workloads | Async |
| Dashboard | Real-time Turbo-powered monitoring UI | Dashboard |
| Streaming | Real-time response streaming with TTFT tracking | Streaming |
| Conversation History | Multi-turn conversations with message history | Conversation History |
| Attachments | Images, PDFs, and multimodal support | Attachments |
| PII Redaction | Automatic sensitive data protection | Security |
| Content Moderation | Input/output safety checks with OpenAI moderation API | Moderation |
| Embeddings | Vector embeddings with batching, caching, and preprocessing | Embeddings |
| Image Operations | Generation, analysis, editing, pipelines with cost tracking | Images |
| Alerts | Slack, webhook, and custom notifications | Alerts |

Quick Start

Installation

```ruby
# Gemfile
gem "ruby_llm-agents"
```

```bash
bundle install
rails generate ruby_llm_agents:install
rails db:migrate
```

Configure API Keys

```bash
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
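
If you prefer to configure keys in an initializer rather than rely on environment-variable autodetection, RubyLLM exposes a `configure` block. This is a sketch; check RubyLLM's configuration docs for the exact setting names of each provider you use:

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  # Read keys from the environment; nil leaves a provider disabled
  config.openai_api_key    = ENV.fetch("OPENAI_API_KEY", nil)
  config.anthropic_api_key = ENV.fetch("ANTHROPIC_API_KEY", nil)
  config.gemini_api_key    = ENV.fetch("GOOGLE_API_KEY", nil)
end
```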

Generate an Agent

```bash
rails generate ruby_llm_agents:agent SearchIntent query:required
```

This creates app/agents/search_intent_agent.rb with the agent class ready to customize.
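
Based on the DSL shown above, the generated class should look roughly like this (the generator's actual template may differ):

```ruby
# app/agents/search_intent_agent.rb
class SearchIntentAgent < ApplicationAgent
  model "gpt-4o"

  # Generated from the `query:required` argument
  param :query, required: true

  def user_prompt
    query
  end
end
```

From there you typically customize the prompt and add a `schema` block for structured output.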

Mount the Dashboard

```ruby
# config/routes.rb
mount RubyLLM::Agents::Engine => "/agents"
```
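
In production you will usually want to restrict who can reach the dashboard. One common approach is a routing constraint; the sketch below assumes Devise's `authenticate` helper and an `admin?` method on your user model, but any Rails route constraint works:

```ruby
# config/routes.rb
authenticate :user, ->(user) { user.admin? } do
  mount RubyLLM::Agents::Engine => "/agents"
end
```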

*(Screenshot: the RubyLLM Agents dashboard)*

Documentation

AI Agents: For comprehensive documentation optimized for AI consumption, see LLMS.txt

Note: Wiki content lives in the wiki/ folder. To sync changes to the GitHub Wiki, run ./scripts/sync-wiki.sh.

| Guide | Description |
|---|---|
| Getting Started | Installation, configuration, first agent |
| Agent DSL | All DSL options: model, temperature, params, caching, description |
| Reliability | Retries, fallbacks, circuit breakers, timeouts, reliability block |
| Workflows | Pipelines, parallel execution, routers |
| Budget Controls | Spending limits, alerts, enforcement |
| Multi-Tenancy | Per-tenant budgets, isolation, configuration |
| Async/Fiber | Concurrent execution with Ruby fibers |
| Testing Agents | RSpec patterns, mocking, dry_run mode |
| Error Handling | Error types, recovery patterns |
| Moderation | Content moderation for input/output safety |
| Embeddings | Vector embeddings, batching, caching, preprocessing |
| Image Generation | Text-to-image, templates, content policy, cost tracking |
| Dashboard | Setup, authentication, analytics |
| Production | Deployment best practices, background jobs |
| API Reference | Complete class documentation |
| Examples | Real-world use cases and patterns |

Requirements

  • Ruby >= 3.1.0
  • Rails >= 7.0
  • RubyLLM >= 1.0

Contributing

Bug reports and pull requests are welcome at GitHub.

  1. Fork the repository
  2. Create your feature branch (`git checkout -b my-feature`)
  3. Commit your changes (`git commit -am 'Add feature'`)
  4. Push to the branch (`git push origin my-feature`)
  5. Create a Pull Request

License

The gem is available as open source under the MIT License.

Credits

Built with love by Adham Eldeeb

Powered by RubyLLM
