

🚀 Agentic Lead Intelligence RAG

A lightweight, production-ready Agentic RAG (Retrieval-Augmented Generation) service that analyzes startup signals (hiring, funding, remote readiness) using semantic search + LLM reasoning.

Built with:

• FastAPI
• FAISS (vector search)
• Sentence Transformers
• Groq LLM (Llama 3.3 70B)
• Docker-ready architecture

🧠 What This Project Does

This service:

  1. Embeds startup-related signals into a FAISS vector store
  2. Retrieves the most relevant context using semantic similarity
  3. Sends structured context to an LLM
  4. Returns structured JSON analysis

Example analysis output:

{
  "startup_name": "ExampleAI",
  "hiring_signal": true,
  "remote_possible": true,
  "funding_stage": "Seed",
  "reasoning": "Raised seed round and actively hiring Flutter developer.",
  "source_url": "https://example.com/post"
}

🏗 Architecture

User Query
    ↓
Retriever (FAISS + Embeddings)
    ↓
Context Assembly
    ↓
Groq LLM (Structured JSON Output)
    ↓
FastAPI Response
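The flow above can be sketched end-to-end in a few lines of Python. To keep the sketch runnable offline, NumPy brute-force cosine search stands in for FAISS, a letter-frequency vector stands in for SentenceTransformers embeddings, and a rule-based stub stands in for the Groq call; the signal texts and URLs are made up for illustration.

```python
import json
import numpy as np

# Toy in-memory "signal store" standing in for the FAISS index.
SIGNALS = [
    {"text": "ExampleAI raised a seed round", "url": "https://example.com/post"},
    {"text": "ExampleAI hiring remote Flutter developer", "url": "https://example.com/job"},
    {"text": "OtherCo launches new product", "url": "https://example.com/other"},
]

def embed(text: str) -> np.ndarray:
    # Toy deterministic embedding: unit-normalized letter-frequency vector.
    v = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - 97] += 1
    return v / (np.linalg.norm(v) or 1.0)

def retrieve(query: str, k: int = 2) -> list[dict]:
    # Semantic retrieval: rank signals by cosine similarity to the query.
    q = embed(query)
    mat = np.stack([embed(s["text"]) for s in SIGNALS])
    scores = mat @ q  # cosine similarity, since all vectors are unit-length
    top = np.argsort(-scores)[:k]
    return [SIGNALS[i] for i in top]

def analyze(query: str) -> dict:
    # Context assembly + "LLM" step; the real service sends the
    # retrieved context to Groq with a JSON-only prompt instead.
    context = retrieve(query)
    return {
        "hiring_signal": any("hiring" in c["text"] for c in context),
        "sources": [c["url"] for c in context],
    }

print(json.dumps(analyze("remote Flutter hiring after seed funding"), indent=2))
```

Each stand-in maps one-to-one onto a real component: `embed` → SentenceTransformers, `retrieve` → FAISS search, `analyze` → the Groq call.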

Core Components:

• embedding_service.py → Generates sentence embeddings
• vector_store.py → FAISS index + persistence
• retrieval_service.py → Semantic retrieval logic
• llm_service.py → Groq structured JSON generation
• main.py → FastAPI endpoints
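The job of `vector_store.py` — hold an embedding matrix plus per-row metadata and persist both to disk — can be sketched without faiss installed. NumPy L2 search below mimics a FAISS `IndexFlatL2`; the real module would swap in `faiss.write_index` / `faiss.read_index` for persistence. The file names and class shape here are illustrative, not the actual module's API.

```python
import json
import tempfile
from pathlib import Path

import numpy as np

class VectorStore:
    """Flat L2 index + metadata, persisted as .npy + .json files."""

    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.metadata: list[dict] = []

    def add(self, vector: np.ndarray, meta: dict) -> None:
        self.vectors = np.vstack([self.vectors, vector.astype(np.float32)])
        self.metadata.append(meta)

    def search(self, query: np.ndarray, k: int = 3) -> list[dict]:
        # Exact L2 distance over all rows, like a FAISS IndexFlatL2.
        dists = np.linalg.norm(self.vectors - query, axis=1)
        return [self.metadata[i] for i in np.argsort(dists)[:k]]

    def save(self, directory: str) -> None:
        d = Path(directory)
        np.save(d / "index.npy", self.vectors)
        (d / "meta.json").write_text(json.dumps(self.metadata))

    @classmethod
    def load(cls, directory: str) -> "VectorStore":
        d = Path(directory)
        vectors = np.load(d / "index.npy")
        store = cls(vectors.shape[1])
        store.vectors = vectors
        store.metadata = json.loads((d / "meta.json").read_text())
        return store

# Usage: build, persist, reload, query.
store = VectorStore(3)
store.add(np.array([1.0, 0.0, 0.0]), {"url": "https://example.com/post"})
store.add(np.array([0.0, 1.0, 0.0]), {"url": "https://example.com/job"})
with tempfile.TemporaryDirectory() as tmp:
    store.save(tmp)
    reloaded = VectorStore.load(tmp)
print(reloaded.search(np.array([0.9, 0.1, 0.0]), k=1))  # nearest: the /post entry
```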

⚙️ Tech Stack

• Python 3.10+
• FastAPI
• FAISS (CPU)
• SentenceTransformers (all-MiniLM-L6-v2)
• Groq LLM API
• Docker

🚀 Getting Started

1️⃣ Clone

git clone https://github.com/bold-ronin/Lightweight-Agentic-RAG-Service.git
cd Lightweight-Agentic-RAG-Service

2️⃣ Create Virtual Environment

python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate

3️⃣ Install Dependencies

pip install -r requirements.txt

4️⃣ Configure Environment

Create .env file:

LLM_API_KEY=your_api_key_here

(I used Groq; you can use whichever provider you want.)

5️⃣ Run the API

uvicorn app.main:app --reload

Open:

http://127.0.0.1:8000/docs

Use the /analyze endpoint.
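From a script, the endpoint can be called with the standard library alone. The request schema below (a single "query" field) is an assumption; check the interactive docs at /docs for the exact request model.

```python
import json
import urllib.request

def build_request(query: str, base_url: str = "http://127.0.0.1:8000") -> urllib.request.Request:
    # Package the query as a JSON body for POST /analyze.
    return urllib.request.Request(
        f"{base_url}/analyze",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def call_analyze(query: str) -> dict:
    # Send the request to a running instance and decode the JSON reply.
    with urllib.request.urlopen(build_request(query)) as resp:
        return json.load(resp)
```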

🐳 Docker Support

Dockerized FastAPI agent for RAG tasks.

Run locally with Docker

Build image:

docker build -t agentic-lead-rag .

Run container:

docker run -p 8000:8000 --env-file .env agentic-lead-rag

Visit: http://localhost:8000/docs

📦 Features

  • Structured JSON enforcement from LLM
  • Async Groq integration
  • Semantic search retrieval
  • Source URL tracking
  • FAISS index persistence
  • Dockerized for portability
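Two of the features above — structured JSON enforcement and async Groq integration — hinge on one pattern: request JSON-mode output, then validate the reply before trusting it. A sketch follows; the model name, prompt wording, and required-key set mirror the example output shown earlier, but the real llm_service.py may differ.

```python
import json

# Keys taken from the example analysis output in this README.
REQUIRED_KEYS = {"startup_name", "hiring_signal", "remote_possible",
                 "funding_stage", "reasoning", "source_url"}

def parse_analysis(raw: str) -> dict:
    """Validate the model's reply; raise if required keys are missing."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"LLM reply missing keys: {sorted(missing)}")
    return data

async def analyze_with_groq(context: str, query: str) -> dict:
    # Hypothetical async Groq call (pip install groq); imported lazily
    # so the validation above stays usable without the package.
    from groq import AsyncGroq
    client = AsyncGroq()  # reads GROQ_API_KEY from the environment
    resp = await client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        response_format={"type": "json_object"},  # JSON-mode enforcement
        messages=[
            {"role": "system", "content": "Reply with a single JSON object only."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuery: {query}"},
        ],
    )
    return parse_analysis(resp.choices[0].message.content)
```

Validating before returning means a malformed model reply surfaces as a clear server-side error instead of leaking bad JSON to the API consumer.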

🔬 Example Query

"Startup hiring Flutter developer remotely after seed funding"

Returns structured intelligence analysis based on stored signals.

🧭 Roadmap

  • Live Reddit & X ingestion
  • LinkedIn signal scraping
  • Scheduled background refresh
  • Frontend dashboard
  • Multi-source ingestion pipeline
  • Deployment (Render / Railway)
  • Usage-based monetization

🎯 Why This Matters

This is not a chatbot.

It is a structured intelligence engine designed to extract startup signals for:

• Freelancers
• Recruiters
• Founders
• Investors

🧱 Author

Built by Naol — AI-focused mobile + systems engineer exploring Agentic architectures and applied intelligence systems.
