Prometheus

Architect Intelligent Agents using the System Prompts of Titans.
Visit Prometheus


🚀 Overview

Prometheus is an open-source, RAG-powered system prompt engineering engine. It allows developers to generate enterprise-grade system instructions for their AI agents by reverse-engineering the architectural patterns ("DNA") of industry giants like Google Gemini, OpenAI GPT-4, and Anthropic Claude.

Instead of guessing how to write a good system prompt, Prometheus uses a Retrieval-Augmented Generation (RAG) pipeline to fetch the most relevant "elite" prompt structures from a database of 100+ production system prompts and adapts them to your specific use case.


🛠️ Tech Stack

Frontend

  • Framework: React
  • Auth: Clerk

Backend

  • Framework: FastAPI (Python)
  • Inference: Groq (Llama 3 70B)
  • Vector DB: Pinecone
  • Embeddings: sentence-transformers/all-mpnet-base-v2
  • Cloud: AWS

🧠 Validated Architecture & Pipeline

Prometheus operates on a three-stage "Architectural Synthesis" pipeline:

1. Intent Calibration 🎯

The user starts by describing their agent. The system uses Groq (Llama-3) to act as a "Requirements Analyst", instantly analyzing the request and generating 3-4 targeted clarifying questions (e.g., about tone, constraints, edge cases) to refine the user's intent.
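
For illustration, the calibration step might look roughly like this in Python (the model id, prompt wording, and helper name are illustrative, not the project's exact code):

# Illustrative sketch of the "Requirements Analyst" call; model id and
# JSON schema are assumptions.
import json
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

def generate_clarifying_questions(agent_description: str) -> list[str]:
    """Ask Llama-3 on Groq for 3-4 targeted clarifying questions."""
    response = client.chat.completions.create(
        model="llama3-70b-8192",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a requirements analyst. Return JSON of the form "
                    '{"questions": [...]} with 3-4 clarifying questions about '
                    "tone, constraints, and edge cases."
                ),
            },
            {"role": "user", "content": agent_description},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)["questions"]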

2. RAG Retrieval & Metadata Filtering 🔍

Once the intent is refined, the system embeds the user's query using SentenceTransformers. It then queries a Pinecone vector index containing the "DNA" of 100+ best-in-class system prompts.

  • Top-k Retrieval: We fetch the top 10 most structurally relevant system prompts.
  • Metadata Rich: Each retrieved prompt contains metadata about its origin (e.g., "Anthropic Claude 3 System Prompt", "Perplexity Search Prompt"), allowing the synthesizer to understand why it was retrieved.
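
As a sketch, the retrieval step boils down to an embed-then-query call like the one below (the index name and metadata fields are placeholders, not taken from the repo):

# Illustrative retrieval step; index name and metadata fields are placeholders.
import os

from pinecone import Pinecone
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # 768-dim embeddings
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("prompt-dna")  # placeholder index name

def retrieve_elite_prompts(refined_query: str, top_k: int = 10):
    """Embed the refined query and pull the top-k structurally similar prompts."""
    vector = embedder.encode(refined_query).tolist()
    result = index.query(
        vector=vector,
        top_k=top_k,
        include_metadata=True,   # keeps origin info such as "Anthropic Claude 3 System Prompt"
        namespace="promptsdb",
    )
    return result.matches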

3. Architectural Synthesis 🧬

The "Architectural Intelligence" engine receives:

  • The user's goal.
  • Calibrated answers.
  • The retrieved "DNA" (structural frameworks) from the elite prompts.

It doesn't just copy; it synthesizes. It adapts the techniques (e.g., XML tagging from Claude, chain-of-thought enforcement from Gemini) to create a bespoke, production-ready system prompt for your specific agent.
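
Conceptually, the synthesis context is assembled from those three inputs before the final Groq call; the sketch below is illustrative (the instructions, metadata field names, and helper name are assumptions, not the project's exact code):

# Illustrative context assembly; metadata field names ("source", "text") are guesses.
def build_synthesis_prompt(goal: str, answers: dict[str, str], matches) -> str:
    dna_blocks = "\n\n".join(
        f"--- {m.metadata.get('source', 'unknown source')} ---\n{m.metadata.get('text', '')}"
        for m in matches
    )
    calibration = "\n".join(f"- {q}: {a}" for q, a in answers.items())
    return (
        "You are an architectural synthesizer. Do not copy the reference prompts; "
        "adapt their structural techniques (XML tagging, chain-of-thought enforcement, "
        "explicit constraints) to this agent.\n\n"
        f"User goal:\n{goal}\n\n"
        f"Calibrated answers:\n{calibration}\n\n"
        f"Reference prompt DNA:\n{dna_blocks}\n\n"
        "Produce a single production-ready system prompt."
    )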

System Architecture Diagram

sequenceDiagram
    participant User
    participant Client as 💻 React Frontend
    participant Server as ⚙️ FastAPI Backend
    participant Embed as 🧮 Embedding Model
    participant VectorDB as 🌲 Pinecone DB
    participant LLM as ⚡ Groq (Llama-3)

    Note over User, Client: Phase 1: Intent Calibration

    User->>Client: Inputs vague agent description
    Client->>Server: POST /analyze-query
    Server->>LLM: "Analyze intent & generate clarifying questions"
    LLM-->>Server: JSON: {questions: [...]}
    Server-->>Client: Returns interactive form
    
    User->>Client: Selects specific nuances
    Client->>Server: POST /generate-final-prompt

    Note over Server, VectorDB: Phase 2: RAG Retrieval

    Server->>Embed: Embed(user_query)
    Embed-->>Server: Vector (768d)
    Server->>VectorDB: Query(vector, top_k=10, namespace="promptsdb")
    VectorDB-->>Server: [Metadata: Claude 3, GPT-4, Perplexity...]

    Note over Server, LLM: Phase 3: Architectural Synthesis

    Server->>Server: Construct Context (Intent + Answers + "Elite DNA")
    Server->>LLM: "Synthesize novel prompt architecture"
    LLM-->>Server: Production-Ready System Prompt
    Server-->>Client: Returns final_prompt
    Client->>User: Displays "Liquid" Result
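
The two endpoints in the diagram map onto a FastAPI skeleton along these lines, reusing the helpers sketched above (request and response field names are assumptions based on the flow, not the repo's actual models):

# Endpoint skeleton (field names assumed); reuses client, generate_clarifying_questions,
# retrieve_elite_prompts, and build_synthesis_prompt from the sketches above.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AnalyzeRequest(BaseModel):
    description: str

class GenerateRequest(BaseModel):
    description: str
    answers: dict[str, str]

@app.post("/analyze-query")
def analyze_query(req: AnalyzeRequest):
    # Phase 1: intent calibration
    return {"questions": generate_clarifying_questions(req.description)}

@app.post("/generate-final-prompt")
def generate_final_prompt(req: GenerateRequest):
    # Phase 2: RAG retrieval, then Phase 3: architectural synthesis
    matches = retrieve_elite_prompts(req.description)
    synthesis_prompt = build_synthesis_prompt(req.description, req.answers, matches)
    completion = client.chat.completions.create(
        model="llama3-70b-8192",
        messages=[{"role": "user", "content": synthesis_prompt}],
    )
    return {"final_prompt": completion.choices[0].message.content}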

🎨 Key Features & Components

"Liquid" UI Design

A custom-built design system featuring glassmorphism, fluid animations, and a premium "fintech-dark" aesthetic.

  • Liquid Glass Button: A custom interactive button component with physics-based shimmer effects.
  • Breathing Orb: A context-aware loading state that visualizes the AI "thinking" process.
  • Infinite Marquee: Seamlessly loops through the logos of supported AI models.

Top Reference Models

We have reverse-engineered architectural patterns from:

Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae, Traycer AI, VSCode Agent, Warp.dev, Windsurf, Xcode, Z.ai Code, Dia & v0.

(And other Open Sourced System Prompts, Internal Tools & AI Models)

Anthropic OpenAI Google Gemini Claude DeepSeek Cursor Devin GitHub Copilot Grok Jules Lovable Mistral Perplexity Replit Windsurf


📦 Usage

Prerequisites

  • Node.js & npm
  • Python 3.9+
  • Pinecone API Key
  • Groq API Key
  • Clerk Publishable Key

Frontend Setup

cd prompt-genie
npm install
npm run dev

Backend Setup

cd backend
pip install -r requirements.txt
uvicorn main:app --reload
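
The API keys listed under Prerequisites are expected to be available before starting the backend; as a rough sketch (variable names are assumptions, check the backend code for the real ones), a startup check might look like this, while the Clerk publishable key belongs in the frontend's environment config:

# Assumed environment variable names; not taken from the repo.
import os

REQUIRED_KEYS = ["GROQ_API_KEY", "PINECONE_API_KEY"]
missing = [key for key in REQUIRED_KEYS if not os.getenv(key)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")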

📜 License

This project is open-source and available under the MIT License.

🤝 Connect
