LLMFlow integrates GPT (via LangChain), DeepSeek, and Llama 3.2 (via Ollama) to generate concise summaries of large documents such as research papers, reducing reading time and effort for academics. The project improves accessibility and efficiency in literature review.
This project allows users to upload a research paper or document (Word, TXT, or PDF), generate a summary, and ask questions about its content using various Large Language Models (LLMs) such as Llama (via Ollama), GPT, and DeepSeek.
- Upload Documents – Supports `.txt`, `.docx`, and `.pdf` formats.
- Generate Summaries – Extracts key sections and provides a concise summary.
- Chat with the Paper – Ask questions and get AI-generated responses based on the document.
- Multiple LLMs Supported – Choose from Llama, GPT, or DeepSeek for text processing.
- Context-Aware Responses – Retrieves the most relevant sections from the document before answering.
- React.js – For building the interactive chat interface.
- Tailwind CSS – For styling and responsive design.
- JavaScript (ES6+) – Used for frontend logic.
- Fetch API – For making requests to the backend.
- React Hooks (`useState`, `useEffect`, `useRef`) – For managing state and interactions.
- FastAPI – A modern Python web framework for handling API requests.
- Uvicorn – ASGI server to run the FastAPI app.
- Pydantic – Data validation for request bodies.
- CORS Middleware – Enables cross-origin requests.
- Ollama – Runs local LLMs like `Llama3.2` for chat responses.
- GPT (OpenAI GPT-4o) – (Optional) Used via LangChain for advanced processing.
- DeepSeek – (Optional) Another LLM used for extraction.
- PDFMiner – Extracts text from uploaded PDFs.
- FuzzyWuzzy – For text similarity matching to find relevant document sections.
- Regular Expressions (Regex) – To detect and structure research paper sections.
- shutil & os – For file handling.
- Logging – For error tracking and debugging.
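The section-detection and retrieval steps above can be sketched roughly as follows. This is a hypothetical illustration, not the repository's actual code: the heading regex and section names are assumptions, and `difflib.SequenceMatcher` (standard library) stands in for FuzzyWuzzy's similarity ratio so the sketch runs without extra dependencies.

```python
import re
from difflib import SequenceMatcher

# Assumed heading pattern -- the real project's regex for research-paper
# sections may differ.
SECTION_PATTERN = re.compile(
    r"^(Abstract|Introduction|Methods?|Results|Discussion|Conclusion)\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def split_sections(text: str) -> dict:
    """Split raw document text into {heading: body} using the heading regex."""
    matches = list(SECTION_PATTERN.finditer(text))
    sections = {}
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections[m.group(1).title()] = text[m.end():end].strip()
    return sections

def most_relevant_section(question: str, sections: dict) -> str:
    """Return the heading whose body is most similar to the question.

    difflib's ratio plays the role FuzzyWuzzy plays in the project itself.
    """
    def score(body: str) -> float:
        return SequenceMatcher(None, question.lower(), body.lower()).ratio()
    return max(sections, key=lambda h: score(sections[h]))
```

The retrieved section is then passed to the selected LLM as context, which is what makes the responses document-grounded rather than generic.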
```bash
git clone https://github.com/sabdulrahman/LLMFlow.git
cd LLMFlow
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
```

On Windows, open the `run.bat` file instead.

```bash
uvicorn main:app --reload
```

The backend will be running at: http://localhost:8000
```bash
cd frontend
npm install
npm start
```

The frontend will be available at: http://localhost:3000
- Open the web interface in your browser at http://localhost:3000.
- Upload a PDF research paper.
- The system will generate a summary of the document.
- Type your question related to the document in the chat.
- The system retrieves relevant sections and generates an AI-powered response.
| Method | Endpoint | Description |
|---|---|---|
| POST | `/upload-file` | Upload a research paper (PDF) and extract sections. |
| POST | `/process-message` | Process user queries with the selected LLM. |
| GET | `/` | Check if the backend is running. |
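As a rough illustration, a query could be sent to the chat endpoint like this. The JSON field names (`message`, `model`) are assumptions for the sketch; check the Pydantic model in the backend for the actual request schema.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def build_query_payload(message: str, model: str = "llama3.2") -> bytes:
    """Serialize a chat query as a JSON body. Field names are assumed,
    not taken from the project's schema."""
    return json.dumps({"message": message, "model": model}).encode("utf-8")

def ask_backend(message: str, model: str = "llama3.2") -> dict:
    """POST the query to /process-message (requires the backend to be running)."""
    req = urllib.request.Request(
        f"{BASE_URL}/process-message",
        data=build_query_payload(message, model),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The same request can be issued from the React frontend with the Fetch API using an equivalent JSON body.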
To use GPT-based processing, create a `.env` file in the backend directory and add:

```
OPENAI_API_KEY=your_openai_api_key
```
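A minimal sketch of how the backend might read this key (the project may load `.env` automatically, e.g. via python-dotenv; here the variable is read straight from the environment):

```python
import os

def get_openai_key() -> str:
    """Fetch the OpenAI API key from the environment, failing loudly if unset."""
    key = os.getenv("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; GPT processing is disabled.")
    return key
```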
- Enhanced Document Processing – Better PDF parsing and section extraction.
- Multi-Document Support – Upload and interact with multiple documents.
- Advanced Query Matching – Improve accuracy in retrieving document sections.
Contributions are welcome! Feel free to open an issue or submit a pull request.
MIT License. See LICENSE for more details.
