Sebulba46/document-RAG-pipeline

Guide for making RAG document pipeline for Open-WebUI

This guide helps you deploy your own document RAG pipeline with Open-WebUI and a local LLM.

Setup:

  • vLLM running in Docker
  • Open-WebUI and the pipelines container running in Docker
  • A small local 7B Mistral model
  • Everything managed with Docker Compose

All Docker Compose files are included in this repository.
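The compose files in this repository are the authoritative versions; as a rough orientation, a minimal service definition for the pipelines container could look like the sketch below (the image name and port are the upstream defaults; the host volume path is an assumption):

```yaml
# Minimal sketch of a pipelines service; see the compose files in this
# repository for the full, working configuration.
services:
  pipelines:
    image: ghcr.io/open-webui/pipelines:main   # upstream default image
    ports:
      - "9099:9099"                            # default pipelines API port
    volumes:
      - ./pipelines:/app/pipelines             # assumed host path for pipeline .py files
    restart: unless-stopped
```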

To deploy the pipelines, follow these steps:

Deploy the pipelines container using docker-compose.

List all containers:

docker ps --format '{{.Names}}' 

Find your pipelines container, then open a shell inside it (adjust the container name if yours differs):

docker exec -it open-webui-pipelines-1 /bin/bash

Then install all necessary libraries:

pip install docx2txt llama-index llama-index-core llama-index-llms-openai-like llama-index-readers-file pymupdf llama-index-embeddings-huggingface

apt-get update && apt-get install ffmpeg libsm6 libxext6  -y

Then edit your valves:

(screenshot: pipeline valves in the Open-WebUI admin panel)

Edit the PATH valve so it points to your PDF document:

(screenshot: document PATH valve)
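It is easy to point the valve at a path that is not actually visible inside the container. A quick sanity check can save a debugging round; the path below is a placeholder, not the valve's real default:

```python
from pathlib import Path

def check_document_path(path_str: str) -> str:
    """Report whether the file the PATH valve points at exists in this container."""
    p = Path(path_str)
    if p.is_file():
        return f"OK: {p} ({p.stat().st_size} bytes)"
    return f"Missing: {p} - check the valve value and your volume mounts"

# Placeholder path; substitute the value you entered in the PATH valve.
print(check_document_path("/app/pipelines/docs/example.pdf"))
```

Run this inside the pipelines container (e.g. via `docker exec`) so it sees the same filesystem as the pipeline.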

Choose embeddings that suit your needs:

(screenshot: embedding model valve)

Edit the model prompt as you like:

(screenshot: model prompt valve)
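The actual prompt variables depend on the uploaded .py file; as a generic illustration of the pattern (the placeholder names `{context}` and `{question}` are assumptions, not the pipeline's real variable names), a RAG system prompt usually looks like this:

```python
# Sketch of a RAG prompt template; variable names are illustrative only.
RAG_PROMPT = (
    "You are a helpful assistant. Answer the question using only the "
    "context below. If the answer is not in the context, say so.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

def build_prompt(context: str, question: str) -> str:
    # Fill the template with the retrieved chunks and the user's question.
    return RAG_PROMPT.format(context=context, question=question)
```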

And you are good to go. Upload the modified .py file to Open-WebUI and use your RAG pipeline.
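Under the hood, the uploaded pipeline roughly follows the standard llama-index flow. A hedged sketch of that flow follows; the vLLM URL, model name, embedding model, and document path are all assumptions, and the repository's .py file is the authoritative version:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai_like import OpenAILike

# vLLM exposes an OpenAI-compatible API; the URL and model name are assumptions.
llm = OpenAILike(
    api_base="http://localhost:8000/v1",
    api_key="not-needed",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    is_chat_model=True,
)

# Local HuggingFace embeddings; swap the model for one that suits your needs.
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load the PDF (llama-index-readers-file uses PyMuPDF for .pdf) and index it.
documents = SimpleDirectoryReader(
    input_files=["/app/pipelines/docs/example.pdf"]  # assumed path
).load_data()
index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)

# Ask a question; retrieval and generation happen inside the query engine.
query_engine = index.as_query_engine(llm=llm)
print(query_engine.query("What is this document about?"))
```

This requires the libraries installed earlier in the guide plus a running vLLM server, so it only runs inside the prepared container.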
