This guide helps you deploy your own document RAG pipeline with Open-WebUI and a local LLM.
- vLLM running in Docker
- Open-WebUI and the Pipelines container running in Docker
- A small local 7B Mistral model
- Everything managed with Docker Compose

All Docker Compose files are included.
Deploy the Pipelines container using docker-compose.
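A minimal compose sketch for the whole stack, assuming the official images `vllm/vllm-openai`, `ghcr.io/open-webui/open-webui`, and `ghcr.io/open-webui/pipelines` (service names, ports, volumes, and the model tag are illustrative; adapt them to the compose files shipped with this guide):

```yaml
services:
  vllm:
    image: vllm/vllm-openai:latest
    # Model tag is an example; swap in the Mistral 7B variant you use.
    command: --model mistralai/Mistral-7B-Instruct-v0.2
    ports:
      - "8000:8000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data

  pipelines:
    image: ghcr.io/open-webui/pipelines:main
    ports:
      - "9099:9099"
    volumes:
      - pipelines:/app/pipelines

volumes:
  open-webui:
  pipelines:
```

After `docker compose up -d`, connect Open-WebUI to the Pipelines endpoint under Admin Settings → Connections.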
List all containers:

```bash
docker ps --format '{{.Names}}'
```

Find your pipelines container, then enter it (adjust the container name if yours differs):

```bash
docker exec -it open-webui-pipelines-1 /bin/bash
```

Then install all necessary libraries:

```bash
pip install docx2txt llama-index llama-index-core llama-index-llms-openai-like llama-index-readers-file pymupdf llama-index-embeddings-huggingface
apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
```

Then edit your valves:
- Edit the path to your PDF document.
- Choose an embedding model that suits your needs.
- Edit the model prompt as you like.
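The three edits above can be sketched as follows. This is a hypothetical simplification of a pipeline file: the real Pipelines framework defines `Valves` as a pydantic `BaseModel` inside a `Pipeline` class, and the field names here are illustrative, not taken from any published pipeline. A plain dataclass is used so the sketch stays dependency-free.

```python
from dataclasses import dataclass


@dataclass
class Valves:
    # Path to the PDF the pipeline should index (edit to your document).
    DOCUMENT_PATH: str = "/app/pipelines/docs/my_document.pdf"
    # HuggingFace embedding model; pick one suited to your language/domain.
    EMBEDDING_MODEL: str = "BAAI/bge-small-en-v1.5"
    # System prompt framing how the LLM answers from retrieved context.
    SYSTEM_PROMPT: str = (
        "Answer strictly from the provided context. "
        "If the answer is not in the context, say you don't know."
    )


class Pipeline:
    def __init__(self):
        self.valves = Valves()

    def pipe(self, user_message: str) -> str:
        # In the real pipeline you would build a llama-index query engine
        # here, roughly (imports assumed installed inside the container):
        #   docs = SimpleDirectoryReader(
        #       input_files=[self.valves.DOCUMENT_PATH]).load_data()
        #   index = VectorStoreIndex.from_documents(docs, embed_model=...)
        #   return str(index.as_query_engine().query(user_message))
        # Placeholder return so the sketch runs without llama-index:
        return f"[would query {self.valves.DOCUMENT_PATH}] {user_message}"
```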
And you are good to go. Just upload the modified .py file to Open-WebUI and use your RAG pipeline.