This repository contains practical Python implementations for interacting with locally hosted Large Language Models (LLMs) using Ollama. It demonstrates how to utilize both Ollama's native REST API and its OpenAI-compatible endpoints to build entirely offline, privacy-first AI scripts.
The project consists of three main implementation strategies:
Demonstrates how to swap out paid OpenAI endpoints for local Ollama endpoints using the official openai Python SDK.
- Connects to `http://localhost:11434/v1`.
- Sends a system and user prompt to the local `qwen-nothink` model.
- Uses the `rich` library to render the response as beautifully formatted Markdown right in the terminal.
Shows how to interact with Ollama natively without external AI SDKs.
- Uses the `requests` library to POST data to the `/api/generate` endpoint.
- Features an interactive command-line loop where the user can input custom prompts.
- Handles JSON parsing and error management.
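A sketch of the native approach, assuming the default Ollama port and using `stream: false` so the endpoint returns a single JSON object:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "qwen-nothink") -> str:
    """POST a prompt to Ollama's native generate endpoint and return the reply text."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    try:
        resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
        resp.raise_for_status()
        return resp.json()["response"]
    except (requests.RequestException, KeyError, ValueError) as exc:
        return f"Error talking to Ollama: {exc}"

# Interactive loop (requires Ollama to be running):
# while (prompt := input("You: ").strip()):
#     print(generate(prompt))
```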
A powerful script that allows a local vision-language model (VLM) to "see" and describe local images.
- Automatically scans an `images/` directory for `.png` and `.webp` files.
- Converts images into Base64 data URIs.
- Passes the images alongside a prompt to the local AI for detailed descriptions.
- Renders the descriptions in terminal Markdown.
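The scan-and-encode steps can be sketched with the standard library alone (directory and helper names here are illustrative):

```python
import base64
import mimetypes
from pathlib import Path

def find_images(directory: str = "images") -> list[Path]:
    """Collect .png and .webp files from the given directory."""
    root = Path(directory)
    return sorted(p for p in root.glob("*") if p.suffix.lower() in {".png", ".webp"})

def image_to_data_uri(path: Path) -> str:
    """Read an image file and return its contents as a Base64 data URI."""
    mime, _ = mimetypes.guess_type(path.name)
    if mime is None:
        mime = "application/octet-stream"
    encoded = base64.b64encode(path.read_bytes()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

Each resulting URI can then be attached to a chat message as an `image_url` content part and sent to the local VLM.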
- Python 3.x
- Ollama: For running the local AI models (`qwen-nothink`).
- OpenAI Python SDK: Used as a standardized API wrapper for local models.
- Requests: For native HTTP API calls.
- Rich: For terminal styling and markdown rendering.
1. **Install Ollama**

   Ensure you have Ollama installed and running on your machine.

2. **Pull the required models**

   Pull the models referenced in the scripts (e.g., `qwen-nothink`, or your model of choice):

   ```bash
   ollama run qwen-nothink
   ```
3. **Set up the Python environment**

   Create and activate a virtual environment:

   ```bash
   python -m venv venv
   # On Windows:
   venv\Scripts\activate
   # On Mac/Linux:
   source venv/bin/activate
   ```
4. **Install dependencies**

   Install the required Python packages from the `requirements.txt` file:

   ```bash
   pip install -r requirements.txt
   ```
With the virtual environment active and Ollama running locally, simply execute any of the scripts:
```bash
python basic.py
python ollama_api.py
python image_parser.py
```

