Add Ollama Integration Support #1

Open
moah0911 wants to merge 4 commits into Prateekkp:main from moah0911:main
Open

Add Ollama Integration Support#1
moah0911 wants to merge 4 commits into
Prateekkp:mainfrom
moah0911:main

Conversation

@moah0911

This PR introduces support for Ollama, enabling users to run and interact with local LLMs seamlessly within SnapBase. The integration maintains compatibility with existing workflows while expanding the range of supported inference backends.

Changes included:

  • Added Ollama client initialization and configuration options
  • Implemented Ollama-compatible request/response handling
  • Updated documentation with setup instructions for Ollama
  • Minor refactoring for better modularity of inference backends

This enhancement allows SnapBase users to leverage powerful open-source models locally without relying on cloud-based APIs, improving privacy, cost-efficiency, and offline usability.
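
As a rough illustration of the request/response handling described above, the sketch below talks to Ollama's standard REST API (`GET /api/tags` as a connection check, `POST /api/generate` for a completion). The function names and the `llama3` default are illustrative assumptions, not the PR's actual code.

```python
# A minimal sketch, assuming Ollama's default local endpoint.
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

def ollama_available(base_url: str = OLLAMA_URL) -> bool:
    """Connection test: /api/tags answers 200 when the server is up."""
    try:
        return requests.get(f"{base_url}/api/tags", timeout=2).status_code == 200
    except requests.ConnectionError:
        return False

def ollama_generate(prompt: str, model: str = "llama3",
                    base_url: str = OLLAMA_URL) -> str:
    """Single non-streaming completion via /api/generate."""
    resp = requests.post(
        f"{base_url}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]  # the generated text
```

Calling `ollama_generate("...")` then only requires a running `ollama serve` with the chosen model already pulled.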

qwen-intl and others added 4 commits December 26, 2025 14:03
- Added `llm/ollama_generator.py` with Ollama connection testing and SQL generation functions
- Created `test_ollama.py` for end-to-end validation of Ollama integration
- Updated `app/cli.py` to support dynamic LLM provider switching between NVIDIA and Ollama
- Modified `llm/__init__.py` to expose new Ollama functions in public API
- Enhanced `llm/generator.py` with Ollama-specific logic and fallback handling
- Updated `main.py` to include Ollama provider management and runtime validation
- Updated `pyproject.toml` to reflect version bump and added Ollama keyword
- Improved `.gitignore` to exclude additional temporary and IDE files
- Maintained consistent error handling and user feedback across both LLM providers

Update from task b8d7a3a6-68cc-4abd-8a0d-0c1431da4906

Update from task b8d7a3a6-68cc-4abd-8a0d-0c1431da4906
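
The `app/cli.py` commit above mentions dynamic provider switching between NVIDIA and Ollama with fallback handling. A minimal sketch of what such a dispatcher could look like follows; every name here (`PROVIDERS`, `generate_sql`, the placeholder functions) is a hypothetical stand-in, not the PR's actual implementation.

```python
# Hypothetical provider registry with Ollama-to-NVIDIA fallback.
from typing import Callable

def nvidia_generate(prompt: str) -> str:
    """Placeholder for the existing NVIDIA-backed generator."""
    raise NotImplementedError("wire this to the NVIDIA path in llm/generator.py")

def ollama_generate(prompt: str) -> str:
    """Placeholder for the new Ollama-backed generator."""
    raise NotImplementedError("wire this to llm/ollama_generator.py")

PROVIDERS: dict[str, Callable[[str], str]] = {
    "nvidia": nvidia_generate,
    "ollama": ollama_generate,
}

def generate_sql(prompt: str, provider: str = "ollama") -> str:
    """Dispatch to the selected provider; fall back to NVIDIA if it fails."""
    try:
        return PROVIDERS[provider](prompt)
    except Exception as exc:
        if provider != "nvidia":
            print(f"{provider} failed ({exc}); falling back to nvidia")
            return PROVIDERS["nvidia"](prompt)
        raise
```

Keeping both backends behind one dispatch function is one way to satisfy the commit's stated goal of consistent error handling and user feedback across providers.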
@Prateekkp
Owner

I appreciate your work integrating Ollama support.

I have a few concerns:
• .gitignore was removed; it safeguards the repo, so please restore it.
• The Pylance config looks specific to your environment; can you justify it or move it to local settings instead?
• The integration is currently hard-coded to the llama model. Can we make the model name configurable so users can select other models?

Once these are addressed, I’ll be happy to continue the review.
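
On the configurability point, one possible shape is sketched below. This is only an illustration; `OLLAMA_MODEL` and `--model` are hypothetical names, not anything already in the PR.

```python
# Hypothetical: let users pick any model pulled into their local Ollama
# instance via a CLI flag, with an environment variable as the default.
import argparse
import os

def parse_model() -> str:
    parser = argparse.ArgumentParser(description="SnapBase LLM options (sketch)")
    parser.add_argument(
        "--model",
        default=os.environ.get("OLLAMA_MODEL", "llama3"),
        help="Ollama model name, e.g. llama3, mistral, codellama",
    )
    return parser.parse_args().model

if __name__ == "__main__":
    print(f"Using Ollama model: {parse_model()}")
```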
