Multi-LLM-Backend Support #1
Open
feat(llm): integrate Google Gemini as a new provider
Refactor the LLM client to support multiple backends by introducing a BaseLLMClient abstract class.
This change allows for seamless switching between OpenAI-compatible APIs and the new Gemini client via environment variables.
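The abstraction described above might look like the following sketch. The class and method names (`BaseLLMClient`, `complete`, and the two concrete subclasses) are illustrative assumptions, not taken from the diff; the real clients would call out to their respective APIs instead of returning stub strings.

```python
import abc


class BaseLLMClient(abc.ABC):
    """Common interface implemented by every LLM backend."""

    @abc.abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for `prompt`."""


class OpenAIClient(BaseLLMClient):
    def complete(self, prompt: str) -> str:
        # A real implementation would call an OpenAI-compatible API here.
        return f"openai: {prompt}"


class GeminiClient(BaseLLMClient):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the google-generativeai SDK here.
        return f"gemini: {prompt}"
```

Because both concrete clients satisfy the same interface, callers in the web and CLI applications can depend on `BaseLLMClient` alone and stay agnostic about which backend is active.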
Add implementation using the google-generativeai SDK.
Create a factory to dynamically select the provider.
Update configuration to handle both OpenAI and Gemini API keys.
Refactor web and CLI applications to use the new client factory.
Add comprehensive unit tests for the new client and the factory.
Update the documentation with instructions for configuring Gemini.
Introduce a file to maintain repository cleanliness.
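The environment-variable-driven provider selection above could be sketched as a small factory. The `LLM_PROVIDER` variable name and the stub client classes are assumptions for illustration; the actual variable name and classes would come from the PR's configuration module.

```python
import os


class StubOpenAIClient:
    """Stand-in for the real OpenAI-compatible client."""
    name = "openai"


class StubGeminiClient:
    """Stand-in for the real Gemini client."""
    name = "gemini"


# Registry mapping provider names to client classes.
_PROVIDERS = {
    "openai": StubOpenAIClient,
    "gemini": StubGeminiClient,
}


def create_llm_client(provider=None):
    """Instantiate the backend named by `provider`, falling back to the
    LLM_PROVIDER environment variable, then to "openai"."""
    name = (provider or os.environ.get("LLM_PROVIDER", "openai")).lower()
    try:
        return _PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {name!r}") from None
```

With a factory like this, switching backends is a matter of setting one variable before launching the web or CLI app, e.g. `LLM_PROVIDER=gemini`, with no code changes required.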