Cleanup: stale enabled models, refresh_items scheduling, README auto-config docs #581
Merged
dale-wahl merged 2 commits into ollama_management on Mar 10, 2026
Conversation
…ADME docs Co-authored-by: dale-wahl <32108944+dale-wahl@users.noreply.github.com>
Copilot AI changed the title from [WIP] Add Ollama setup via Docker and manage LLM models UI to Cleanup: stale enabled models, refresh_items scheduling, README auto-config docs on Mar 10, 2026
Three follow-up fixes from the PR review of the Ollama/LLM management feature.
Changes
ollama_manager.py — reconcile llm.enabled_models on refresh
After updating llm.available_models, refresh_models() now diffs against llm.enabled_models and removes any model keys no longer present in the available list. This prevents stale enabled entries from accumulating silently across delete/re-pull cycles.

refresh_items.py — disable no-op scheduling
ensure_job is commented out so ItemUpdater no longer enqueues itself every 60 seconds to do nothing. The worker class is preserved for future use, with an explanatory comment.

docker/README.md — clarify auto- vs. manual LLM config
Splits "Configuring 4CAT to use Ollama" into two subsections: docker_setup.py auto-sets the LLM fields on first startup when using the sidecar override, so users can skip the manual steps; "Using an external Ollama server" now cross-references the manual config steps explicitly.
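The reconciliation step in the first change above can be sketched roughly as follows. This is a minimal illustration, not the actual 4CAT code: the function name `reconcile_enabled_models` and the standalone list arguments are hypothetical stand-ins for the diff that refresh_models() performs against the llm.enabled_models setting.

```python
def reconcile_enabled_models(available_models, enabled_models):
    """Drop enabled-model keys that no longer exist in the available list.

    Hypothetical sketch of the reconciliation described in the PR: after
    llm.available_models is refreshed, any llm.enabled_models entry that is
    no longer present in the refreshed list is removed, so deleted or
    re-pulled models do not leave stale enabled entries behind.
    """
    available = set(available_models)
    # Keep only entries that still exist in the refreshed available list
    return [model for model in enabled_models if model in available]


# Example: "llama2:7b" was deleted from the Ollama server, so it is dropped
enabled = reconcile_enabled_models(
    available_models=["mistral:7b", "phi3:mini"],
    enabled_models=["mistral:7b", "llama2:7b"],
)
# enabled == ["mistral:7b"]
```

Preserving the order of the original enabled list (rather than returning a set) keeps the setting stable for any UI that displays the enabled models in a fixed order.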