Fix issues encountered in the Android build #370
Leuconoe wants to merge 4 commits into undreamai:main
Conversation
Leuconoe
commented
Nov 21, 2025
- Modified to allow model downloads at runtime when includeInBuild is false.
- Modified to enable loading grammar data in Android environments.
- Modified file transfer paths during build (antivirus issue).
- Modified exception handling for results in llama.cpp.
- …etection when using the temporaryCachePath.
Thank you for the PR 🥇 !!
foreach (ModelEntry modelEntry in modelEntries)
{
-   if (!modelEntry.includeInBuild) continue;
+   if (!modelEntry.includeInBuild && string.IsNullOrEmpty(modelEntry.url)) continue;
If the model is not included in the build, we shouldn't download it.
If you want to achieve the above behavior, you can include the model in the build but select the "Download on Build" option in the LLM manager.
@amakropoulos So, does that mean it's not a feature that downloads at runtime?
yes, it downloads at runtime.
what would you like to achieve?
@amakropoulos Hello. I apologize in advance as I'm using a translator, and there may be some misunderstandings.
I needed a feature to download models at runtime via URL without including them in the build.
After reviewing my setup, I confirmed that the current version works correctly with the following configuration:
- `includeInBuild`: set to false
- `downloadOnStart`: set to true
- `LLM.WaitUntilModelSetup()` to wait for the download to complete
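For reference, this setup could be wired up roughly as in the sketch below. `LLM.WaitUntilModelSetup()` and the `includeInBuild`/`downloadOnStart` settings are taken from this discussion; the component scaffolding, field names, and log message are illustrative assumptions, not actual LLMUnity code.

```csharp
using UnityEngine;
using LLMUnity;

// Hedged sketch of the configuration described above:
// includeInBuild = false, downloadOnStart = true, then wait for the download.
public class RuntimeModelLoader : MonoBehaviour
{
    public LLM llm;  // assigned in the inspector, configured as described above

    async void Start()
    {
        // Waits until the model download triggered by downloadOnStart has completed
        await LLM.WaitUntilModelSetup();
        Debug.Log("Model downloaded and ready for inference");
    }
}
```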
I believe the reason I proposed code modifications earlier was due to incorrect settings on my end.
Thank you for the review.
Additional question: I couldn't find the "Download on Build" option you mentioned. Did you perhaps mean the "Download On Start" option?
yes, correct, I meant "Download On Start"
protected SemaphoreSlim chatLock = new SemaphoreSlim(1, 1);
protected string chatTemplate;
protected ChatTemplate template = null;
+   protected Task grammarTask;
Great implementation for grammar! This is not needed anymore in release v3.0.0, because I load the grammar from the file directly and just keep it as a serialized string.
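A minimal sketch of the approach described here, assuming the grammar file is read once and kept as a plain serialized string instead of an async `Task`. The class, field, and method names below are illustrative, not the actual LLMUnity v3.0.0 code.

```csharp
using System.IO;
using UnityEngine;

// Illustrative sketch: keep the grammar as a serialized string,
// read directly from the file, rather than loading it via a Task.
public class GrammarHolder : MonoBehaviour
{
    [SerializeField] string grammarString;  // serialized with the scene/prefab

    public void LoadGrammar(string grammarPath)
    {
        // Read the grammar file synchronously; no grammarTask needed
        grammarString = File.ReadAllText(grammarPath);
    }
}
```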
    response = $"{{\"data\": [{responseArray}]}}";
}
return getContent(JsonUtility.FromJson<Res>(response));
+   try
Good error catching: this is now handled by LlamaLib internally.
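For context, the defensive parsing from the diff could look like the sketch below. `Res`, `getContent`, and `responseArray` come from the diff itself; the catch body is an illustrative assumption (per the review, this handling now lives inside LlamaLib).

```csharp
// Illustrative sketch only: wrap the JSON deserialization so a malformed
// llama.cpp response does not propagate an exception to the caller.
try
{
    response = $"{{\"data\": [{responseArray}]}}";
    return getContent(JsonUtility.FromJson<Res>(response));
}
catch (System.Exception e)
{
    Debug.LogError($"Failed to parse llama.cpp response: {e.Message}");
    return default;
}
```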
Yes. I'll work on a new branch and submit a pull request.