Summary
Expose Flare's LoRA adapter loading/merging through BrowserAI's API, enabling task-specific model specialization with tiny (~5-50MB) adapter files.
Use case
```ts
const ai = new BrowserAI({ engine: 'flare' });
await ai.loadModel('llama-3.2-1b-flare');

// Load a code-specialized LoRA adapter
await ai.loadAdapter('code-assistant-lora', {
  url: 'https://huggingface.co/.../adapter.safetensors',
  alpha: 16
});

// Now the model is specialized for code
const response = await ai.generateText('Write a Python function...');
```
Flare API
Flare already supports LoRA:
- `FlareEngine.merge_lora(adapter_bytes)` — loads a SafeTensors adapter and merges it into the base weights at load time
- `FlareEngine.merge_lora_with_alpha(adapter_bytes, alpha)` — same, with a custom alpha
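To make the `alpha` parameter concrete, here is a minimal numeric sketch of what a LoRA merge does, assuming the standard formulation `W' = W + (alpha / r) * (B × A)`. The real `FlareEngine.merge_lora_with_alpha` operates on SafeTensors bytes; the matrices and helper names below are purely illustrative.

```typescript
type Matrix = number[][];

// Plain matrix product: (out × r) × (r × in) -> (out × in)
function matmul(b: Matrix, a: Matrix): Matrix {
  return b.map((row) =>
    a[0].map((_, j) => row.reduce((sum, v, k) => sum + v * a[k][j], 0))
  );
}

// Merge a rank-r LoRA update into a base weight matrix.
// Scaling by alpha / r is the usual LoRA convention.
function mergeLora(w: Matrix, a: Matrix, b: Matrix, alpha: number): Matrix {
  const r = a.length; // LoRA rank = number of rows of A
  const scale = alpha / r;
  const delta = matmul(b, a);
  return w.map((row, i) => row.map((v, j) => v + scale * delta[i][j]));
}

// Example: rank-2 adapter with alpha = 16 -> scale = 8
const w: Matrix = [[0, 0], [0, 0]];
const a: Matrix = [[1, 0], [0, 1]];
const b: Matrix = [[1, 2], [3, 4]];
const merged = mergeLora(w, a, b, 16); // [[8, 16], [24, 32]]
```

Because the update is folded directly into `W`, the merge is destructive, which is why unmerging means reloading the base model.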
BrowserAI API additions
```ts
interface BrowserAI {
  loadAdapter(id: string, options?: { url: string, alpha?: number }): Promise<void>;

  // Note: unmerging requires reloading the base model
}
```
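Wiring this to Flare is mostly plumbing: fetch the adapter bytes and hand them to the existing merge calls. The sketch below is a hypothetical implementation; the `FlareEngine` method names come from the proposal above, but the `fetchBytes` indirection and option handling are assumptions for illustration.

```typescript
interface FlareLike {
  merge_lora(adapterBytes: Uint8Array): void;
  merge_lora_with_alpha(adapterBytes: Uint8Array, alpha: number): void;
}

interface LoadAdapterOptions {
  url: string;
  alpha?: number;
}

// Hypothetical loadAdapter body: download the SafeTensors file, then
// merge it into the already-loaded base model. fetchBytes is injectable
// so the flow can be exercised without a network.
async function loadAdapter(
  engine: FlareLike,
  options: LoadAdapterOptions,
  fetchBytes: (url: string) => Promise<Uint8Array> = async (url) =>
    new Uint8Array(await (await fetch(url)).arrayBuffer())
): Promise<void> {
  const bytes = await fetchBytes(options.url);
  // Merging mutates the loaded weights; there is no unmerge short of
  // reloading the base model.
  if (options.alpha !== undefined) {
    engine.merge_lora_with_alpha(bytes, options.alpha);
  } else {
    engine.merge_lora(bytes);
  }
}
```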
Adapter registry
Pre-registered adapters (community contributions welcome):
- Code assistant LoRA
- Creative writing LoRA
- Instruction-following LoRA
- Math reasoning LoRA
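One possible shape for the registry is a simple id-to-metadata table that `loadAdapter` consults when no explicit URL is given. The ids below mirror the list above, but the URLs and default alphas are placeholders, not real published adapters.

```typescript
interface AdapterEntry {
  url: string;
  defaultAlpha: number;
}

// Placeholder entries — real URLs would point at hosted .safetensors files.
const ADAPTER_REGISTRY: Record<string, AdapterEntry> = {
  "code-assistant-lora":        { url: "https://example.com/adapters/code-assistant.safetensors", defaultAlpha: 16 },
  "creative-writing-lora":      { url: "https://example.com/adapters/creative-writing.safetensors", defaultAlpha: 16 },
  "instruction-following-lora": { url: "https://example.com/adapters/instruction.safetensors", defaultAlpha: 32 },
  "math-reasoning-lora":        { url: "https://example.com/adapters/math.safetensors", defaultAlpha: 16 },
};

// Resolve an adapter id to load options, letting callers override alpha.
function resolveAdapter(id: string, alpha?: number): AdapterEntry {
  const entry = ADAPTER_REGISTRY[id];
  if (!entry) throw new Error(`Unknown adapter id: ${id}`);
  return { url: entry.url, defaultAlpha: alpha ?? entry.defaultAlpha };
}
```

Keeping the table as plain data would let community contributions land as small PRs that add one entry each.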
Depends on
Related