I saw that local embeddings / vector search were removed because of package size, model downloads, WASM / ONNX memory pressure, and reliability issues.
Would you consider bringing vector search back as an optional feature using external embedding APIs instead of bundling an embedding model inside CodeGraph?
For example, users could configure an OpenAI-compatible embeddings endpoint, Google Gemini Embeddings, or Cloudflare Workers AI embeddings.
The Google Gemini API appears to offer a free tier for experimentation, so this could let users try semantic search without CodeGraph having to ship or run an embedding model itself.
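Purely as an illustration (the key names and schema below are hypothetical, not an existing CodeGraph config format), the opt-in configuration could look something like:

```json
{
  "vectorSearch": {
    "enabled": true,
    "provider": "openai-compatible",
    "endpoint": "https://api.example.com/v1/embeddings",
    "model": "text-embedding-3-small",
    "apiKeyEnv": "CODEGRAPH_EMBEDDINGS_API_KEY"
  }
}
```

Since OpenAI-compatible embeddings endpoints share a common request/response shape, a single `provider: "openai-compatible"` path could cover many hosted services, with Gemini or Workers AI added as thin adapters. CodeGraph would then only need an HTTP client at index/query time, with no bundled model weights, WASM, or ONNX runtime.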