
LLM "Spotlight" (Echo)

A Spotlight-style launcher for quick LLM actions. Cross-platform (macOS/Windows/Linux), built with Electron + HTML/JS and styled with Tailwind.

  • Pops up with a global hotkey (default Alt+Space)
  • Auto-reads text/image from the clipboard on open
  • Actions via hotkeys: Proofread, Translate → English, Translate → …, Summarize, Rewrite in style
  • Writes the LLM response back to the clipboard automatically
  • Works with OpenAI, OpenAI-compatible endpoints, or Ollama (local)

Prerequisites

  • Node.js 18+ and npm
  • An LLM provider (choose one):
    • OpenAI API key
    • OpenAI-compatible server (base URL + key)
    • Ollama running locally (e.g., ollama serve with a model like llama3.1:8b)
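
If you go the Ollama route, a typical local setup (using the example model above) is:

ollama pull llama3.1:8b
ollama serve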

Install

git clone <this-repo> echo
cd echo
npm i

Create your config and env:

cp config.example.json config.json
# .env is NOT checked in
printf "OPENAI_API_KEY=\nOPENAI_COMPAT_KEY=\n" > .env

Edit config.json to select a provider and default model.
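
The exact schema lives in config.example.json; as an illustrative sketch only (key names may differ slightly), a configuration covers the provider choice, the hotkey, a default target language, and the per-provider settings described later in this README:

{
  "provider": "ollama",
  "hotkey": "Alt+Space",
  "targetLanguage": "English",
  "openai": { "model": "gpt-4o-mini", "apiBase": "https://api.openai.com/v1" },
  "openaiCompatible": { "model": "my-model", "apiBase": "https://my-gateway.example.com/v1" },
  "ollama": { "model": "llama3.1:8b", "host": "http://localhost:11434" }
}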


Develop

Tailwind builds CSS → Electron serves the app.

Terminal A (Tailwind watch):

npm run tw:dev

Terminal B (Electron):

npm run dev

Open the app with the global hotkey (Alt+Space by default). Use the ⚙︎ Settings button to change hotkey, provider, model, API base, and a default target language.
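
A global hotkey in Electron is registered with the globalShortcut API; a minimal sketch of what main.js does (the accelerator actually comes from config.json, and the real show/hide logic may differ):

// Sketch: registering the launcher hotkey in an Electron main process (ESM)
import { app, BrowserWindow, globalShortcut } from 'electron';

app.whenReady().then(() => {
  globalShortcut.register('Alt+Space', () => {
    const win = BrowserWindow.getAllWindows()[0];
    if (win) win.isVisible() ? win.hide() : win.show();
  });
});

// Release the shortcut when the app quits
app.on('will-quit', () => globalShortcut.unregisterAll());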


Keyboard Shortcuts

  • Mod+1 – Ask ("regular mode")
  • Mod+2 – Proofread
  • Mod+3 – Translate → English
  • Mod+4 – Translate → (asks for language; uses default if set)
  • Mod+5 – Summarize
  • Mod+6 – Rewrite in style (asks for style)
  • Ctrl/⌘ + Enter – Run the current action
  • Esc – Close the window
  • Clicking outside the window (blur) also hides it.

Providers

OpenAI

  • Set "provider": "openai" in config.json
  • Put your key in .env: OPENAI_API_KEY=...
  • Configure model (e.g., gpt-4o-mini) and base (https://api.openai.com/v1)

OpenAI-Compatible

  • Set "provider": "openaiCompatible"
  • Configure openaiCompatible.apiBase (e.g., a self-hosted gateway)
  • Put your key in .env: OPENAI_COMPAT_KEY=...

Ollama (local)

  • Set "provider": "ollama"
  • Ensure ollama is running (ollama serve)
  • Choose a local model (e.g., llama3.1:8b) and set ollama.host (usually http://localhost:11434)
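
A quick way to check that the host is reachable (assuming the default port) is to list the locally pulled models:

curl http://localhost:11434/api/tags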

Package / Distribute

Build Tailwind once, then package with electron-builder:

npm run tw:build
npm run dist

Artifacts for your platform will appear in dist/.
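
electron-builder reads its configuration from the build key in package.json (or a separate electron-builder.yml). If you need to adjust what gets packaged, a minimal sketch of that block (the appId and file list here are hypothetical; check the repo's actual packaging config) looks like:

"build": {
  "appId": "com.example.echo",
  "files": ["main.js", "preload.cjs", "config.json", "src/**/*", "providers/**/*"],
  "directories": { "output": "dist" }
}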


Project Structure

.
├─ main.js             # Electron main process (ESM)
├─ preload.cjs         # Preload (CommonJS) exposing window.api
├─ config.json         # Runtime settings (copy from config.example.json)
├─ src/
│  ├─ renderer.html    # UI shell
│  ├─ renderer.js      # UI logic & actions
│  ├─ tw.css           # Tailwind entry (source)
│  └─ styles.css       # Generated by Tailwind (do not edit)
└─ providers/
   ├─ providerManager.js
   ├─ openai.js
   └─ ollama.js
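
For orientation, preload.cjs follows the standard contextBridge pattern for exposing window.api to the renderer; a minimal sketch (the method and channel names below are illustrative, not the project's actual surface):

// preload.cjs (sketch): expose a small, explicit API to the renderer
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('api', {
  // Illustrative names; see preload.cjs for the real methods
  readClipboard: () => ipcRenderer.invoke('clipboard:read'),
  runAction: (action, payload) => ipcRenderer.invoke('llm:run', action, payload),
  hideWindow: () => ipcRenderer.send('window:hide'),
});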

Troubleshooting

  • Window doesn’t react / buttons “do nothing”

    • Ensure preload.cjs is used in webPreferences.preload (path built with __dirname); see the sketch at the end of this section.
    • Open DevTools in dev: win.webContents.openDevTools({ mode: 'detach' })
    • Check the Console for errors.
  • “Unable to load preload script”

    • Confirm the file exists and that the preload path uses __dirname (not process.cwd()).
    • Keep preload as CommonJS (preload.cjs).
  • window.api is undefined in renderer

    • Preload didn’t load. Fix the preload path or syntax.
  • Global hotkey doesn’t work

    • Change it in Settings (⚙︎) or edit config.json and restart.
  • Ollama not responding

    • Verify ollama serve is running and the model is pulled: ollama run llama3.1:8b
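
Most of the preload-related issues above come down to how the BrowserWindow is created. A minimal sketch of the relevant part of main.js (ESM, so __dirname has to be rebuilt from import.meta.url):

// Sketch: wiring the preload with an absolute path in an ESM main process
import path from 'node:path';
import { fileURLToPath } from 'node:url';
import { BrowserWindow } from 'electron';

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const win = new BrowserWindow({
  webPreferences: {
    preload: path.join(__dirname, 'preload.cjs'),
    contextIsolation: true,
    nodeIntegration: false,
  },
});

// In dev, a detached DevTools window helps surface preload/renderer errors:
// win.webContents.openDevTools({ mode: 'detach' });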

Security Notes

  • contextIsolation: true in the BrowserWindow
  • Minimal CSP in renderer.html (script-src 'self')
  • API keys live in .env (never committed)

About

My personal AI "launcher" assistant for revising and translating texts.
