Voice-first AI assistant for smart glasses
Say "Hey Mentra", ask a question, and get a concise spoken or displayed response.
See what you see. Search the web. Remember context.
Mentra AI is an intelligent voice assistant for smart glasses. It adapts to your hardware—whether your glasses have a HUD display, camera, or speakers—and delivers responses in the most appropriate format.
- Voice activation — Say "Hey Mentra" to start
- Vision — Answers questions about what you're seeing (camera glasses)
- Web search — Real-time search with concise summaries
- Context aware — Knows your location, time, weather, and conversation history
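The "Hey Mentra" activation above can be sketched as a simple transcript check. This is an illustrative sketch only, not the actual MentraOS detection logic; the function names and normalization rules are assumptions.

```typescript
// Illustrative wake-word handling: normalize a speech transcript and test
// whether it starts with the wake phrase. Names and matching rules are
// hypothetical, not the MentraOS implementation.
const WAKE_PHRASE = "hey mentra";

function normalize(transcript: string): string {
  return transcript
    .toLowerCase()
    .replace(/[^a-z\s]/g, "") // strip punctuation
    .trim()
    .replace(/\s+/g, " ");    // collapse whitespace
}

function isWakeWord(transcript: string): boolean {
  return normalize(transcript).startsWith(WAKE_PHRASE);
}

// Everything after the wake phrase is treated as the user's question.
function extractQuery(transcript: string): string | null {
  if (!isWakeWord(transcript)) return null;
  return normalize(transcript).slice(WAKE_PHRASE.length).trim();
}
```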
| Type | Input | Output |
|---|---|---|
| HUD + Mic | Voice | Text on display |
| Camera + Speaker + Mic | Voice + Camera | Spoken responses |
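The adaptation in the table above could look something like the following. The capability fields and function names here are hypothetical, not the MentraOS SDK's actual types; it is a minimal sketch of the hardware-driven choice between displayed and spoken responses.

```typescript
// Hypothetical capability model -- field names are illustrative.
interface Capabilities {
  hasDisplay: boolean;
  hasSpeaker: boolean;
  hasCamera: boolean;
}

type OutputMode = "text" | "speech";

// Prefer the HUD when one is present, otherwise fall back to audio,
// mirroring the hardware table above.
function pickOutputMode(caps: Capabilities): OutputMode {
  if (caps.hasDisplay) return "text";
  if (caps.hasSpeaker) return "speech";
  throw new Error("glasses have no supported output device");
}
```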
- Install MentraOS: get.mentraglass.com
- Install Bun: bun.sh
- Set up ngrok: `brew install ngrok`, then create a static URL
- Go to console.mentra.glass
- Sign in and click "Create App"
- Set a unique package name (e.g., `com.yourName.mentraAI`)
- Enter your ngrok URL as "Public URL"
- Add microphone and camera permissions
```shell
# Install
git clone https://github.com/Mentra-Community/Mentra-AI.git
cd Mentra-AI
bun install
cp .env.example .env

# Configure .env with your credentials:
#   PORT, PACKAGE_NAME, MENTRAOS_API_KEY (required)
#   GOOGLE_GENERATIVE_AI_API_KEY, GOOGLE_MAPS_API_KEY (optional)
```
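A filled-in `.env` might look like the following; all values are placeholders, and the two Google keys can be omitted if you don't need vision or location features.

```env
PORT=3000
PACKAGE_NAME=com.yourName.mentraAI
MENTRAOS_API_KEY=your_mentraos_api_key
GOOGLE_GENERATIVE_AI_API_KEY=your_gemini_api_key
GOOGLE_MAPS_API_KEY=your_maps_api_key
```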
```shell
# Start the dev server
bun run dev

# Expose it via ngrok
ngrok http --url=<YOUR_NGROK_URL> 3000
```

License: MIT