Two brainwaves, one soundtrack: links two Emotiv EEG headsets over the network and fuses their emotional states into a single real-time AI-generated music stream.
Each participant's headset feeds into a local brain processor. The two emotion streams are merged on the host and sent to Google Lyria, which generates music that reflects the blended emotional state of both people in real time.
```
Person A ─ Emotiv headset ──▶ client_brain_processor.py ──(HTTP)──▶ ┐
                                                                    ├──▶ social_audio_service.py ──▶ Google Lyria ──▶ 🎵
Person B ─ Emotiv headset ──▶ host brain_processor.py ────(HTTP)──▶ ┘
```
- Each device runs a brain processor that converts raw EEG → an emotion state
- `social_audio_service.py` collects both states and runs an emotion fusion algorithm
- The blended state is converted to a detailed music prompt (BPM, key, instruments, dynamics)
- Google Lyria streams the generated audio at 48 kHz stereo, < 500 ms latency
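The fusion algorithm itself isn't detailed here. A minimal sketch, assuming each emotion stream reduces to a valence/arousal pair with a classifier confidence used as the blend weight (all names and fields below are illustrative, not the project's actual representation):

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float     # -1.0 (negative) .. 1.0 (positive)
    arousal: float     # 0.0 (calm) .. 1.0 (excited)
    confidence: float  # classifier confidence, used as the blend weight

def fuse(a: EmotionState, b: EmotionState) -> EmotionState:
    """Confidence-weighted average of two emotion states (illustrative only)."""
    total = a.confidence + b.confidence
    if total == 0:
        total = 1.0  # avoid division by zero when both classifiers abstain
    wa, wb = a.confidence / total, b.confidence / total
    return EmotionState(
        valence=wa * a.valence + wb * b.valence,
        arousal=wa * a.arousal + wb * b.arousal,
        confidence=max(a.confidence, b.confidence),
    )
```

A weighted average keeps the blended state inside the same valence/arousal space, so the downstream prompt mapping needs no special cases.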
| Emotion | BPM | Key | Instruments |
|---|---|---|---|
| Happy | 120β140 | Major | Piano arpeggios, light percussion |
| Sad | 60β80 | Minor | Strings, sparse piano |
| Angry | 140β160 | Diminished | Distorted guitar, aggressive drums |
| Relaxed | 70β90 | Major pentatonic | Ambient pads, soft guitar |
16 emotion classes are supported in total.
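As a sketch of the prompt-generation step, the table rows can be encoded as a style map and modulated by intensity. The concrete keys, class names, and dynamics rule below are assumptions for illustration, not the project's actual 16-class mapping:

```python
# Illustrative style map built from four of the table rows above.
STYLE = {
    "happy":   {"bpm": (120, 140), "key": "C major",            "instruments": "piano arpeggios, light percussion"},
    "sad":     {"bpm": (60, 80),   "key": "A minor",            "instruments": "strings, sparse piano"},
    "angry":   {"bpm": (140, 160), "key": "B diminished",       "instruments": "distorted guitar, aggressive drums"},
    "relaxed": {"bpm": (70, 90),   "key": "G major pentatonic", "instruments": "ambient pads, soft guitar"},
}

def build_prompt(emotion: str, intensity: float) -> str:
    """Turn a blended emotion into a text prompt for the music model."""
    style = STYLE[emotion]
    lo, hi = style["bpm"]
    bpm = round(lo + intensity * (hi - lo))  # intensity in [0, 1] picks a BPM within the range
    dynamics = "forte" if intensity > 0.6 else "mezzo-piano"
    return f"{style['instruments']}, {style['key']}, {bpm} BPM, {dynamics} dynamics"
```

For example, `build_prompt("happy", 0.0)` stays at the bottom of the happy BPM range, while `build_prompt("angry", 1.0)` pushes to 160 BPM with forte dynamics.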
- Hardware: 2 × Emotiv EPOC X (Cortex SDK v3)
- EEG processing: Python · numpy · asyncio
- Services: FastAPI / Uvicorn
- Music: Google Lyria real-time (`lyria-realtime-exp`)
- Audio: sounddevice · pyaudio (48 kHz stereo)
```bash
pip install -r EEG/requirements.txt

# Host machine (also runs the audio service)
python EEG/host_main.py

# Client machine (second headset)
python EEG/client_main.py --host <HOST_IP>
```

Requires two Emotiv headsets, the Cortex App on each machine, and a Google API key with Lyria access.