# NeuroStream

Lightweight simulated neural streaming dashboard with per-channel signal quality monitoring and incident logging.
## Contents

- Purpose
- What You Get
- Architecture
- Workflow
- Signal Quality
- Incident Logging
- API Endpoints
- How to Run
- Screenshots
- Configuration
- Replay Mode
- Baseline Decoder
- Future Work
## Purpose

NeuroStream simulates a multi-channel neural acquisition device and demonstrates how signal quality can be monitored and logged in real time for engineering and research applications.
## What You Get

- Live neural signal visualization
- Per-channel quality assessment
- Incident logging with timestamps and diagnosis
- Exportable logs (CSV / JSON)
## Architecture

### Backend
- Flask REST API
- SQLite database for samples and events
- Background monitoring thread
### Frontend
- Vanilla JavaScript
- Chart.js for plotting
- Tab-based interface for quality and incidents
## Workflow

- Simulator generates neural samples.
- Samples stored in SQLite.
- Quality metrics computed in sliding windows.
- State transitions logged as incidents.
- Frontend polls APIs and renders charts and tables.
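The steps above can be condensed into one polling loop. The sketch below is illustrative only: the callables passed in stand in for the project's own simulator, storage, metric, and logging code, and all names here are hypothetical.

```python
import time

def monitor_loop(simulate, store, compute_metrics, log_state, channels,
                 steps=1, delay=0.0):
    """Run `steps` iterations of the simulate -> store -> assess cycle.

    The frontend is not part of this loop: it polls the REST APIs
    independently and renders whatever has been stored.
    """
    for _ in range(steps):
        for ch in channels:
            window = simulate(ch)              # 1. generate neural samples
            store(ch, window)                  # 2. persist to SQLite
            metrics = compute_metrics(window)  # 3. windowed quality metrics
            log_state(ch, metrics)             # 4. record state transitions
        time.sleep(delay)                      # pace the sliding window
```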
## Signal Quality

Each channel is classified into one of three states:

- good: signal within normal limits.
- degraded: moderate artifacts detected.
- bad: severe dropout, noise, or clipping.
Four metrics are computed per channel over each sliding window:

- RMS (Root Mean Square): overall signal energy.
- Peak-to-Peak: amplitude range.
- Dropout Fraction: percentage of missing/zero samples.
- Line Noise Ratio: proportion of power at 60 Hz.
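A minimal NumPy sketch of these four metrics, plus one possible thresholding rule for the good/degraded/bad states. The sample rate, state cutoffs, and 60 Hz bandwidth below are illustrative values, not the project's actual configuration.

```python
import numpy as np

FS = 250  # assumed sample rate in Hz

def rms(x):
    """Root mean square: overall signal energy."""
    return float(np.sqrt(np.mean(x ** 2)))

def peak_to_peak(x):
    """Amplitude range of the window."""
    return float(np.max(x) - np.min(x))

def dropout_fraction(x):
    """Fraction of exactly-zero (missing) samples."""
    return float(np.mean(x == 0))

def line_noise_ratio(x, fs=FS, line_hz=60.0, bw=2.0):
    """Proportion of spectral power within +/- bw Hz of the mains line."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs > line_hz - bw) & (freqs < line_hz + bw)
    return float(power[band].sum() / power.sum())

def classify(dropout, noise):
    """Map metrics to a state; the cutoffs here are placeholders."""
    if dropout > 0.2 or noise > 0.5:
        return "bad"
    if dropout > 0.05 or noise > 0.2:
        return "degraded"
    return "good"
```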
## Incident Logging

When a channel enters a degraded or bad state:
- the start timestamp is recorded
- the end timestamp is recorded on recovery
- the duration is computed
- a diagnosis is inferred from metric thresholds
Incidents are stored in the SQLite table `events`.
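As a sketch, the transition logic might look like the following. The `Incident` and `IncidentLog` names and the in-memory lists are illustrative stand-ins; the real project persists finished incidents to the `events` table instead.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Incident:
    channel: int
    state: str                     # "degraded" or "bad"
    start: float
    end: Optional[float] = None    # filled in on recovery
    diagnosis: str = ""

    @property
    def duration(self):
        return None if self.end is None else self.end - self.start

class IncidentLog:
    def __init__(self):
        self.open = {}     # channel -> currently active Incident
        self.closed = []   # finished incidents (the `events` rows)

    def update(self, channel, state, diagnosis="", now=None):
        now = time.time() if now is None else now
        current = self.open.get(channel)
        if state == "good":
            if current:                        # recovery: close the incident
                current.end = now
                self.closed.append(self.open.pop(channel))
        elif current is None or current.state != state:
            if current:                        # e.g. degraded escalated to bad
                current.end = now
                self.closed.append(self.open.pop(channel))
            self.open[channel] = Incident(channel, state, now,
                                          diagnosis=diagnosis)
```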
## API Endpoints

- `/health`
- `/latest`
- `/quality`
- `/events`
- `/export/events.csv`
- `/export/events.json`
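A tiny stdlib-only client for polling these endpoints might look like this. The response payload shapes are not documented in this README, so treat the commented examples as assumptions.

```python
import json
from urllib.request import urlopen

BASE = "http://127.0.0.1:5000"

def get(path):
    """Fetch one endpoint and decode its JSON body."""
    with urlopen(BASE + path) as resp:
        return json.load(resp)

# With the server running:
#   latest  = get("/latest")   # most recent samples
#   quality = get("/quality")  # per-channel state and metrics
#   events  = get("/events")   # incident history
```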
## How to Run

- From inside the backend folder, install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Still inside the backend folder, run the app:

  ```
  python app.py
  ```

- Open a browser at:

  ```
  http://127.0.0.1:5000/
  ```
## Configuration

Edit `config.py` to adjust:
- sample rate
- number of channels
- artifact probabilities
- database path
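An illustrative `config.py` along these lines. Apart from `DATA_SOURCE`, which appears later in this README, the names and defaults below are guesses at what the four knobs might be called; check the real `config.py`.

```python
# Illustrative configuration values only.
SAMPLE_RATE = 250            # samples per second per channel
NUM_CHANNELS = 4             # channels shown on the dashboard
ARTIFACT_PROBABILITY = 0.05  # chance of injecting an artifact per window
DB_PATH = "neurostream.db"   # SQLite database location
DATA_SOURCE = "simulator"    # or "replay"
```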
## Replay Mode

NeuroStream can now replay real EEG from a public motor imagery dataset while keeping the same Flask API, SQLite storage, and frontend dashboard.
- Set `DATA_SOURCE = "replay"` in `backend/config.py`
- Default replay dataset: `BNCI2014_001` via MOABB
- Default replay selection: subject 1, first session, first run
- The dashboard stays at 4 EEG channels by selecting `C3`, `Cz`, `C4`, and `Pz` when available, otherwise the first 4 EEG channels
- Replay writes samples into the same `neural_data` table used by the simulator, so `/latest`, `/quality`, and `/events` continue to work
- The first version uses timed chunked replay from an MNE `Raw` object; `mne-lsl` is included as an optional future hook, but the main app path still writes directly into SQLite for simplicity
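The timed chunked replay described above can be sketched roughly as follows, assuming the recording has already been extracted from the MNE `Raw` object as a NumPy array. The `(ts, channel, value)` column layout of `neural_data` is a guess at the schema, not the project's actual one.

```python
import sqlite3
import time
import numpy as np

def replay(data, fs, db_path, chunk=25, loop=False):
    """Stream a (n_channels, n_samples) array into SQLite in timed chunks."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS neural_data "
                "(ts REAL, channel INTEGER, value REAL)")
    n_samples = data.shape[1]
    while True:
        for start in range(0, n_samples, chunk):
            block = data[:, start:start + chunk]
            t0 = time.time()
            rows = [(t0 + j / fs, ch, float(block[ch, j]))
                    for ch in range(block.shape[0])
                    for j in range(block.shape[1])]
            con.executemany("INSERT INTO neural_data VALUES (?, ?, ?)", rows)
            con.commit()
            time.sleep(chunk / fs)  # pace writes at real-time speed
        if not loop:                # loop=True mimics the default looping replay
            break
    con.close()
```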
Run replay mode:

```
cd backend
pip install -r requirements.txt
python app.py
```

Notes:
- The first dataset download can take time because MOABB fetches the source files on demand
- Replay loops by default when it reaches the end of the selected run
- If you want the old synthetic stream, switch `DATA_SOURCE` back to `"simulator"`
## Baseline Decoder

The replay pipeline now includes a lightweight real-time baseline decoder.
- Training happens offline at startup from the same replay dataset/run used for streaming
- The first baseline is a left-vs-right motor imagery classifier using simple windowed EEG features
- Each rolling window computes alpha bandpower, beta bandpower, RMS, variance, and log power for the 4 dashboard channels
- A Linear Discriminant Analysis classifier predicts the latest class and confidence
- Predictions are stored in SQLite and exposed through `GET /prediction`
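A rough sketch of the per-window feature vector and LDA step, using SciPy's Welch PSD for the bandpowers. The band edges, sample rate, and feature ordering here are assumptions, not the project's exact choices.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed replay sample rate in Hz

def bandpower(x, fs, lo, hi):
    """Integrate the Welch PSD between lo and hi Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

def window_features(window, fs=FS):
    """window: (n_channels, n_samples) -> 5 features per channel."""
    feats = []
    for ch in window:
        power = np.mean(ch ** 2)
        feats += [bandpower(ch, fs, 8, 12),      # alpha bandpower
                  bandpower(ch, fs, 13, 30),     # beta bandpower
                  float(np.sqrt(power)),         # RMS
                  float(np.var(ch)),             # variance
                  float(np.log(power + 1e-12))]  # log power
    return np.asarray(feats)

# Offline: clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
# Online:  label = clf.predict([window_features(w)])[0]
#          conf  = clf.predict_proba([window_features(w)]).max()
```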
Default decoding settings in `backend/config.py`:

```
DECODER_WINDOW_SECONDS = 1.0
DECODER_STEP_SECONDS = 0.5
DECODER_TMIN_SECONDS = 0.5
DECODER_TMAX_SECONDS = 3.5
```

With a 1.0 s window and a 0.5 s step, consecutive decoding windows overlap by 50%.
Run the decoder:

```
cd backend
python -m pip install -r requirements.txt
python app.py
```

Notes:
- Decoder output is only available in replay mode
- This is a baseline classifier for later comparison, not a production-ready BCI decoder
- Predictions update in near real time as replayed samples fill the rolling buffer
## Future Work

- Real hardware integration
- Advanced artifact classifiers
- User annotations
- Long-term trend analytics

