MetaMan helps you manage neuroscience project metadata and keep raw/processed data in a clean hierarchy:
```
data_root/
  raw/
    <project>/<experiment>/<subject>/<session>/...
  processed/
    <project>/<experiment>/<subject>/<session>/...
```
It is built for fast navigation, safe copy workflows, metadata consistency, and reproducible structure.
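The hierarchy above can be sketched as a small path builder. The `session_dir` helper and its argument names are illustrative, not MetaMan's actual API:

```python
from pathlib import Path

def session_dir(data_root: str, tier: str, project: str,
                experiment: str, subject: str, session: str) -> Path:
    """Build the canonical session directory for 'raw' or 'processed' data."""
    if tier not in ("raw", "processed"):
        raise ValueError(f"tier must be 'raw' or 'processed', got {tier!r}")
    return Path(data_root) / tier / project / experiment / subject / session

# Example: where a raw session would live under /data
print(session_dir("/data", "raw", "projA", "exp01", "sub-01", "ses-01"))
```

Keeping path construction in one function is what makes the layout reproducible: every tool that writes into the tree goes through the same rule.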
```
pip install -r requirements.txt
python run_app.py
```

- Browse project -> experiment -> subject -> session
- View and edit metadata at all hierarchy levels
- Load subject metadata from CSV for one or multiple subjects
- Copy/open paths quickly
- Create and update recording/session metadata
- Navigate existing project hierarchy via dropdowns
- Update file list and metadata triplet outputs (json/csv/h5)
- Track preprocessing steps and completion status
- Store step parameters and comments
- Import parameters from CSV/JSON
- Attach per-step results folders
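A per-step record covering the points above (completion status, parameters, comments, results folder) could be shaped like this. The `StepRecord` class is a hypothetical sketch, not MetaMan's real schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class StepRecord:
    """Illustrative shape for one preprocessing step (not MetaMan's schema)."""
    name: str
    completed: bool = False
    parameters: dict = field(default_factory=dict)
    comment: str = ""
    results_folder: str = ""

step = StepRecord(
    name="spike_sorting",
    completed=True,
    parameters={"sorter": "kilosort", "threshold": 4.5},
    comment="re-run after channel map fix",
    results_folder="processed/projA/exp01/sub-01/ses-01/spike_sorting",
)

# Round-trip through JSON, as a per-step log file might do
serialized = json.dumps(asdict(step), indent=2)
restored = StepRecord(**json.loads(serialized))
```

The JSON round-trip is also how parameters imported from CSV/JSON can be merged into an existing step record without losing fields.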
- Load metadata plans (csv/tsv/xlsx)
- Map columns (`subject_id`, `session_id`, `trial_id`, custom fields)
- Match files using deterministic keys
- Scan multiple raw and processed source roots
- Dry run by default (safe mode)
- Execute copy with overwrite policy controls
- Generate match report, run log, and session/subject metadata outputs
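Deterministic matching boils down to deriving the same key from a plan row and from a scanned file path. The `match_key` helper below is an illustrative sketch; the normalization (strip + lowercase) is an assumption, and MetaMan's real key derivation may differ:

```python
def match_key(subject_id: str, session_id: str, trial_id: str = "") -> str:
    """Build a deterministic, order-stable key for plan-to-file matching.

    Normalizing case and whitespace here is an assumption for this sketch.
    """
    parts = [subject_id, session_id] + ([trial_id] if trial_id else [])
    return "|".join(p.strip().lower() for p in parts)

# A plan row and a scanned file should reduce to the same key
plan_key = match_key(" Sub-01 ", "SES-01", "trial_03")
file_key = match_key("sub-01", "ses-01", "trial_03")
```

Because the key is a pure function of its inputs, re-running the same plan against the same source roots reproduces the same matches, which is what makes the match report meaningful.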
- Manual backup to Server, External HDD, or Both
- Scheduled daily backups per project
- Optional experiment-level backup selection
- Last-used backup roots and schedules persisted
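Resolving the Server / External HDD / Both choice into concrete target roots might look like the following. The function name and root arguments are placeholders standing in for the persisted last-used paths:

```python
def backup_roots(choice: str, server_root: str, hdd_root: str) -> list:
    """Map a backup destination choice to the list of target roots.

    The choice labels mirror the UI options named above; the roots are
    hypothetical stand-ins for the persisted settings.
    """
    targets = {
        "Server": [server_root],
        "External HDD": [hdd_root],
        "Both": [server_root, hdd_root],
    }
    if choice not in targets:
        raise ValueError(f"unknown backup destination: {choice!r}")
    return targets[choice]
```

A scheduled daily backup per project would then just iterate these roots for each selected experiment.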
MetaMan writes:
- `experiment_plan_normalized.csv`
- `match_report.csv`
- `run_log.txt`
- `subject_metadata.csv` and `subject_metadata.h5`
- `session_metadata.csv` and `session_metadata.h5`
Output roots follow:
```
target_raw_root/<project>/<experiment>/...
target_processed_root/<project>/<experiment>/...
```
- Safe by default: dry run + no blind overwrite
- Transparent operations: logs, preview tables, match reports
- Responsive UI: background worker threads for long operations
- Deterministic matching: reproducible plan-to-file mapping
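The "safe by default" principle can be sketched as a copy planner that only reports actions unless execution is explicitly requested. This is a minimal illustration of the idea, not MetaMan's actual `io_ops` implementation:

```python
import shutil
from pathlib import Path

def copy_plan(pairs, execute=False, overwrite=False):
    """Preview (default) or execute a list of (src, dst) copies.

    Returns one action string per pair. Nothing touches disk unless
    execute=True, and existing targets are skipped unless overwrite=True.
    """
    actions = []
    for src, dst in pairs:
        dst = Path(dst)
        if dst.exists() and not overwrite:
            actions.append(f"SKIP (exists): {dst}")
            continue
        actions.append(f"COPY: {src} -> {dst}")
        if execute:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
    return actions
```

The returned action list doubles as the preview table in dry-run mode and as the run log once executed.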
```
MetaMan/
  main.py
  config.py
  state.py
  io_ops.py
  tabs/
    navigation_tab.py
    recording_tab.py
    preprocessing_tab.py
    data_reorganizer_tab.py
  services/
    data_reorganizer.py
    file_scanner.py
    server_sync.py
    search_service.py
```
- Keep one canonical `data_root` with `raw/` and `processed/` subfolders.
- Use the Data reorganizer in dry run first, then execute.
- Save/load reorganizer configs for recurring pipelines.
- Prefer explicit `session_id` in plans when possible.
Install dependencies:

```
pip install -r requirements.txt
```

MetaMan now falls back to a safe local root (`~/MetaManData`).
You can also set your preferred root from File -> Set Data Root....
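The fallback behavior could be implemented along these lines. The `resolve_data_root` function and its `configured` argument are hypothetical; `configured` stands in for whatever File -> Set Data Root... persisted:

```python
from pathlib import Path

def resolve_data_root(configured=None) -> Path:
    """Return the configured data root, or fall back to ~/MetaManData.

    The fallback path matches the safe local root named above.
    """
    if configured:
        return Path(configured).expanduser()
    return Path.home() / "MetaManData"
```

Falling back to a home-directory root keeps the app usable on a fresh machine while still honoring an explicit setting when one exists.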
MetaMan turns scattered files and ad-hoc metadata into a structure you can trust.