A feature extractor for lifelong learning from the event-based data. See https://github.com/VadymV/events_lifelong_learning
2021 Vadym Gryshchuk (vadym.gryshchuk@protonmail.com)
See the license file for details.
We study lifelong learning from data generated by event cameras. Contemporary methods for incremental learning are predominantly based on frame-based data recorded by conventional shutter cameras. An event camera offers a high dynamic range, low power consumption, and high temporal resolution, making it suitable for dynamic environments in which knowledge must be accumulated incrementally. We propose an architecture for lifelong learning that is composed of two modules: a feature extractor and an incremental learner. The feature extractor is implemented as a self-supervised sparse convolutional neural network that processes event-based data. The incremental learner uses a habituation-based method that works in tandem with other existing techniques to mitigate catastrophic forgetting. The conducted experiments show that our proposed method is capable of effective incremental learning without forgetting previously learned experiences.
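For intuition only, here is a loudly hypothetical sketch of one way habituation could gate incremental updates: a per-class exposure count damps the update strength for frequently seen classes, so novel experiences change the model more than well-rehearsed ones. The class name, decay rule, and `tau` parameter are illustrative assumptions, not the mechanism used in this repository.

```python
import math
from collections import defaultdict


class HabituationGate:
    """Hypothetical sketch: per-class habituation that dampens updates
    for frequently seen classes (NOT the exact mechanism of the paper)."""

    def __init__(self, tau=10.0):
        self.tau = tau                  # decay constant (illustrative)
        self.counts = defaultdict(int)  # number of exposures per class label

    def update_strength(self, label):
        """Return a factor in (0, 1]: 1.0 for a novel class,
        decreasing monotonically as the class is seen repeatedly."""
        self.counts[label] += 1
        return math.exp(-(self.counts[label] - 1) / self.tau)
```

A novel class gets the full update strength of 1.0, while each repeated exposure shrinks it exponentially.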
We test the following methods that can be used for the feature extraction module:
- Phased LSTM trained in a supervised way
- SCNN trained in a supervised way
- SCNN trained in a self-supervised way

However, we utilize only the SCNN trained in a self-supervised way as the feature extractor for the lifelong learning architecture.
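The self-supervised training uses the method name `simclr` in the configuration, which suggests SimCLR-style contrastive learning. For intuition, here is a minimal, dependency-free sketch of the NT-Xent contrastive loss that SimCLR is built on; the actual implementation in the repository will differ.

```python
import math


def cosine(u, v):
    """Cosine similarity between two vectors given as plain Python lists."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def nt_xent(z, temperature=0.5):
    """NT-Xent (SimCLR) loss for 2N embeddings, where z[2k] and z[2k+1]
    are two augmented views of the same sample. Returns the mean loss."""
    n = len(z)
    losses = []
    for i in range(n):
        j = i + 1 if i % 2 == 0 else i - 1  # index of the positive pair
        denom = sum(math.exp(cosine(z[i], z[k]) / temperature)
                    for k in range(n) if k != i)
        pos = math.exp(cosine(z[i], z[j]) / temperature)
        losses.append(-math.log(pos / denom))
    return sum(losses) / n
```

The loss is lower when the two views of each sample map to similar embeddings and the other samples map elsewhere, which is exactly the signal that lets the SCNN learn features without labels.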
The following figure illustrates the architecture.
This repository implements the bottom module of the architecture for lifelong learning.
- Create the conda environment: `conda env create --file env.yml`
- Clone https://github.com/facebookresearch/SparseConvNet and run `pip install SparseConvNet/`
- N-Caltech101 can be downloaded here: http://rpg.ifi.uzh.ch/datasets/gehrig_et_al_iccv19/N-Caltech101.zip. Extract the files into the folder `data`. The following folder structure is expected: `data` -> `N-Caltech101` -> `[training, testing, validation]`.
- N-Caltech256 can be downloaded here: https://dgyblog.com/projects-term/dvs-dataset.html. Only the file `INI_Caltech256_10fps_20160424.hdf5` is needed. Place this file into the folder `data` and run `create_ncaltech_dataset.py`.
- N-MNIST can be downloaded here: https://www.garrickorchard.com/datasets. Extract the files into the folder `data`. The following folder structure is expected: `data` -> `N-MNIST` -> `[training, testing]`; rename the folders accordingly.
- For N-Caltech256-12, follow the steps described for N-Caltech256, but set the variable `DATASET` in `create_ncaltech_dataset.py` to `N-Caltech12`.
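A quick way to verify that the datasets were placed correctly is a small layout check. This is an illustrative sketch, not part of the repository; the `missing_splits` helper and the `EXPECTED` table are hypothetical and only encode the folder structure described above.

```python
from pathlib import Path

# Expected split folders per dataset, as described in the setup steps above.
EXPECTED = {
    "N-Caltech101": ["training", "testing", "validation"],
    "N-MNIST": ["training", "testing"],
}


def missing_splits(data_root, dataset):
    """Return the expected split folders that are absent under
    data_root/dataset, e.g. data/N-MNIST/training."""
    root = Path(data_root) / dataset
    return [split for split in EXPECTED[dataset] if not (root / split).is_dir()]
```

Running `missing_splits("data", "N-MNIST")` should return an empty list once the folders are renamed as expected.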
First, have a look at `settings.yaml` to understand the parameters.
- In `settings.yaml`, set the method `name` to `simclr` and the method `backbone` to `scnn`. Adjust other values to your needs.
- Train a feature extractor: `python train_feature_extractor.py`. A model will be saved to the folder `checkpoint`.
- Train a linear evaluator. In `settings.yaml`, set a path to the saved checkpoint and change the name of the method to `classifier`. Run `python run_linear_evaluation_simclr.py`.
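Based on the parameter names mentioned in these steps, a `settings.yaml` fragment could look like the sketch below. The nesting and any keys beyond `name`, `backbone`, and `nr_events_window` are assumptions; check the file shipped with the repository for the real layout.

```yaml
# Hypothetical fragment -- only `name`, `backbone`, and `nr_events_window`
# are mentioned in this README; nesting and other keys are assumptions.
method:
  name: simclr         # simclr | classifier
  backbone: scnn       # scnn | plstm
nr_events_window: 25000  # set to 0 when the backbone is plstm (value illustrative)
```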
- In `settings.yaml`, set the method `name` to `classifier` and the method `backbone` to `scnn` or `plstm`. If `plstm` is selected, set `nr_events_window` to zero. Adjust other values to your needs.
- Train a feature extractor: `python train_feature_extractor.py`. A model will be saved to the folder `checkpoint`.
- Extract features. In `settings.yaml`, set a path to the saved checkpoint and run `python extract_features.py`.
The provided code uses code from other projects (see the header of each file for more information):
