ADAM-Lab-GW/events_feature_extractor
Lifelong Learning from Event-based Data

A feature extractor for lifelong learning from event-based data. See https://github.com/VadymV/events_lifelong_learning

2021 Vadym Gryshchuk (vadym.gryshchuk@protonmail.com)

See the License

We study lifelong learning from data generated by event cameras. Contemporary methods for incremental learning are predominantly based on frame-based data recorded by conventional shutter cameras. An event camera offers a high dynamic range, low power consumption, and high temporal resolution, making it suitable for dynamic environments in which knowledge must be accumulated incrementally. We propose an architecture for lifelong learning that is composed of two modules: a feature extractor and an incremental learner. The feature extractor is a self-supervised sparse convolutional neural network that processes event-based data. The incremental learner uses a habituation-based method that works in tandem with existing techniques to mitigate catastrophic forgetting. Our experiments show that the proposed method is capable of effective incremental learning without forgetting previously learned experiences.

We test the following methods that can be used for a feature extraction module:

  • Phased LSTM trained in a supervised way
  • SCNN trained in a supervised way
  • SCNN trained in a self-supervised way

However, we utilize only the SCNN trained in a self-supervised way as the feature extractor for the lifelong learning architecture.

The following figure illustrates the architecture.

Architecture

This repository implements the bottom module (the feature extractor) of the architecture for Lifelong Learning.

Installation

  1. conda env create --file env.yml
  2. Clone https://github.com/facebookresearch/SparseConvNet and run pip install SparseConvNet/

Instructions for getting datasets

  1. N-Caltech101 can be downloaded here: http://rpg.ifi.uzh.ch/datasets/gehrig_et_al_iccv19/N-Caltech101.zip. Extract the files into the folder data. The following folder structure is expected: data -> N-Caltech101 -> [training, testing, validation]
  2. N-Caltech256 can be downloaded here: https://dgyblog.com/projects-term/dvs-dataset.html. Only the file INI_Caltech256_10fps_20160424.hdf5 is needed. Place this file into the folder data and run create_ncaltech_dataset.py
  3. N-MNIST can be downloaded here: https://www.garrickorchard.com/datasets. Extract the files into the folder data. The following folder structure is expected: data -> N-MNIST -> [training, testing], so rename the extracted folders accordingly
  4. For N-Caltech256-12, follow the steps described in 2., but set the variable DATASET in create_ncaltech_dataset.py to N-Caltech12
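Since a wrong folder layout is a common source of errors, a small sanity check can verify it before training. This is only a sketch; the expected split folders are taken from the steps above (the layout produced by create_ncaltech_dataset.py may differ):

```python
from pathlib import Path

# Expected split folders per dataset, as described in the steps above.
EXPECTED_SPLITS = {
    "N-Caltech101": ("training", "testing", "validation"),
    "N-MNIST": ("training", "testing"),
}

def missing_splits(data_root, dataset):
    """Return the split folders that are missing under data_root/dataset."""
    root = Path(data_root) / dataset
    return [s for s in EXPECTED_SPLITS[dataset] if not (root / s).is_dir()]
```

An empty result means the dataset folder matches the expected structure.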

First, look at settings.yaml to understand the parameters.

Instructions for Self-Supervised Learning

  1. In settings.yaml, set the method name to simclr and the method backbone to scnn. Adjust other values to your needs.
  2. Train a feature extractor: python train_feature_extractor.py. A model will be saved to the folder checkpoint
  3. Train a linear evaluator. In settings.yaml, set the path to the saved checkpoint and change the method name to classifier. Run python run_linear_evaluation_simclr.py
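The exact keys in settings.yaml may differ; the following is only an illustrative sketch of the fields the steps above refer to (key names are assumptions):

```yaml
method:
  name: simclr      # step 1; change to "classifier" for linear evaluation (step 3)
  backbone: scnn
checkpoint_path: checkpoint/model.ckpt   # set for step 3; path is illustrative
```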

Instructions for Supervised Learning

  1. In settings.yaml, set the method name to classifier and the method backbone to scnn or plstm. If plstm is selected, set `nr_events_window` to zero. Adjust other values to your needs.
  2. Train a feature extractor: python train_feature_extractor.py. A model will be saved to the folder checkpoint
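With the same caveat that the key names are illustrative, the supervised configuration from the steps above might look like:

```yaml
method:
  name: classifier
  backbone: plstm    # or scnn
nr_events_window: 0  # must be zero when the backbone is plstm
```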

Instructions for the extraction of features

  1. Extract features. In settings.yaml, set the path to the saved checkpoint. Run python extract_features.py
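Downstream modules (e.g. the incremental learner) can then load the extracted features. A minimal sketch, assuming the features and labels are saved as .npy arrays; the actual file names and format produced by extract_features.py may differ:

```python
import numpy as np

def load_features(features_path, labels_path):
    """Load extracted features and labels from .npy files (file paths are assumptions)."""
    features = np.load(features_path)  # expected shape: (n_samples, feature_dim)
    labels = np.load(labels_path)      # expected shape: (n_samples,)
    assert len(features) == len(labels), "sample count mismatch"
    return features, labels
```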

The provided code builds on code from other projects; see the header of each file for more information.
