snntorch.spikevision.spikedata

All datasets are subclasses of torch.utils.data.Dataset, i.e., they have __getitem__ and __len__ methods implemented. Hence, they can all be passed to a torch.utils.data.DataLoader, which can load multiple samples in parallel using torch.multiprocessing workers. For example:

from torch.utils.data import DataLoader
from snntorch.spikevision import spikedata

nmnist_data = spikedata.NMNIST('path/to/nmnist_root/')
data_loader = DataLoader(nmnist_data,
                         batch_size=4,
                         shuffle=True,
                         num_workers=args.nThreads)

For further examples on each dataset and its use, please refer to the examples.
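As a quick illustration of the Dataset protocol these classes implement, any object with __getitem__ and __len__ can be consumed by a DataLoader. The ToyDataset below is a hypothetical stand-in, not part of snntorch:

```python
class ToyDataset:
    """Minimal sketch of the Dataset protocol: __len__ and __getitem__."""

    def __init__(self, n_samples):
        self.n_samples = n_samples

    def __len__(self):
        return self.n_samples

    def __getitem__(self, idx):
        # the real spikedata classes return a (spike_tensor, target) pair;
        # placeholders are used here
        data = [idx] * 4       # placeholder for a spike tensor
        target = idx % 10      # placeholder for a class label
        return data, target

ds = ToyDataset(100)
print(len(ds))          # 100
data, target = ds[42]
print(target)           # 2
```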

NMNIST

class snntorch.spikevision.spikedata.nmnist.NMNIST(root, train=True, transform=None, target_transform=None, download_and_create=True, num_steps=300, dt=1000)[source]

NMNIST Dataset.

The Neuromorphic-MNIST (NMNIST) dataset is a spiking version of the original frame-based MNIST dataset.

The downloaded and extracted dataset consists of the same 60,000 training and 10,000 testing samples as the MNIST dataset, captured at the same visual scale as the original MNIST dataset (28x28 pixels). For compatibility with the .hdf5 conversion process, the dataset is reduced so that the number of samples in each class is balanced to the class with the fewest samples (training: 5421 per class, test: 892 per class).

Number of classes: 10

Number of train samples: 54210

Number of test samples: 8920

Dimensions: [num_steps x 2 x 32 x 32]

  • num_steps: time-dimension of event-based footage

  • 2: number of channels (on-spikes for luminance increasing; off-spikes for luminance decreasing)

  • 32x32: W x H spatial dimensions of event-based footage

For further reading, see:

Orchard, G.; Cohen, G.; Jayawant, A.; and Thakor, N. “Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades”, Frontiers in Neuroscience, vol.9, no.437, Oct. 2015.

Example:

from snntorch.spikevision import spikedata

train_ds = spikedata.NMNIST("data/nmnist", train=True,
                            num_steps=300, dt=1000)
test_ds = spikedata.NMNIST("data/nmnist", train=False,
                           num_steps=300, dt=1000)

# by default, each time step is integrated over 1ms, or dt=1000 microseconds
# dt can be changed to integrate events over a varying number of time steps
# Note that num_steps should be scaled inversely by the same factor

train_ds = spikedata.NMNIST("data/nmnist", train=True,
                            num_steps=150, dt=2000)
test_ds = spikedata.NMNIST("data/nmnist", train=False,
                           num_steps=150, dt=2000)
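The inverse scaling between num_steps and dt noted in the comments above follows from the fixed recording duration: num_steps × dt must cover the same total window in microseconds. A quick arithmetic check (plain Python, no snntorch required):

```python
# Total duration covered by the default NMNIST settings:
duration_us = 300 * 1000       # num_steps=300, dt=1000 -> 300,000 us (300 ms)

# Doubling dt halves num_steps while covering the same duration:
num_steps = duration_us // 2000
print(num_steps)               # 150
```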

The dataset can also be manually downloaded, extracted, and placed into root, which allows the dataloader to skip the download and proceed straight to generating the .hdf5 file.

Direct Download Links:

Parameters:
  • root (string) – Root directory of dataset where Train.zip and Test.zip exist.

  • train (bool, optional) – If True, creates dataset from Train.zip, otherwise from Test.zip

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. By default, a pre-defined set of transforms is applied to all samples to convert them into a time-first tensor with correct orientation.

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • download_and_create (bool, optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • num_steps (int, optional) – Number of time steps, defaults to 300

  • dt (int, optional) – Duration over which event time stamps are integrated per time step, in microseconds; defaults to 1000

Dataloader adapted from torchneuromorphic originally by Emre Neftci and Clemens Schaefer.

The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license. All rights remain with the original authors.

DVSGesture

class snntorch.spikevision.spikedata.dvs_gesture.DVSGesture(root, train=True, transform=None, target_transform=None, download_and_create=True, num_steps=None, dt=1000, ds=None, return_meta=False, time_shuffle=False)[source]

DVS Gesture Dataset.

The data was recorded using a DVS128. The dataset contains 11 hand gestures from 29 subjects under 3 illumination conditions.

Number of classes: 11

Number of train samples: 1176

Number of test samples: 288

Dimensions: [num_steps x 2 x 128 x 128]

  • num_steps: time-dimension of event-based footage

  • 2: number of channels (on-spikes for luminance increasing; off-spikes for luminance decreasing)

  • 128x128: W x H spatial dimensions of event-based footage

For further reading, see:

A. Amir, B. Taba, D. Berg, T. Melano, J. McKinstry, C. Di Nolfo, T. Nayak, A. Andreopoulos, G. Garreau, M. Mendoza, J. Kusnitz, M. Debole, S. Esser, T. Delbruck, M. Flickner, and D. Modha, “A Low Power, Fully Event-Based Gesture Recognition System,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017.

Example:

from snntorch.spikevision import spikedata

train_ds = spikedata.DVSGesture("data/dvsgesture", train=True,
                                num_steps=500, dt=1000)
test_ds = spikedata.DVSGesture("data/dvsgesture", train=False,
                               num_steps=1800, dt=1000)

# by default, each time step is integrated over 1ms, or dt=1000 microseconds
# dt can be changed to integrate events over a varying number of time steps
# Note that num_steps should be scaled inversely by the same factor

train_ds = spikedata.DVSGesture("data/dvsgesture", train=True,
                                num_steps=250, dt=2000)
test_ds = spikedata.DVSGesture("data/dvsgesture", train=False,
                               num_steps=900, dt=2000)
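Conceptually, the loader assigns each raw DVS event (timestamp, polarity, x, y) to a time bin of width dt when building the [num_steps x 2 x 128 x 128] tensor described above. The sketch below illustrates that binning with a few hand-made events; it is an illustration of the idea, not snntorch's actual implementation:

```python
num_steps, dt = 500, 1000   # default DVSGesture train settings

# hypothetical raw events: (timestamp_us, polarity, x, y)
events = [
    (250, 1, 10, 20),       # on-spike at t=250 us
    (1500, 0, 10, 20),      # off-spike at t=1500 us
    (499999, 1, 64, 64),    # last event inside the 500 ms window
]

# accumulate spike counts keyed by (time_bin, polarity, x, y)
frames = {}
for t, p, x, y in events:
    step = t // dt               # which time bin this event lands in
    if step < num_steps:         # drop events past the window
        frames[(step, p, x, y)] = frames.get((step, p, x, y), 0) + 1

print(frames[(0, 1, 10, 20)])    # 1  (t=250 us lands in bin 0)
print(frames[(1, 0, 10, 20)])    # 1  (t=1500 us lands in bin 1)
```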

The dataset can also be manually downloaded, extracted, and placed into root, which allows the dataloader to skip the download and proceed straight to generating the .hdf5 file.

Direct Download Links:

Parameters:
  • root (string) – Root directory of dataset.

  • train (bool, optional) – If True, creates dataset from training set of dvsgesture, otherwise test set.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. By default, a pre-defined set of transforms is applied to all samples to convert them into a time-first tensor with correct orientation.

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • download_and_create (bool, optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • num_steps (int, optional) – Number of time steps, defaults to 500 for train set, or 1800 for test set

  • dt (int, optional) – Duration over which event time stamps are integrated per time step, in microseconds; defaults to 1000

  • ds (int, optional) – Rescaling factor, defaults to 1.

  • return_meta (bool, optional) – Option to return metadata, defaults to False

  • time_shuffle (bool, optional) – Option to randomize start time of dataset, defaults to False

Dataloader adapted from torchneuromorphic originally by Emre Neftci and Clemens Schaefer.

The dataset is released under a Creative Commons Attribution 4.0 license. All rights remain with the original authors.

SHD

class snntorch.spikevision.spikedata.shd.SHD(root, train=True, transform=None, target_transform=None, download_and_create=True, num_steps=1000, ds=1, dt=1000)[source]

Spiking Heidelberg Digits Dataset.

Spikes in 700 input channels were generated using an artificial cochlea model listening to studio recordings of spoken digits from 0 to 9 in both German and English languages.

Number of classes: 20

Number of train samples: 8156

Number of test samples: 2264

Dimensions: [num_steps x 700]

  • num_steps: time-dimension of audio channels

  • 700: number of channels in cochlea model

For further reading, see:

Cramer, B., Stradmann, Y., Schemmel, J., and Zenke, F. (2020). The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 1–14.

Example:

from snntorch.spikevision import spikedata

train_ds = spikedata.SHD("data/shd", train=True)
test_ds = spikedata.SHD("data/shd", train=False)

# by default, each time step is integrated over 1ms, or dt=1000 microseconds
# dt can be changed to integrate events over a varying number of time steps
# Note that num_steps should be scaled inversely by the same factor

train_ds = spikedata.SHD("data/shd", train=True,
                         num_steps=500, dt=2000)
test_ds = spikedata.SHD("data/shd", train=False,
                        num_steps=500, dt=2000)

The dataset can also be manually downloaded, extracted, and placed into root, which allows the dataloader to skip the download and proceed straight to generating the .hdf5 file.

Direct Download Links:

Parameters:
  • root (string) – Root directory of dataset.

  • train (bool, optional) – If True, creates dataset from the SHD training set, otherwise from the test set.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. By default, a pre-defined set of transforms is applied to all samples to convert them into a time-first tensor with correct orientation.

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • download_and_create (bool, optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • num_steps (int, optional) – Number of time steps, defaults to 1000

  • dt (int, optional) – Duration over which event time stamps are integrated per time step, in microseconds; defaults to 1000

  • ds (int, optional) – Rescaling factor, defaults to 1.
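Since target_transform accepts any callable applied to the label, one illustrative use is one-hot encoding the 20 SHD classes. The helper below is a hypothetical sketch, not part of snntorch:

```python
NUM_CLASSES = 20    # 10 digits x 2 languages

def one_hot(target):
    """Convert an integer class label into a one-hot list."""
    vec = [0] * NUM_CLASSES
    vec[target] = 1
    return vec

# would be passed as: spikedata.SHD("data/shd", target_transform=one_hot)
print(one_hot(3)[3])      # 1
print(sum(one_hot(3)))    # 1
```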

Dataloader adapted from torchneuromorphic originally by Emre Neftci.

The dataset is released under a Creative Commons Attribution 4.0 International License. All rights remain with the original authors.