Jester Event-Based Gesture Dataset (Converted Subset)
Description
This dataset is a custom event-based version of a subset of the original Jester hand gesture dataset. Jester is a large-scale, densely labeled video dataset of humans performing predefined hand gestures in front of webcams or laptop cameras. The clips were recorded by crowd workers, and the dataset is widely used for training robust gesture recognition models.
Conversion Process
The RGB video clips from the selected Jester subset were converted into neuromorphic event streams using the v2e (video-to-events) simulator. This process transforms traditional frame-based data into event-based representations that mimic the output of neuromorphic cameras such as DVS or DAVIS sensors. The resulting event data captures per-pixel brightness changes over time and is well suited to Spiking Neural Networks (SNNs) and low-power vision systems.
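As a rough illustration, the Python sketch below drives the v2e command-line tool over a folder of clips. The input and output paths are hypothetical, and the threshold flags shown are illustrative v2e options, not the exact settings used to produce this dataset.

```python
import subprocess
from pathlib import Path

# Hypothetical paths; the actual subset selection and output layout
# used for this dataset are not documented here.
INPUT_DIR = Path("jester_subset")    # folder of RGB .mp4 gesture clips
OUTPUT_DIR = Path("jester_events")   # where event streams will be written

for clip in sorted(INPUT_DIR.glob("*.mp4")):
    out = OUTPUT_DIR / clip.stem
    out.mkdir(parents=True, exist_ok=True)
    # Invoke the v2e CLI; the threshold values below are illustrative,
    # not the settings used to build this dataset.
    subprocess.run(
        [
            "v2e",
            "-i", str(clip),
            "-o", str(out),
            "--dvs_h5", "events.h5",  # write events to HDF5
            "--pos_thres", "0.2",     # ON-event log-intensity threshold
            "--neg_thres", "0.2",     # OFF-event log-intensity threshold
            "--no_preview",           # disable GUI preview windows
            "--overwrite",
        ],
        check=True,
    )
```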
Motivation
Event-based data enables the development and evaluation of energy-efficient, high-speed gesture recognition models. By converting frame-based gesture videos into events, this dataset supports research on applying Spiking Neural Networks and neuromorphic computing techniques to real-time human-computer interaction.
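For a sense of how the converted streams might feed an SNN pipeline, here is a minimal sketch that bins events from an HDF5 file into fixed-duration ON/OFF count frames. The file path, the `events` dataset name, its column order (t, x, y, polarity), microsecond timestamps, and the sensor resolution are all assumptions; check the actual files for their schema.

```python
import h5py
import numpy as np

def events_to_frames(path, height=240, width=320, bin_ms=10.0):
    """Accumulate (t, x, y, polarity) events into fixed-duration
    2-channel frames (OFF / ON counts) for SNN input."""
    with h5py.File(path, "r") as f:
        ev = f["events"][:]          # assumed columns: t, x, y, polarity
    t = (ev[:, 0] - ev[:, 0].min()) / 1e3   # assume microseconds -> ms
    x = ev[:, 1].astype(int)
    y = ev[:, 2].astype(int)
    p = ev[:, 3]
    n_bins = int(np.ceil(t.max() / bin_ms)) or 1
    frames = np.zeros((n_bins, 2, height, width), dtype=np.float32)
    bins = np.minimum((t / bin_ms).astype(int), n_bins - 1)
    chan = (p > 0).astype(int)       # channel 0 = OFF, channel 1 = ON
    np.add.at(frames, (bins, chan, y, x), 1.0)
    return frames

# Example (hypothetical path):
# frames = events_to_frames("jester_events/clip_0001/events.h5")
```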