---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
licenses:
- other
multilinguality:
- monolingual
paperswithcode_id: something-something
pretty_name: Something Something v2
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- other
task_ids:
- other
---

# Dataset Card for Something Something v2

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://developer.qualcomm.com/software/ai-datasets/something-something
- **Repository:** [More Information Needed]
- **Paper:** https://arxiv.org/abs/1706.04261
- **Leaderboard:** https://paperswithcode.com/dataset/something-something
- **Point of Contact:** [More Information Needed]

### Dataset Summary

The Something-Something dataset (version 2) is a collection of 220,847 labeled video clips of humans performing pre-defined, basic actions with everyday objects. It is designed to help machine learning models develop a fine-grained understanding of basic actions and human hand gestures, such as putting something into something, turning something upside down, and covering something with something.
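
The dataset can be loaded with the `datasets` library; a minimal sketch is shown below. The Hub identifier `something-something-v2` is a placeholder assumption, not the confirmed path of this dataset on the Hub.

```python
from datasets import load_dataset

# NOTE: "something-something-v2" is a placeholder identifier (an assumption);
# substitute the actual Hub path of this dataset when loading.
dataset = load_dataset("something-something-v2")

print(dataset)              # DatasetDict with the available splits
print(dataset["train"][0])  # a single example (see "Data Instances" below)
```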

### Supported Tasks and Leaderboards

- `action-classification`: The goal of this task is to classify the action being performed in a video clip. Each clip is annotated with one of 174 pre-defined action classes (e.g., *Putting something into something*). A leaderboard is available on [Papers with Code](https://paperswithcode.com/dataset/something-something).
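
To see how the action labels are encoded, the schema of a split can be printed directly; the sketch below assumes the `dataset` object from the loading example above.

```python
# Assumes `dataset` from the loading sketch in the Dataset Summary.
features = dataset["train"].features
print(features)  # shows every column and how the action labels are encoded

# If the label column turns out to be a ClassLabel feature, integer indices can be
# mapped to readable action names, e.g. (hypothetical column name and encoding):
# features["labels"].feature.int2str(92)
```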


### Languages

The annotations in the dataset are in English.

## Dataset Structure

### Data Instances

```
{
  "video_id": "46GP8",
  "video": "/home/amanpreet_huggingface_co/.cache/huggingface/datasets/downloads/extracted/3f022da5305aaa189f09476dbf7d5e02f6fe12766b927c076707360d00deb44d/46GP8.mp4",
  "subject": "HR43",
  "scene": "Kitchen",
  "quality": 6,
  "relevance": 7,
  "verified": "Yes",
  "script": "A person cooking on a stove while watching something out a window.",
  "objects": ["food", "stove", "window"],
  "descriptions": [
    "A person cooks food on a stove before looking out of a window."
  ],
  "labels": [92, 147],
  "action_timings": [
    [11.899999618530273, 21.200000762939453],
    [0.0, 12.600000381469727]
  ],
  "length": 24.829999923706055
}
```

### Data Fields

- `video_id`: `str` Unique identifier for each video.
- `video`: `str` Path to the video file.
- `subject`: `str` Unique identifier for each subject in the dataset.
- `scene`: `str` One of 15 indoor scenes in the dataset, such as `Kitchen`.
- `quality`: `int` The quality of the video judged by an annotator (7-point scale, 7 = high quality), -100 if missing.
- `relevance`: `int` The relevance of the video to the script judged by an annotator (7-point scale, 7 = very relevant), -100 if missing.
- `verified`: `str` 'Yes' if an annotator successfully verified that the video matches the script, else 'No'.
- `script`: `str` The human-generated script used to generate the video.
- `objects`: `List[str]` Objects that appear in the video.
- `descriptions`: `List[str]` List of descriptions by annotators watching the video.
- `labels`: `List[int]` Multi-label actions found in the video. Indices from 0 to 156.
- `action_timings`: `List[Tuple[float, float]]` Start and end times (in seconds) of each action listed in `labels`.
- `length`: `float` The length of the video in seconds.
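
As an illustration of how these fields fit together, the following sketch grabs one frame from the middle of each labeled action segment with OpenCV. It assumes an `example` dict following the schema above (e.g. `dataset["train"][0]` from the loading sketch earlier); the helper name `middle_frames` is ours, not part of the dataset.

```python
import cv2  # pip install opencv-python

def middle_frames(example):
    """Return (label, frame) pairs, one frame from the midpoint of each action segment."""
    cap = cv2.VideoCapture(example["video"])
    frames = []
    for label, (start, end) in zip(example["labels"], example["action_timings"]):
        midpoint_ms = 1000.0 * (start + end) / 2.0
        cap.set(cv2.CAP_PROP_POS_MSEC, midpoint_ms)  # seek to the middle of the segment
        ok, frame = cap.read()
        if ok:
            frames.append((label, frame))
    cap.release()
    return frames
```

Each returned pair couples an action label index with a representative frame, which is a convenient starting point for sanity-checking the annotations.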


### Data Splits


|             | train |validation| test |
|-------------|------:|---------:|-----:|
|# of examples|168913 |24777     |27157 |
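
If the dataset has been loaded as in the sketch from the Dataset Summary, the split sizes can be checked directly:

```python
# `dataset` is the DatasetDict from the loading sketch in the Dataset Summary.
print(dataset.num_rows)  # {'train': 168913, 'validation': 24777, 'test': 27157}
```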


## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@inproceedings{goyal2017something,
  title={The" something something" video database for learning and evaluating visual common sense},
  author={Goyal, Raghav and Ebrahimi Kahou, Samira and Michalski, Vincent and Materzynska, Joanna and Westphal, Susanne and Kim, Heuna and Haenel, Valentin and Fruend, Ingo and Yianilos, Peter and Mueller-Freitag, Moritz and others},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={5842--5850},
  year={2017}
}
```

### Contributions

Thanks to [@apsdehal](https://github.com/apsdehal) for adding this dataset.