Datasets:
Upload folder using huggingface_hub
- .gitattributes +1 -0
- README.md +66 -0
- claris_curated_dataset.csv +3 -0
- dataset_infos.json +64 -0
- vocabulary.txt +60 -0
.gitattributes
CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+claris_curated_dataset.csv filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,66 @@
---
language:
- en
tags:
- sign language recognition
- emergency response
- computer vision
---

# CLARIS - Critical Emergency Sign Language Dataset

This dataset is a curated subset of the "Google - Isolated Sign Language Recognition" dataset, specifically filtered for the **CLARIS (Clear and Live Automated Response for Inclusive Safety)** project.

## Dataset Description

The primary goal of the CLARIS project is to develop a mobile application that provides a lifeline for the Deaf community during emergencies. This dataset was created to train a proof-of-concept AI model capable of recognizing a vocabulary of critical emergency-related signs.

The data consists of pre-extracted landmark coordinates from video clips of isolated signs. It originates from the [Google - Isolated Sign Language Recognition Kaggle Competition](https://www.kaggle.com/competitions/asl-signs).

## Dataset Structure

The dataset is currently provided as a single CSV file; a Parquet version is planned. Each row represents the coordinates of a single landmark in a single frame of a video sequence.

| Column           | Dtype   | Description                                                       |
| ---------------- | ------- | ----------------------------------------------------------------- |
| `frame`          | int16   | The frame number within the sequence.                             |
| `row_id`         | object  | A unique identifier for the landmark within the frame.            |
| `type`           | object  | The type of landmark (`face`, `left_hand`, `right_hand`, `pose`). |
| `landmark_index` | int16   | The index of the landmark within its type.                        |
| `x`              | float64 | The normalized x-coordinate of the landmark.                      |
| `y`              | float64 | The normalized y-coordinate of the landmark.                      |
| `z`              | float64 | The normalized z-coordinate of the landmark (depth).              |
| `path`           | object  | The path to the original source parquet file for the sequence.    |
| `participant_id` | int64   | A unique identifier for the participant (signer).                 |
| `sequence_id`    | int64   | A unique identifier for the sign sequence.                        |
| `sign`           | object  | The ground truth label for the sign being performed.              |

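For modeling, this long format typically needs to be reshaped into one array per sequence. Below is a minimal sketch (not part of the dataset tooling) that assumes every frame carries the same fixed set of landmark rows, as in the source Kaggle data:

```python
import pandas as pd

# Sketch: reshape one sequence from long format into a
# (num_frames, num_landmarks, 3) array. Column names follow the table above;
# the reshape assumes a constant number of landmark rows per frame.
df = pd.read_csv('claris_curated_dataset.csv')

seq_id = df['sequence_id'].iloc[0]  # pick any sequence
seq = (df[df['sequence_id'] == seq_id]
       .sort_values(['frame', 'type', 'landmark_index']))

num_frames = seq['frame'].nunique()
coords = seq[['x', 'y', 'z']].to_numpy().reshape(num_frames, -1, 3)
print(seq['sign'].iloc[0], coords.shape)
```
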
## Curation Process

To create a focused dataset for our specific use case, we performed a two-step curation process:

1. **Vocabulary Filtering:** We selected **62 signs** deemed most relevant for describing medical, fire, or intruder emergencies.
2. **Participant Filtering:** To create a manageable dataset for rapid prototyping, we constrained the data to sequences from **two distinct participants** who had a balanced distribution of the target signs.

This process resulted in a final dataset containing **1,719 unique sign sequences**, comprising over 37 million landmark rows.

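The same two-step filter can be expressed in a few lines of pandas. This is an illustrative sketch, not the exact project code: it assumes the competition's `train.csv` index (columns `path`, `participant_id`, `sequence_id`, `sign`) and this repository's `vocabulary.txt`, and the participant-selection heuristic shown is a stand-in for the manual balance check described above:

```python
import pandas as pd

# Illustrative sketch of the two-step curation (not the exact project code).
# Assumes the Kaggle competition's train.csv index and this repo's vocabulary.txt.
vocab = set(open('vocabulary.txt').read().split())
train = pd.read_csv('train.csv')

# Step 1: vocabulary filtering.
train = train[train['sign'].isin(vocab)]

# Step 2: participant filtering -- here, the two signers covering the most
# target signs; a stand-in for the manual balance check.
coverage = train.groupby('participant_id')['sign'].nunique().sort_values(ascending=False)
curated = train[train['participant_id'].isin(coverage.index[:2])]
print(curated['sequence_id'].nunique(), 'sequences')
```
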
## Usage

The dataset can be loaded directly with pandas. Once the Parquet versions are published, we recommend them for faster loading times.

```python
import pandas as pd

# Load the full curated dataset (CSV, tracked with Git LFS)
df = pd.read_csv('claris_curated_dataset.csv')

# Once available, the Parquet versions will load faster:
# df = pd.read_parquet('claris_curated_dataset.parquet')
# df_sample = pd.read_parquet('claris_subsample_dataset.parquet')

print(df.head())
```

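The file can also be pulled straight from the Hub with the `datasets` library; a sketch, with `<user>/<repo>` standing in for this dataset's actual repository ID:

```python
from datasets import load_dataset

# Sketch: `datasets` auto-detects the CSV in the repository.
# '<user>/<repo>' is a placeholder for this dataset's repository ID.
ds = load_dataset('<user>/<repo>', split='train')
print(ds[0])
```
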
## Link to Project Notebook

The complete methodology, including data preprocessing, model training, and analysis, can be found in our Kaggle notebook:
https://www.kaggle.com/code/eveelyn/datathon2025-dem
claris_curated_dataset.csv
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:46d6eb5d6732829787388e6c076bd6d43856a75a2ac3416671b5fb939da81f4e
size 5502322614
dataset_infos.json
ADDED
@@ -0,0 +1,64 @@
{
  "claris_emergency_signs": {
    "description": "A curated subset of the Google Isolated Sign Language Recognition dataset, filtered to 62 critical emergency-related signs from two participants.",
    "citation": "@misc{claris_dataset_2024, author={Dama D. Daliman, Evelyn Yosiana, Micky Valentino}, title={CLARIS - Critical Emergency Sign Language Dataset}, year={2024}, publisher={Hugging Face}, url={[Link to your Hugging Face Repo]}}",
    "homepage": "[Link to your Hugging Face Repo]",
    "license": "mit",
    "features": {
      "frame": {
        "dtype": "int16",
        "_type": "Value"
      },
      "row_id": {
        "dtype": "string",
        "_type": "Value"
      },
      "type": {
        "dtype": "string",
        "_type": "Value"
      },
      "landmark_index": {
        "dtype": "int16",
        "_type": "Value"
      },
      "x": {
        "dtype": "float64",
        "_type": "Value"
      },
      "y": {
        "dtype": "float64",
        "_type": "Value"
      },
      "z": {
        "dtype": "float64",
        "_type": "Value"
      },
      "path": {
        "dtype": "string",
        "_type": "Value"
      },
      "participant_id": {
        "dtype": "int64",
        "_type": "Value"
      },
      "sequence_id": {
        "dtype": "int64",
        "_type": "Value"
      },
      "sign": {
        "dtype": "string",
        "_type": "Value"
      }
    },
    "splits": {
      "train": {
        "name": "train",
        "num_bytes": 5120000000,
        "num_examples": 1719,
        "dataset_name": "claris_emergency_signs"
      }
    },
    "download_size": 5120000000,
    "dataset_size": 5120000000
  }
}
vocabulary.txt
ADDED
@@ -0,0 +1,60 @@
bad
can
close
cry
cut
down
drop
fall
fast
find
give
go
have
haveto
hear
hide
high
hot
jump
look
loud
mad
no
not
now
open
owie
quiet
see
sick
stuck
talk
time
touch
up
wait
will
yes
arm
child
dad
eye
face
fireman
head
hesheit
man
mom
person
police
car
glasswindow
home
outside
room
stairs
water
where
who
why