By accessing this dataset, you are agreeing to the terms laid out in the license. In particular, you agree to:
- Only use the data for non-commercial research and educational purposes
- Not redistribute the data
- Not attempt to extract personally identifiable information from the data or otherwise misuse it
Overview
The MOSLA dataset ("MOSLA") is a longitudinal, multimodal, multilingual, and controlled dataset created by inviting participants to learn one of three target languages (Arabic, Spanish, and Chinese) from scratch over a span of two years, exclusively through online instruction, and recording every lesson using Zoom. The dataset is semi-automatically annotated with speaker/language IDs and transcripts by both human annotators and fine-tuned state-of-the-art speech models. Because learners had no previous exposure to the language they were learning, and their only exposure to it over this period was these online lessons, MOSLA provides a complete picture of the first two years of language acquisition for its participants.
Concretely, MOSLA includes three types of data:
- Video recordings of all of the language lessons
- Automatically generated speech annotations for all of the language lessons
- Human speech annotations for selected five-minute segments of some lessons
For detailed information, please see the paper; for study background and ethical considerations, see the datasheet.
The raw data can also be cloned using git and git-lfs. If you are using Python, we recommend the 🤗 Datasets library, but we understand that not everyone uses Python.
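If you do use Python but prefer not to set up git-lfs, here is a minimal sketch using huggingface_hub's snapshot_download (our suggestion, not an official workflow for this dataset; the allow_patterns filter assumes the annotation files are the .jsonl files named in this card):
from huggingface_hub import snapshot_download

# download only the annotation files; drop allow_patterns to mirror
# the full repository, videos included (about 71GB)
snapshot_download(repo_id="octanove/mosla", repo_type="dataset",
                  allow_patterns=["*.jsonl"])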
Data Volume
There are approximately 270 hours of video data, 202,000 machine-generated annotation utterances, and 7,700 human-annotated utterances. The video data takes up about 71GB in total.
Dataset Structure
Configurations / Data Groupings
MOSLA has four configurations: three are different mixes of human and/or machine annotations, and one is simply a record of what data was presented to humans for annotation. Concretely, they can be loaded as follows:
from datasets import load_dataset
# the "merged" data is the default configuration. it contains machine annotations for all sections of video EXCEPT
# those annotated by humans, and human annotations for the sections of video annotated by humans
# merged.jsonl
merged = load_dataset('octanove/mosla')
# the human annotation data, containing only the annotations made by bilingual human annotators
# human.jsonl
human = load_dataset('octanove/mosla', 'human')
# the machine annotation data, containing machine annotations for ALL sections of video and not human annotations
# machine.jsonl
machine = load_dataset('octanove/mosla', 'machine')
# annotation record data, where each instance represents a five-minute chunk that was shown to humans for annotation
# this data is useful to understand which sections of the videos were shown to human annotators, but not much else
# annotation_records.jsonl
annotation_records = load_dataset('octanove/mosla', 'annotation_records')
Instance Schema
Instances look like this:
{
  "start": 1748.4499511719,  # beginning of the annotated utterance in the video, in seconds
  "end": 1758.5999755859,  # end of the annotated utterance in the video, in seconds
  "video": "20210918.mp4",  # name of the annotated video file
  "date": 1631923200000,  # date of the lesson (Unix epoch, in milliseconds)
  "lesson_language": "ara",  # language being taught in the lesson
  "speaker": "s",  # the speaker of the utterance
  "language": "eng",  # the primary language of the utterance
  "text": "Oh okay, oh, when you see يلعبوا first, then you don't have to change it?",  # utterance content
  "memo": "",  # annotator memo, if any was provided
  "human_annotation": true  # true if the annotation was produced by a human annotator
}
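As a quick illustration of working with these fields (assuming the default configuration loads under a 'train' split, as 🤗 Datasets usually does for a single JSONL file):
from datasets import load_dataset

merged = load_dataset('octanove/mosla')['train']

# each instance spans [start, end] in seconds, so durations are a simple difference
durations = [ex['end'] - ex['start'] for ex in merged]
print(f"mean utterance length: {sum(durations) / len(durations):.2f}s")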
Note that annotation records are simpler, and have only the following fields: ("start", "end", "lesson_language", "video").
Video Files
By default, the 🤗 dataset includes only the annotations, but the video files are present in the git repository and can be downloaded like this:
from huggingface_hub import hf_hub_download

# download a single lesson video by its path in the repository
hf_hub_download(repo_id="octanove/mosla", repo_type="dataset", filename="video/zho/20210205.mp4")

# or fetch the video corresponding to a specific example
def download_video(ex: dict) -> str:
    # hf_hub_download returns the local path of the cached file
    return hf_hub_download(repo_id="octanove/mosla", repo_type="dataset",
                           filename=f"video/{ex['lesson_language']}/{ex['video']}")
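For example, to fetch the video behind a particular annotation (again assuming a 'train' split):
from datasets import load_dataset

merged = load_dataset('octanove/mosla')['train']

# download_video caches the file locally and returns its path
local_path = download_video(merged[0])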
Test/Train Splits
Right now there are no officially defined test/train splits, but we may add these in the future.
Use Cases / Q&A
Below we address some common use cases and questions that may not be clear just from looking at the data.
I just want to analyze text data - what should I use?
You should probably use the merged machine/human annotations in merged.jsonl. This is the data used for analysis in the paper: it contains human annotations for the sections that were annotated by humans, and machine-generated annotations for the rest.
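As a minimal sketch of a text-side analysis (assuming the 'train' split), counting utterances by their primary language:
from collections import Counter
from datasets import load_dataset

merged = load_dataset('octanove/mosla')['train']

# tally the primary language of each utterance across the merged annotations
language_counts = Counter(ex['language'] for ex in merged)
print(language_counts.most_common())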
I want to train ASR models - what should I use?
You should probably use the human annotations in human.jsonl, unless you want to train on the output of a machine learning model.
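As a sketch of how one might cut training segments out of the lesson videos (the ffmpeg invocation and the mono 16kHz output format are our illustrative choices, not part of the dataset):
import subprocess
from huggingface_hub import hf_hub_download

def extract_utterance_audio(ex: dict, out_path: str):
    # fetch the lesson video for this annotation
    video_path = hf_hub_download(repo_id="octanove/mosla", repo_type="dataset",
                                 filename=f"video/{ex['lesson_language']}/{ex['video']}")
    # cut out [start, end] and convert it to a mono 16kHz WAV for ASR training
    subprocess.run(["ffmpeg", "-ss", str(ex["start"]), "-i", video_path,
                    "-t", str(ex["end"] - ex["start"]),
                    "-vn", "-ac", "1", "-ar", "16000", out_path], check=True)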
What are the annotation records?
The annotation records contain information about which spans of time were shown to humans for annotation. They can safely be ignored for most use cases, but they become important if you want to train or evaluate a system that includes speech detection or diarization, because they tell you which periods of time humans chose not to annotate with any utterances. That is, human.jsonl contains only the utterances that humans chose to annotate, but you need annotation_records.jsonl to know which sections of each video were shown to humans. For example, consider the annotation record below:
{"start":600.0,"end":900.0,"lesson_language":"ara","video":"20210828.mp4"}
From this record, we can say that any period between the 600th and 900th seconds of this video that has no human annotation carries a gold label indicating the absence of speech.
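Here is a minimal sketch of that logic (the helper below is hypothetical; it assumes 'train' splits and compares a record against the human annotations):
from datasets import load_dataset

human = load_dataset('octanove/mosla', 'human')['train']
records = load_dataset('octanove/mosla', 'annotation_records')['train']

def no_speech_intervals(record: dict) -> list[tuple[float, float]]:
    # human utterances overlapping this record's video and time window
    utterances = sorted(
        (ex['start'], ex['end']) for ex in human
        if ex['video'] == record['video']
        and ex['lesson_language'] == record['lesson_language']
        and ex['end'] > record['start'] and ex['start'] < record['end']
    )
    # sweep through the window; anything not covered by an utterance is non-speech
    gaps, cursor = [], record['start']
    for start, end in utterances:
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < record['end']:
        gaps.append((cursor, record['end']))
    return gaps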