---
dataset_info:
  features:
  - name: file
    dtype: string
  - name: audio
    struct:
    - name: array
      sequence: float64
    - name: path
      dtype: string
    - name: sampling_rate
      dtype: int64
  - name: text
    dtype: string
  - name: speaker_id
    dtype: int64
  - name: chapter_id
    dtype: int64
  - name: id
    dtype: string
  splits:
  - name: train.clean.100
    num_bytes: 1623641436
    num_examples: 1000
  - name: train.clean.360
    num_bytes: 1572285643
    num_examples: 1000
  - name: train.other.500
    num_bytes: 1502809029
    num_examples: 1000
  - name: validation.clean
    num_bytes: 65591952
    num_examples: 100
  - name: validation.other
    num_bytes: 76760504
    num_examples: 100
  - name: test.clean
    num_bytes: 85852252
    num_examples: 100
  - name: test.other
    num_bytes: 58550856
    num_examples: 100
  download_size: 1181170369
  dataset_size: 4985491672
configs:
- config_name: default
  data_files:
  - split: train.clean.100
    path: data/train.clean.100-*
  - split: train.clean.360
    path: data/train.clean.360-*
  - split: train.other.500
    path: data/train.other.500-*
  - split: validation.clean
    path: data/validation.clean-*
  - split: validation.other
    path: data/validation.other-*
  - split: test.clean
    path: data/test.clean-*
  - split: test.other
    path: data/test.other-*
---
# Condensed LibriSpeech ASR
This dataset is a condensed version of the [LibriSpeech ASR dataset](https://huggingface.co/datasets/openslr/librispeech_asr), created by subsampling a small, fixed number of examples from each of the original splits. It is intended for quick experimentation, prototyping, and debugging when working with Automatic Speech Recognition (ASR) tasks.
## Dataset Details
- **Original Dataset:** [LibriSpeech ASR](https://huggingface.co/datasets/openslr/librispeech_asr)
- **Condensation:** A fixed-size subsample of every original split (see the example counts below)
- **Splits Included:**
  - `train.clean.100`
  - `train.clean.360`
  - `train.other.500`
  - `validation.clean`
  - `validation.other`
  - `test.clean`
  - `test.other`
For each split, a fixed number of examples was sampled (the snippet after this list shows how to check the split sizes):
- **Training Splits:** 1,000 examples each
- **Validation/Test Splits:** 100 examples each
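A quick way to verify the split sizes once the dataset is downloaded (a minimal sketch; the repository name is the one used in the usage example below):
```python
from datasets import load_dataset

# Load the condensed dataset and print the number of examples per split.
# The counts should match the figures listed above.
dataset = load_dataset("nyalpatel/condensed_librispeech_asr")
for split_name, split in dataset.items():
    print(f"{split_name}: {split.num_rows} examples")
```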
## Data Format
Each sample in the dataset contains the following fields (see the access example after this list):
- **file:** A path to the original audio file (FLAC format).
- **audio:** A dictionary containing:
  - `path`: Path to the audio file.
  - `array`: The decoded audio waveform as a sequence of floating-point samples.
  - `sampling_rate`: The sampling rate (typically 16 kHz).
- **text:** The transcription corresponding to the audio.
- **id:** A unique identifier for the sample.
- **speaker_id:** A unique identifier for the speaker.
- **chapter_id:** An identifier corresponding to the audiobook chapter.
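To make this layout concrete, here is a minimal sketch that loads a single split and reads the fields of one example (because `audio.array` is stored as a plain sequence of floats rather than an `Audio` feature, it may come back as a Python list, so the NumPy conversion is optional):
```python
import numpy as np
from datasets import load_dataset

# Load only the test.clean split and inspect the first example.
ds = load_dataset("nyalpatel/condensed_librispeech_asr", split="test.clean")
example = ds[0]

waveform = np.asarray(example["audio"]["array"])   # float audio samples
sr = example["audio"]["sampling_rate"]             # typically 16000 Hz

print(example["id"], example["speaker_id"], example["chapter_id"])
print(f"{len(waveform) / sr:.2f} s of audio at {sr} Hz")
print(example["text"])
```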
## How Was This Dataset Created?
The condensed dataset was generated by streaming the full [LibriSpeech ASR dataset](https://huggingface.co/datasets/openslr/librispeech_asr) with the Hugging Face Datasets library and keeping a fixed number of examples from each split (1,000 per training split, 100 per validation/test split). The original structure and fields are preserved, so the subset can be used as a drop-in replacement in models and workflows designed for LibriSpeech.
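The exact generation script is not included in this card, but a comparable subset can be produced along these lines, using streaming so the full corpus never has to be downloaded (a hedged sketch: the `all` config and split names follow the upstream dataset, and the examples kept here are simply the first `N`):
```python
from datasets import Dataset, load_dataset

# Sketch of the condensation step for one split: stream the full LibriSpeech
# split and materialise only the first N examples as a regular Dataset.
N = 1000  # 1,000 for training splits, 100 for validation/test splits

streamed = load_dataset(
    "openslr/librispeech_asr",
    "all",
    split="train.clean.100",
    streaming=True,
)

subset = Dataset.from_generator(lambda: streamed.take(N))
print(subset)

# Repeating this for every split and pushing each result to the Hub
# (e.g. subset.push_to_hub(..., split="train.clean.100")) yields a
# repository with the layout described in this card.
```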
## Usage Example
Below is a Python snippet to load and inspect the dataset:
```python
from datasets import load_dataset
# Load the condensed dataset from the Hugging Face Hub
dataset = load_dataset("nyalpatel/condensed_librispeech_asr")
# Access a specific split (e.g., test.clean)
test_dataset = dataset["test.clean"]
# Display the first example in the test set
print(test_dataset[0])
```