To get access to this dataset, make a request here and then ping pyannoteAI members with your HF username on the JSALT-2025-EMMA Discord channel.
# pyannoteAI-EMMA-20k dataset
As part of the EMMA project at JSALT 2025, the pyannoteAI research lab ran its internal speaker diarization pipeline on the entire YODAS2 dataset. From the resulting predictions, we then selected a subset totaling around 20,000 hours of audio.
## Licence
This dataset is licensed under CC BY-NC-SA 4.0 and therefore does not allow commercial use (e.g. training a commercial model using this dataset).
## Dataset Summary
- Languages: 21 non-English languages + English
- Audio duration per non-English language: ~500 hours
  - 250 h of single-speaker recordings (104,399 segments of ~2 minutes)
  - 250 h of multi-speaker recordings (28,638 segments of ~5, ~10, and ~20 minutes)
- Audio duration for English: ~10,000 hours
  - 5,000 h of single-speaker recordings (114,248 segments of ~2 minutes)
  - 5,000 h of multi-speaker recordings (25,574 segments of ~5, ~10, and ~20 minutes)
Each file contains at least 60% speech, with an average speech ratio of 83% (based on automatic speech activity detection).
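The per-file speech ratio can be recomputed from the annotations as the merged duration of speech segments divided by the file duration. A minimal sketch (the segment list and file duration below are illustrative, not taken from the dataset):

```python
def speech_ratio(segments, total_duration):
    """Fraction of audio covered by speech, given (onset, duration) pairs.

    Overlapping segments are merged first so that overlapped speech
    is not counted twice.
    """
    intervals = sorted((start, start + dur) for start, dur in segments)
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            # extend the previous interval instead of double-counting
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    speech = sum(end - start for start, end in merged)
    return speech / total_duration

# Example: a 100 s file with two overlapping speaker turns
ratio = speech_ratio([(0.0, 50.0), (40.0, 30.0), (80.0, 10.0)], 100.0)
# merged speech: [0, 70] + [80, 90] = 80 s -> ratio 0.8
```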
## Possible Use Cases
- Single-speaker recordings (9,925 hours): suitable for training speech separation models (e.g., by simulating overlapped mixtures from clean single-speaker audio)
- Multi-speaker recordings (9,557 hours): suitable for training and evaluating speaker diarization systems
## Duration per Language

All durations are given as hours:minutes:seconds. "Mono" denotes single-speaker recordings and "Multi" multi-speaker recordings; "Audio" is the total audio duration and "Speech" the detected speech duration.

Language | Mono Audio | Mono Speech | Multi Audio | Multi Speech |
---|---|---|---|---|
ar | 250:07:37 | 212:05:11 | 250:00:02 | 211:19:13 |
bn | 250:06:14 | 204:59:28 | 134:59:44 | 108:13:07 |
cs | 54:04:42 | 42:32:20 | 56:05:16 | 45:21:48 |
de | 250:02:43 | 201:39:38 | 250:10:45 | 206:05:12 |
es | 250:02:14 | 199:03:54 | 250:10:19 | 217:20:15 |
fr | 250:01:24 | 203:50:56 | 250:02:09 | 211:04:01 |
hi | 250:04:01 | 206:00:03 | 250:00:43 | 203:00:12 |
id | 250:01:40 | 196:16:11 | 250:01:13 | 198:59:45 |
it | 250:00:40 | 196:14:47 | 250:02:52 | 213:47:34 |
ja | 250:05:07 | 202:37:27 | 250:08:25 | 199:22:37 |
ko | 250:02:23 | 198:01:52 | 250:14:39 | 206:12:25 |
nl | 250:00:13 | 200:40:30 | 250:03:34 | 210:30:14 |
pl | 250:00:00 | 201:10:04 | 241:25:13 | 193:20:55 |
pt | 250:00:24 | 209:12:37 | 250:05:22 | 211:19:42 |
ru | 250:00:16 | 203:33:15 | 250:19:25 | 207:48:13 |
ta | 186:10:18 | 153:57:46 | 70:06:59 | 58:15:12 |
th | 250:06:50 | 197:31:27 | 242:48:46 | 187:54:01 |
tr | 250:02:25 | 203:31:18 | 250:01:18 | 212:31:03 |
uk | 250:34:02 | 210:50:50 | 250:00:09 | 224:06:33 |
ur | 183:17:17 | 152:00:57 | 59:29:22 | 51:35:26 |
vi | 250:02:57 | 200:32:31 | 250:05:18 | 199:43:23 |
en | 5000:00:29 | 4229:42:11 | 5000:16:06 | 4371:43:27 |
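When aggregating durations from the table above, the hours field may exceed 24, so standard time parsers do not apply. A small hypothetical helper for converting these strings:

```python
def hms_to_hours(hms: str) -> float:
    """Convert an 'H:MM:SS' duration string (hours may exceed 24) to hours."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h + m / 60 + s / 3600

# e.g. total English audio (mono + multi), using the values from the table
total_en = hms_to_hours("5000:00:29") + hms_to_hours("5000:16:06")
```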
## Output Files
We provide RTTM annotations for both types of recordings:
- `single.rttm`: annotations for single-speaker segments
- `multi.rttm`: annotations for multi-speaker segments
- `mix.rttm`: merged annotations combining both single- and multi-speaker recordings
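RTTM is a plain-text, space-separated format in which each `SPEAKER` line carries the recording id, channel, onset, duration, and speaker label. A minimal parsing sketch (the example line is illustrative, not taken from the dataset):

```python
from collections import namedtuple

Turn = namedtuple("Turn", "uri channel onset duration speaker")

def parse_rttm(lines):
    """Parse SPEAKER lines of an RTTM file into Turn records."""
    turns = []
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip comments and non-speaker entries
        # SPEAKER <uri> <chan> <onset> <dur> <NA> <NA> <speaker> <NA> <NA>
        turns.append(Turn(fields[1], fields[2],
                          float(fields[3]), float(fields[4]), fields[7]))
    return turns

# Illustrative line (not from the dataset):
example = ["SPEAKER file001 1 12.34 5.67 <NA> <NA> spk_00 <NA> <NA>"]
print(parse_rttm(example)[0].speaker)  # spk_00
```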
Additionally, we provide a `database.yml` file that can be used directly with pyannote.audio to fine-tune your own diarization models.
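For readers unfamiliar with pyannote's database configuration: a `database.yml` maps recording ids to audio paths and declares protocols that pair file lists with RTTM annotations. The sketch below shows the typical shape only; the database name, protocol names, and paths are hypothetical, and the shipped `database.yml` defines the actual entries.

```yaml
# Hypothetical layout; consult the provided database.yml for the real entries.
Databases:
  EMMA: /path/to/audio/{uri}.wav

Protocols:
  EMMA:
    SpeakerDiarization:
      mix:
        train:
          uri: lists/train.lst
          annotation: rttm/mix.rttm
```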