Commit bdd1ad1 (verified) by rlangman · parent: f6a8745

Update README.md

Files changed (1): README.md (+8, −23)
```diff
@@ -27,7 +27,7 @@ padding: 0;
 
 ## Dataset Description
 
-This repository contains the metadata for HiFiTTS-2, a large scale speech dataset derived from LibriVox audiobooks. For more details, please refer to our paper. [TODO: Add ArXiv link]
+This repository contains the metadata for HiFiTTS-2, a large-scale speech dataset derived from LibriVox audiobooks. For more details, please refer to our paper.
 
 The dataset contains metadata for approximately 36.7k hours of audio from 5k speakers that can be downloaded from LibriVox at a 48 kHz sampling rate.
 
```
```diff
@@ -43,7 +43,8 @@ The dataset contains an utterance level manifest with these fields:
 - **speaker**: LibriVox speaker ID
 - **set**: Dataset partition, either "train", "test_seen", "dev_seen", "test_unseen", or "dev_unseen"
 - **duration**: Duration of utterance
-- **bandwidth**: Estimated bandwidth of audiobook chapter containing this utterance.
+- **bandwidth**: Estimated bandwidth of audiobook chapter containing this utterance
+- **speaker_count**: Number of speakers detected in this utterance
 - **wer**: ASR word error rate of *normalized_text*
 - **cer**: ASR character error rate of *normalized_text*
 - **text_source**: Data source text was taken from, either 'book' or 'mls'
```
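The manifest fields above lend themselves to simple filtering with pandas (one of the libraries this card lists). A minimal sketch — the toy rows, thresholds, and the manifest path in the comment are illustrative assumptions, not part of this card:

```python
import pandas as pd

# Toy rows using the manifest fields described above; all values are made up.
# In practice you would load the real manifest (path and JSON-lines format assumed):
#   manifest = pd.read_json("manifest_filtered_22khz.json", lines=True)
manifest = pd.DataFrame([
    {"speaker": 92, "set": "train", "duration": 4.2, "bandwidth": 22050,
     "speaker_count": 1, "wer": 0.02, "cer": 0.01, "text_source": "book"},
    {"speaker": 92, "set": "train", "duration": 3.1, "bandwidth": 8000,
     "speaker_count": 2, "wer": 0.31, "cer": 0.12, "text_source": "mls"},
    {"speaker": 17, "set": "test_seen", "duration": 5.0, "bandwidth": 22050,
     "speaker_count": 1, "wer": 0.00, "cer": 0.00, "text_source": "book"},
])

# Keep clean single-speaker training utterances; the WER and bandwidth cutoffs
# here are arbitrary examples, not recommendations from the dataset card.
train = manifest[
    (manifest["set"] == "train")
    & (manifest["speaker_count"] == 1)
    & (manifest["wer"] < 0.10)
    & (manifest["bandwidth"] >= 13000)
]
print(len(train), train["duration"].sum())
```

The same boolean-mask pattern extends to any of the other fields, e.g. restricting *text_source* to 'book'.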
```diff
@@ -76,7 +77,9 @@ python /home/NeMo-speech-data-processor/main.py \
 
 *max_workers* is the number of threads to use for downloading the data. To download the 44 kHz dataset, specify *config_44khz.yaml*.
 
-By default, the script will download audio files into the workspace directory under *{workspace_dir}/audio_22khz*. The download will ignore HTTP errors and store information for any failed downloads into *{workspace_dir}/errors_22khz.json*. A new manifest will be created at *{worksapce_dir}/manifest_filtered_22khz.json* with utterances from failed audiobooks removed. You can override the default behavior by modifying the *config.yaml* file in your local SDP repository [TODO: Add link to config fil;e].
+Downloading the 22 kHz version will require approximately 3 TB of disk space; the 44 kHz version will require approximately 5 TB.
+
+By default, the script will download audio files into the workspace directory under *{workspace_dir}/audio_22khz*. The download will ignore HTTP errors and store information for any failed downloads into *{workspace_dir}/errors_22khz.json*. A new manifest will be created at *{workspace_dir}/manifest_filtered_22khz.json* with utterances from failed audiobooks removed. You can override the default behavior by modifying the [config.yaml file](https://github.com/NVIDIA/NeMo-speech-data-processor/blob/main/dataset_configs/english/hifitts2/config_22khz.yaml) in your local SDP repository.
 
 If you want to retry the download for failed audiobooks, rerun the script with the output *errors_22khz.json* file.
```
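After a download completes, one quick sanity check is to total the hours per partition in the filtered manifest. A minimal sketch, assuming *duration* is in seconds — the toy values below are not from this card:

```python
import pandas as pd

# Toy manifest with the partition and duration fields; real code would load
# the manifest_filtered_22khz.json produced by the download script instead.
manifest = pd.DataFrame({
    "set": ["train", "train", "dev_seen", "test_unseen"],
    "duration": [5400.0, 1800.0, 3600.0, 1800.0],  # seconds (assumed unit)
})

# Hours of audio available per dataset partition.
hours = (manifest.groupby("set")["duration"].sum() / 3600).round(2)
print(hours.to_dict())
```

Comparing these totals against the expected dataset size is a cheap way to spot audiobooks that failed to download before retrying with *errors_22khz.json*.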
 
```diff
@@ -95,7 +98,7 @@ NVIDIA Corporation
 
 ## Dataset Creation Date
 
-May 2025
+June 2025
 
 ## License/Terms of Use
 
```
````diff
@@ -104,22 +107,4 @@ GOVERNING TERMS: This dataset is licensed under the Creative Commons Attribution
 ## Ethical Considerations:
 NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
 
-Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
-
-## Citation
-
-If you find this dataset useful, please cite:
-
-TODO: Update with correct arxiv link
-
-```
-@article{rlangman2025hifitts2,
-  title={HiFiTTS-2: A Large-Scale High Bandwidth Speech Dataset},
-  author={Ryan Langman and Xuesong Yang and Paarth Neekhara and Shehzeen Hussain and Edresson Casanova and Evelina Bakhturina and Jason Li1},
-  year={2025},
-  eprint={2504.04030},
-  archivePrefix={arXiv},
-  primaryClass={cs.CL},
-  url={https://arxiv.org/abs/2504.04030},
-}
-```
+Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
````