# **Treble10-Speech**
The **Treble10-Speech** dataset provides pre-convolved speech files for automatic speech recognition (ASR) and related tasks, generated with high-fidelity room-acoustic simulations of 10 different furnished rooms: 2 bathrooms, 2 bedrooms, 2 living rooms with hallway, 2 living rooms without hallway, and 2 meeting rooms.
The room volumes range between 14 and 46 m³, resulting in reverberation times between 0.17 and 0.84 s.

## Example: Accessing a reverberant mono speech file
```python
from datasets import load_dataset, Audio
import matplotlib.pyplot as plt
import numpy as np

ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_mono",
    streaming=True,
)
# Keep the native sampling rate (don't ask TorchCodec to resample)
ds = ds.cast_column("audio", Audio())
rec = next(iter(ds))
# Read the samples from the TorchCodec decoder object:
samples = rec["audio"].get_all_samples()
speech_mono = samples.data  # shape: (channels, frames)
sr = samples.sample_rate
print(f"Mono speech has this shape: {speech_mono.shape}, and a sampling rate of {sr} Hz.")
# Plot the mono waveform over time
t_axis = np.arange(speech_mono.shape[1]) / sr
plt.figure()
plt.plot(t_axis, speech_mono.numpy().T, label="Mono speech")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.legend()
plt.show()
```

## Example: Accessing a reverberant speech file encoded in 8th-order Ambisonics
```python
from datasets import load_dataset, Audio

ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_hoa8",
    streaming=True,
)
# Keep the native sampling rate (don't ask TorchCodec to resample)
ds = ds.cast_column("audio", Audio())
rec = next(iter(ds))
# Read the samples from the TorchCodec decoder object:
samples = rec["audio"].get_all_samples()
speech = samples.data.cpu().numpy()  # shape: (channels, frames), all 81 HOA channels preserved
sr = samples.sample_rate
```
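
If you only need a single omnidirectional (mono-equivalent) signal from the HOA file, the zeroth-order component can be used directly. A minimal sketch, based on the ACN/SN3D convention stated in the split table below, under which channel 0 carries the omnidirectional component; the variable names are illustrative:

```python
# Assumes `speech` from the example above: (channels, frames) in ACN/SN3D order.
# In ACN ordering, channel 0 is the zeroth-order (omnidirectional) component.
speech_omni = speech[0]  # 1-D array of length `frames`
```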

## Example: Accessing a reverberant speech file at the microphones of a 6-channel device
```python
from datasets import load_dataset, Audio
import matplotlib.pyplot as plt
import numpy as np

ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_6ch",
    streaming=True,
)
ds = ds.cast_column("audio", Audio())
rec = next(iter(ds))
# Read the samples from the TorchCodec decoder object:
samples = rec["audio"].get_all_samples()
speech_6ch = samples.data  # shape: (channels, frames)
sr = samples.sample_rate
print(f"6-channel speech has this shape: {speech_6ch.shape}, and a sampling rate of {sr} Hz.")
# We can access and compare individual channels from the 6-channel device like this:
speech0 = speech_6ch[0]  # mic 0
speech4 = speech_6ch[4]  # mic 4
t_axis = np.arange(speech0.shape[0]) / sr
plt.figure()
plt.plot(t_axis, speech0.numpy(), label="Microphone 0")
plt.plot(t_axis, speech4.numpy(), label="Microphone 4")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.legend()
plt.show()
```

## Dataset Details
The dataset contains three subsets:
- **Treble10-Speech-mono**: This subset contains reverberant mono speech files, obtained by convolving dry speech signals with mono room impulse responses (RIRs; see the sketch after this list). In each room, RIRs are available between 5 sound sources and several receivers. The receivers are placed along horizontal receiver grids with 0.5 m resolution at three heights (0.5 m, 1.0 m, 1.5 m). The validity of all source and receiver positions is checked to ensure that none of them intersects with the room geometry or furniture.
- **Treble10-Speech-hoa8**: This subset contains reverberant speech files encoded in 8th-order Ambisonics, obtained by convolving dry speech signals with 8th-order Ambisonics RIRs. The sound sources and receivers are identical to those of the Speech-mono subset.
- **Treble10-Speech-6ch**: For this subset, a 6-channel cylindrical device is placed at the receiver positions from the Speech-mono subset. RIRs are then acquired between the 5 sound sources from above and each of the 6 device microphones. In other words, there is a 6-channel DeviceRIR for each source-receiver combination of the Speech-mono subset. Each channel of the DeviceRIR is then convolved with the same dry speech signal, resulting in a 6-channel reverberant speech signal. This 6-channel reverberant speech signal resembles the recording you would obtain when placing that 6-channel device at the corresponding receiver position and recording speech played back at the source position.
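
The reverberant signals were generated by convolving each dry speech signal with the corresponding RIR (channel by channel for the multichannel subsets). A minimal sketch of that operation, using `scipy` and hypothetical file names; it illustrates the process rather than reproducing the exact pipeline:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical inputs: a dry LibriSpeech utterance and a simulated mono RIR.
sr_speech, dry = wavfile.read("dry_speech.wav")  # shape: (frames,)
sr_rir, rir = wavfile.read("rir_mono.wav")       # shape: (frames,)
assert sr_speech == sr_rir, "speech and RIR must share one sampling rate"

# Convolution imprints the room's reflections and reverberation on the dry signal.
reverberant = fftconvolve(dry.astype(np.float64), rir.astype(np.float64))

# Normalize to avoid clipping, then save as 16-bit PCM.
reverberant /= np.max(np.abs(reverberant))
wavfile.write("reverberant_speech.wav", sr_speech, (reverberant * 32767).astype(np.int16))
```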

All RIRs (mono/HOA/device) that were used to generate reverberant speech for this dataset were simulated with the Treble SDK. We use a hybrid simulation paradigm that combines a numerical wave-based solver (discontinuous Galerkin finite element method, DG-FEM) at low to midrange frequencies with geometrical acoustics (GA) simulations at high frequencies. For this dataset, the transition frequency between the wave-based and the GA simulation is set at 5 kHz. The resulting hybrid RIRs are broadband signals with a 32 kHz sampling rate, thus covering the entire frequency range of the signal and containing audio content up to 16 kHz.

All dry speech files that were used to generate reverberant speech files through convolution with the above RIRs were taken from the [LibriSpeech corpus](https://www.openslr.org/12).

## Uses

Use cases such as far-field automatic speech recognition (ASR), speech enhancement, dereverberation, and source separation benefit greatly from the **Treble10-Speech** dataset. To illustrate this, consider the contrast between near-field and far-field ASR. In near-field setups, such as smartphones or headsets, the microphone is close to the speaker, capturing a clean signal dominated by the direct sound. In far-field scenarios, as in smart speakers or conference-room devices, the microphone is several meters away, and the recorded signal becomes a complex blend of direct sound, reverberation, and background noise. This difference is not merely spatial but physical: in far-field conditions, sound waves reflect off walls, diffract around objects, and decay over time, all of which are captured by the RIR. To achieve robust performance in such environments, ASR and related models must be trained on datasets that accurately represent these intricate acoustic interactions, which is precisely what **Treble10-Speech** provides. Similarly, the performance of such systems can only be reliably assessed by evaluating them on data that models sound propagation in complex environments accurately enough.

## Dataset Structure

Each subset of **Treble10-Speech** corresponds to a different channel configuration of the simulated room impulse responses (RIRs).
All subsets share the same metadata schema and organization.

| Split | Description | Channels |
|--------------|---------------------|----------|
| `speech_mono` | Single-channel reverberant mono speech | 1 |
| `speech_hoa8` | Reverberant speech encoded as 8th-order Ambisonics (ACN/SN3D format) | 81 |
| `speech_6ch` | Reverberant speech at the microphones of a six-channel home audio device | 6 |

### File Contents

Each `.parquet` file contains the metadata for one subset (split) of the dataset.
As this set of reverberant speech signals may be used for a variety of potential audio machine-learning tasks, we leave the actual segmentation of the data to the users.
The metadata links each reverberant speech file to its corresponding dry speech file and includes detailed acoustic parameters.

| Column | Description |
|---------|-------------|
| **audio_file** | Reference to the reverberant speech file. |
| **audio_filename** | Filename and relative path of the reverberant speech WAV file. |
| **speech_dataset** | Dataset from which the dry speech signal was taken. |
| **speech_dataset_split** | Dataset split from which the dry speech signal was taken. |
| **speech_filename** | Filename and relative path of the dry speech WAV file. |
| **gender** | Gender of the speaker. |
| **chapter_id** | Chapter ID of the speech file. |
| **transcript** | Transcript of the speech file. |
| **Room** | Short room nickname (e.g., `Room1`, `Room5`). |
| **Room Description** | Descriptive room type (e.g., `meeting_room`, `living_room`). |
| **Room Volume [m³]** | Volume of the room in cubic meters. |
| **Direct Path Length [m]** | Distance between source and receiver. |
| **Source Label / Position** | Label and 3D coordinates of the source. |
| **Receiver Label / Position** | Label and 3D coordinates of the receiver. |
| **Receiver Type** | Receiver configuration (`mono`, `8th order`, or `6-channel`). |
| **Frequencies, EDT, T30, C50, Average Absorption** | Octave-band acoustic parameters. |
| **Avg EDT, Avg T30, Avg Absorption** | Broadband summary values. |
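
The metadata can be used to select subsets of interest without a fixed segmentation. A minimal sketch, assuming the column labels above map directly to feature names in the parquet files:

```python
from datasets import load_dataset

ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_mono",
    streaming=True,
)
# Keep only utterances simulated in one room (room nicknames as listed above).
room1 = ds.filter(lambda rec: rec["Room"] == "Room1")
print(next(iter(room1))["transcript"])
```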

## Acoustic Parameters
The RIRs that were used to generate the reverberant speech signals are accompanied by a few relevant acoustic parameters describing the sound field as sampled by the specific source/receiver pairs.

### T30: Reverberation Time
T30 is a measure of how long a sound takes to fade away in a room after the sound source stops emitting.
It is a key measure of how reverberant a space is.
Specifically, it is the time needed for the sound energy to drop by 60 decibels, estimated from the first 30 dB of the decay.
A short T30 (ideally under 0.2 s) corresponds to a "dry"-sounding room, like a small office or recording booth. A long T30 (1.0 s or more) corresponds to a room that sounds "wet", such as a concert hall or parking garage.
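
As an illustration, T30 can be estimated from an RIR via backward (Schroeder) integration of the squared impulse response, fitting a line to the -5 dB to -35 dB portion of the decay and extrapolating to a 60 dB drop. A minimal sketch, assuming a mono RIR array `rir` at sampling rate `sr`:

```python
import numpy as np

def estimate_t30(rir: np.ndarray, sr: int) -> float:
    """Estimate T30 from a mono RIR via Schroeder backward integration."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]       # Schroeder decay curve
    edc_db = 10 * np.log10(energy / energy[0])     # normalize to 0 dB at t = 0
    t = np.arange(len(edc_db)) / sr
    # Fit a line over the -5 dB to -35 dB range of the decay.
    mask = (edc_db <= -5) & (edc_db >= -35)
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope                           # time to decay by 60 dB
```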

### EDT: Early Decay Time
Early Decay Time is another measure of reverberation, but is calculated from the first 10 dB of energy decay.
EDT is highly correlated with the psychoacoustic perception of reverberation, and can also provide information about the uniformity of the acoustic field within a space.
If EDT is approximately equal to T30, the reverberation is approximately a single-slope decay.
If EDT is much shorter than T30, this indicates the existence of a double-slope energy decay, which may form when two rooms are acoustically coupled.
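
EDT follows the same recipe as T30, but the line is fitted to the first 10 dB of the decay and scaled to a 60 dB equivalent. A sketch mirroring the T30 helper above:

```python
import numpy as np

def estimate_edt(rir: np.ndarray, sr: int) -> float:
    """Estimate EDT from a mono RIR: fit the 0 to -10 dB decay, scale to 60 dB."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]       # Schroeder decay curve
    edc_db = 10 * np.log10(energy / energy[0])
    t = np.arange(len(edc_db)) / sr
    mask = edc_db >= -10                           # first 10 dB of the decay
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope
```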

### C50: Clarity Index (Speech)
C50 is the ratio of the early-arriving sound energy (the first 50 milliseconds) to the late-arriving sound energy (from 50 milliseconds to the end of the RIR), expressed in decibels.
C50 is typically used as a measure of the potential speech intelligibility and clarity of a room, as it quantifies how much the early sound is obscured by the room's reverberation.
High C50 values (above 0 dB) are typically considered ideal for clear and intelligible speech.
Low C50 values (below 0 dB) are typically considered difficult for speech clarity.
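
Since C50 is a simple energy ratio, it is straightforward to compute from an RIR. A minimal sketch, again assuming a mono RIR array `rir` at sampling rate `sr`:

```python
import numpy as np

def estimate_c50(rir: np.ndarray, sr: int) -> float:
    """C50: early (< 50 ms) to late (>= 50 ms) energy ratio of an RIR, in dB."""
    n50 = int(0.050 * sr)          # sample index at 50 ms
    early = np.sum(rir[:n50] ** 2)
    late = np.sum(rir[n50:] ** 2)
    return 10 * np.log10(early / late)
```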

## More Information
More information on the dataset can be found in the corresponding blog post.

## Licensing Information
The **Treble10-Speech** dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
The dry speech signals that were used to generate reverberant speech files through convolution with RIRs were taken from the [LibriSpeech dataset](https://www.openslr.org/12), licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.