---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: filename
      dtype: string
    - name: room
      dtype: string
    - name: room_description
      dtype: string
    - name: room_volume
      dtype: float32
    - name: source
      dtype: string
    - name: source_position
      list: float32
    - name: receiver
      dtype: string
    - name: receiver_position
      list: string
    - name: direct_path_length_m
      dtype: float32
    - name: rir_format
      dtype: string
    - name: T30
      list: float32
    - name: EDT
      list: float32
    - name: C50
      list: float32
    - name: absorption
      list: float32
    - name: center_frequencies
      list: float32
    - name: librispeech_split
      dtype: string
    - name: librispeech_file
      dtype: string
    - name: transcript
      dtype: string
  splits:
    - name: speech_mono
      num_bytes: 3084458782
      num_examples: 3085
    - name: speech_6ch
      num_bytes: 18498009661
      num_examples: 3085
    - name: speech_hoa8
      num_bytes: 249702505975
      num_examples: 3085
  download_size: 260300266837
  dataset_size: 271284974418
configs:
  - config_name: default
    data_files:
      - split: speech_mono
        path: data/speech_mono-*
      - split: speech_6ch
        path: data/speech_6ch-*
      - split: speech_hoa8
        path: data/speech_hoa8-*
language:
  - en
license: cc-by-nc-sa-4.0
task_categories:
  - automatic-speech-recognition
pretty_name: Treble10-Speech
size_categories:
  - 1K<n<10K
---

Treble10-Speech (16 kHz)

Treble10-Speech is a dataset for automatic speech recognition (ASR). It contains pre-convolved speech files generated with high-fidelity room-acoustic simulations of 10 different furnished rooms: 2 bathrooms, 2 bedrooms, 2 living rooms with hallway, 2 living rooms without hallway, and 2 meeting rooms. The room volumes range between 14 and 46 m³, resulting in reverberation times between 0.17 and 0.84 s.

Example: Accessing a reverberant mono speech file

from datasets import load_dataset, Audio
import matplotlib.pyplot as plt
import numpy as np
ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_mono",
    streaming=True,
)
ds = ds.cast_column("audio", Audio())
# Read the samples from the TorchCodec decoder object:
rec = next(iter(ds))
samples = rec["audio"].get_all_samples()
speech_mono = samples.data
sr = samples.sample_rate
print(f"Mono speech has this shape: {speech_mono.shape}, and a sampling rate of {sr} Hz.")
# Plot the mono waveform over time
t_axis = np.arange(speech_mono.shape[1]) / sr
plt.figure()
plt.plot(t_axis, speech_mono.numpy().T, label="Mono speech")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.legend()
plt.show()

Example: Accessing a reverberant speech file encoded to 8th-order Ambisonics

from datasets import load_dataset, Audio
ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_hoa8",
    streaming=True,
)
# Keep native sampling rate (don’t ask TorchCodec to resample)
ds = ds.cast_column("audio", Audio())
rec = next(iter(ds))
# Read the samples from the TorchCodec decoder object:
samples = rec["audio"].get_all_samples()
speech = samples.data.cpu().numpy()  # shape: (channels, samples); all 81 HOA channels preserved
sr = samples.sample_rate
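
Continuing this example: since the channels follow ACN ordering with SN3D normalization (see the Dataset Structure table below), channel 0 is the omnidirectional W component, so extracting it gives a quick mono downmix for listening:

omni_w = speech[0]  # ACN channel 0 = W (omnidirectional), SN3D-normalized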

Example: Accessing a reverberant speech file at the microphones of a 6-channel device

from datasets import load_dataset, Audio
import matplotlib.pyplot as plt
import numpy as np
ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_6ch",
    streaming=True,
)
ds = ds.cast_column("audio", Audio())
# Read the samples from the TorchCodec decoder object:
rec = next(iter(ds))
samples = rec["audio"].get_all_samples()
speech_6ch = samples.data
sr = samples.sample_rate
print(f"6 channel speech has this shape: {speech_6ch.shape}, and a sampling rate of {sr} Hz.")
# We can access and compare individual channels from the 6ch device like this
speech0 = speech_6ch[0]  # mic 0
speech4 = speech_6ch[4]  # mic 4
t_axis = np.arange(speech0.shape[0]) / sr
plt.figure()
plt.plot(t_axis, speech0.numpy(), label="Microphone 0")
plt.plot(t_axis, speech4.numpy(), label="Microphone 4")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.legend()
plt.show()

Dataset Details

The dataset contains three subsets:

  • Treble10-Speech-mono: This subset contains reverberant mono speech files, obtained by convolving dry speech signals with mono room impulse responses (RIRs). In each room, RIRs are available between 5 sound sources and several receivers. The receivers are placed along horizontal receiver grids with 0.5 m resolution at three heights (0.5 m, 1.0 m, 1.5 m). The validity of all source and receiver positions is checked to ensure that none of them intersects with the room geometry or furniture.
  • Treble10-Speech-hoa8: This subset contains reverberant speech files encoded in 8th-order Ambisonics. These reverberant speech files are obtained by convolving dry speech signals with 8th-order Ambisonics RIRs. The sound sources and receivers are identical to the Speech-mono subset.
  • Treble10-Speech-6ch: For this subset, a 6-channel cylindrical device is placed at the receiver positions from the Speech-mono subset. RIRs are then acquired between the 5 sound sources from above and each of the 6 device microphones. In other words, there is a 6-channel DeviceRIR for each source-receiver combination of the Speech-mono subset. Each channel of the DeviceRIR is then convolved with the same dry speech signal, resulting in a 6-channel reverberant speech signal. This 6-channel reverberant speech signal resembles the recordings you would obtain when placing that 6-channel device at the corresponding receiver position and recording speech played back at the source position. A minimal sketch of this per-channel convolution is shown after this list.
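
The reverberant files in all three subsets ship pre-convolved, but the per-channel convolution described above is conceptually simple. A minimal sketch, assuming hypothetical NumPy arrays dry (dry speech, shape (num_samples,)) and device_rir (shape (num_channels, rir_length)) at the same sampling rate:

import numpy as np
from scipy.signal import fftconvolve

def convolve_device(dry, device_rir):
    # Convolve the same dry signal with each microphone's RIR,
    # yielding one reverberant channel per microphone.
    return np.stack([fftconvolve(dry, ch_rir) for ch_rir in device_rir])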

All RIRs (mono/HOA/device) that were used to generate reverberant speech for this dataset were simulated with the Treble SDK. We use a hybrid simulation paradigm that combines a numerical wave-based solver (discontinuous Galerkin finite element method, DG-FEM) at low to midrange frequencies with geometrical acoustics (GA) simulations at high frequencies. For this dataset, the transition frequency between the wave-based and the GA simulation is set at 5 kHz. The resulting hybrid RIRs are broadband signals with a 32 kHz sampling rate, thus covering the entire frequency range of the signal and containing audio content up to 16 kHz.

All dry speech files that were used to generate reverberant speech files through convolution with the above RIRs were taken from the LibriSpeech corpus.
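
For tasks such as dereverberation, each reverberant file can be paired with its dry source signal via the librispeech_split and librispeech_file metadata columns. A hypothetical sketch, assuming a local LibriSpeech copy and that librispeech_file is a path relative to the corpus root (adjust to your local layout):

import soundfile as sf
from pathlib import Path

LIBRISPEECH_ROOT = Path("/path/to/LibriSpeech")  # hypothetical local copy

rec = next(iter(ds))  # any record from the examples above
dry, dry_sr = sf.read(LIBRISPEECH_ROOT / rec["librispeech_file"])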

Uses

Use cases such as far-field automatic speech recognition (ASR), speech enhancement, dereverberation, and source separation benefit greatly from the Treble10-Speech dataset. To illustrate this, consider the contrast between near-field and far-field ASR. In near-field setups, such as smartphones or headsets, the microphone is close to the speaker and captures a clean signal dominated by the direct sound. In far-field scenarios, as in smart speakers or conference-room devices, the microphone is several meters away, and the recorded signal becomes a complex blend of direct sound, reverberation, and background noise. This difference is not merely spatial but physical: in far-field conditions, sound waves reflect off walls, diffract around objects, and decay over time, all of which is captured by the RIR. To achieve robust performance in such environments, ASR and related models must be trained on datasets that accurately represent these intricate acoustic interactions, which is precisely what Treble10-Speech provides. Likewise, the performance of such systems can only be reliably assessed when they are evaluated on data that models sound propagation in complex environments with sufficient accuracy.

Dataset Structure

Each subset of Treble10-Speech corresponds to a different channel configuration of the simulated room impulse responses (RIRs). All subsets share the same metadata schema and organization.

| Split | Description | Channels |
| --- | --- | --- |
| speech_mono | Single-channel reverberant mono speech | 1 |
| speech_hoa8 | Reverberant speech encoded as 8th-order Ambisonics (ACN/SN3D format) | 81 |
| speech_6ch | Reverberant speech at the microphones of a six-channel home audio device | 6 |

File Contents

Each .parquet file contains the metadata for one subset (split) of the dataset.
As this set of reverberant speech signals may be used for a variety of audio machine-learning tasks, we leave the actual segmentation of the data to the users. The metadata links each reverberant speech file to its corresponding dry speech file and includes detailed acoustic parameters.
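
Because the metadata travels with each example, ad-hoc segmentation is straightforward. For instance, a sketch that streams only meeting-room examples with a source-receiver distance above 2 m (the exact label string is illustrative; see the column table below):

from datasets import load_dataset

ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_mono",
    streaming=True,
)
# Filter on the metadata columns described below.
subset = ds.filter(
    lambda rec: rec["room_description"] == "meeting_room"
    and rec["direct_path_length_m"] > 2.0
)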

| Column | Description |
| --- | --- |
| audio | The convolved (reverberant) speech file. |
| filename | Filename and relative path of the reverberant speech file. |
| room | Short room nickname (e.g., Room1, Room5). |
| room_description | Descriptive room type (e.g., meeting_room, living_room). |
| room_volume | Volume of the room in cubic meters. |
| source | Label of the source. |
| source_position | 3D coordinates of the source in meters. |
| receiver | Label of the receiver. |
| receiver_position | 3D coordinates of the receiver in meters. |
| direct_path_length_m | Distance between source and receiver in meters. |
| rir_format | Format of the RIR used (mono, 6ch, or hoa8). |
| center_frequencies, EDT, T30, C50, absorption | Octave-band acoustic parameters. |
| librispeech_split | LibriSpeech split of the dry speech signal. |
| librispeech_file | File path and name of the dry signal, relative to the LibriSpeech corpus. |
| transcript | Transcript of the utterance. |

Acoustic Parameters

The RIRs that were used to generate the reverberant speech signals are accompanied by a few relevant acoustic parameters describing the sound field as sampled at the specific source/receiver pairs.

T30: Reverberation Time

T30 is a measure of how long sound takes to fade away in a room after the source stops emitting. It is a key measure of how reverberant a space is. Specifically, it is the time needed for the sound energy to drop by 60 decibels, estimated from the first 30 dB of the decay. A short T30 corresponds to a "dry"-sounding room, like a small office or recording booth (ideally under 0.2 s). A long T30 corresponds to a room that sounds "wet", such as a concert hall or parking garage (1.0 s or more).

EDT: Early Decay Time

Early Decay Time is another measure of reverberation, but it is calculated from the first 10 dB of the energy decay. EDT is highly correlated with the psychoacoustic perception of reverberation and can also provide information about the uniformity of the acoustic field within a space. If EDT is approximately equal to T30, the energy decay has approximately a single slope. If EDT is much shorter than T30, this indicates a double-slope energy decay, which may form when two rooms are acoustically coupled.
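
Both T30 and EDT can be estimated from an RIR by backward (Schroeder) integration of its squared amplitude and fitting a line to different portions of the resulting decay curve. A minimal broadband sketch, assuming a hypothetical mono RIR as a NumPy array rir with sampling rate sr (the dataset itself already provides these values per octave band in the metadata):

import numpy as np

def schroeder_db(rir):
    # Backward-integrated energy decay curve, normalized to 0 dB at t = 0.
    edc = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10 * np.log10(edc / edc[0])

def decay_time(edc_db, sr, start_db, end_db):
    # Fit a line to the decay between start_db and end_db,
    # then extrapolate to the time of a full 60 dB decay.
    t = np.arange(len(edc_db)) / sr
    mask = (edc_db <= start_db) & (edc_db >= end_db)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope

edc_db = schroeder_db(rir)
t30 = decay_time(edc_db, sr, -5.0, -35.0)  # T30: fit from -5 to -35 dB
edt = decay_time(edc_db, sr, 0.0, -10.0)   # EDT: fit from 0 to -10 dB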

C50: Clarity Index (Speech)

C50 is the energy ratio between the early-arriving sound (the first 50 milliseconds) and the late-arriving sound (from 50 milliseconds to the end of the RIR). C50 is typically used as a measure of the potential speech intelligibility and clarity of a room, as it quantifies how much the early sound is obscured by the room's reverberation. High C50 values (above 0 dB) are typically considered ideal for clear and intelligible speech; low C50 values (below 0 dB) are typically considered difficult for speech clarity.
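
Under the same assumptions as the sketch above (a hypothetical mono RIR rir with sampling rate sr), C50 reduces to an early/late energy ratio:

import numpy as np

def c50_db(rir, sr):
    # Ratio of energy in the first 50 ms to the remaining energy, in dB.
    split = int(round(0.05 * sr))
    early = np.sum(rir[:split] ** 2)
    late = np.sum(rir[split:] ** 2)
    return 10 * np.log10(early / late)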

More Information

More information on the dataset can be found in the corresponding blog post.

Licensing Information

The Treble10-Speech dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. The dry speech signals that were used to generate the reverberant speech files through convolution with RIRs were taken from the LibriSpeech dataset, which is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.