---
arxiv: 2510.23141
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: filename
      dtype: string
    - name: room
      dtype: string
    - name: room_description
      dtype: string
    - name: room_volume
      dtype: float32
    - name: source
      dtype: string
    - name: source_position
      list: float32
    - name: receiver
      dtype: string
    - name: receiver_position
      list: float32
    - name: direct_path_length_m
      dtype: float32
    - name: rir_format
      dtype: string
    - name: T30
      list: float32
    - name: EDT
      list: float32
    - name: C50
      list: float32
    - name: absorption
      list: float32
    - name: center_frequencies
      list: float32
    - name: librispeech_split
      dtype: string
    - name: librispeech_file
      dtype: string
    - name: transcript
      dtype: string
  splits:
    - name: speech_mono
      num_bytes: 3084427987
      num_examples: 3085
    - name: speech_6ch
      num_bytes: 18498009661
      num_examples: 3085
    - name: speech_hoa8
      num_bytes: 249702475180
      num_examples: 3085
  download_size: 273472049365
  dataset_size: 271284912828
configs:
  - config_name: default
    data_files:
      - split: speech_mono
        path: data/speech_mono-*
      - split: speech_6ch
        path: data/speech_6ch-*
      - split: speech_hoa8
        path: data/speech_hoa8-*
language:
  - en
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
pretty_name: Treble10-Speech
size_categories:
  - 1K<n<10K
tags:
  - audio
  - speech
  - acoustics
source_datasets:
  - treble-technologies/Treble10-RIR
  - openslr/librispeech_asr
---

# Treble10-Speech (16 kHz)

## Dataset Description


Treble10-Speech is a dataset for automatic speech recognition (ASR). It contains speech files pre-convolved with high-fidelity room-acoustic simulations from the Treble10-RIR dataset, covering 10 different furnished rooms: 2 bathrooms, 2 bedrooms, 2 living rooms with hallway, 2 living rooms without hallway, and 2 meeting rooms. The room volumes range between 14 and 46 m³, resulting in reverberation times between 0.17 and 0.84 s.

Example: Accessing a reverberant mono speech file

```python
from datasets import load_dataset, Audio
import matplotlib.pyplot as plt
import numpy as np

ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_mono",
    streaming=True,
)
ds = ds.cast_column("audio", Audio())

# Read the samples from the TorchCodec decoder object:
rec = next(iter(ds))
samples = rec["audio"].get_all_samples()
speech_mono = samples.data
sr = samples.sample_rate
print(f"Mono speech has this shape: {speech_mono.shape}, and a sampling rate of {sr} Hz.")

# Plot the single-channel waveform:
t_axis = np.arange(speech_mono.shape[1]) / sr
plt.figure()
plt.plot(t_axis, speech_mono.numpy().T, label="Mono speech")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.legend()
plt.show()
```

Example: Accessing a reverberant speech file encoded to 8th-order Ambisonics

```python
from datasets import load_dataset, Audio
import io
import soundfile as sf

# Load the dataset in streaming mode
ds = load_dataset("treble-technologies/Treble10-Speech", split="speech_hoa8", streaming=True)

# Disable automatic decoding (we'll do it manually)
ds = ds.cast_column("audio", Audio(decode=False))

# Get one sample from the iterator
sample = next(iter(ds))

# Fetch the raw audio bytes
audio_bytes = sample["audio"]["bytes"]

# Some records may not carry "bytes", so fall back to reading from the file path
if audio_bytes is None:
    with open(sample["audio"]["path"], "rb") as f:
        audio_bytes = f.read()

# Decode the HOA audio directly from memory
speech_hoa, sr = sf.read(io.BytesIO(audio_bytes))
print(f"Loaded HOA audio: shape={speech_hoa.shape}, sr={sr}")
```

Example: Accessing a reverberant speech file at the microphones of a 6-channel device

```python
from datasets import load_dataset, Audio
import matplotlib.pyplot as plt
import numpy as np

ds = load_dataset(
    "treble-technologies/Treble10-Speech",
    split="speech_6ch",
    streaming=True,
)
ds = ds.cast_column("audio", Audio())

# Read the samples from the TorchCodec decoder object:
rec = next(iter(ds))
samples = rec["audio"].get_all_samples()
speech_6ch = samples.data
sr = samples.sample_rate
print(f"6-channel speech has this shape: {speech_6ch.shape}, and a sampling rate of {sr} Hz.")

# We can access and compare individual channels from the 6ch device like this:
speech0 = speech_6ch[0]  # mic 0
speech4 = speech_6ch[4]  # mic 4
t_axis = np.arange(speech0.shape[0]) / sr
plt.figure()
plt.plot(t_axis, speech0.numpy(), label="Microphone 0")
plt.plot(t_axis, speech4.numpy(), label="Microphone 4")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.legend()
plt.show()
```

## Dataset Details

The dataset contains three subsets:

- Treble10-Speech-mono: This subset contains reverberant mono speech files, obtained by convolving dry speech signals with mono room impulse responses (RIRs). In each room, RIRs are available between 5 sound sources and several receivers. The receivers are placed along horizontal receiver grids with 0.5 m resolution at three heights (0.5 m, 1.0 m, 1.5 m). The validity of all source and receiver positions is checked to ensure that none of them intersects with the room geometry or furniture.
- Treble10-Speech-hoa8: This subset contains reverberant speech files encoded in 8th-order Ambisonics, obtained by convolving dry speech signals with 8th-order Ambisonics RIRs. The sound sources and receivers are identical to the Speech-mono subset.
- Treble10-Speech-6ch: For this subset, a 6-channel cylindrical device is placed at the receiver positions from the Speech-mono subset. RIRs are then acquired between the 5 sound sources from above and each of the 6 device microphones. In other words, there is a 6-channel DeviceRIR for each source-receiver combination of the Speech-mono subset. Each channel of the DeviceRIR is then convolved with the same dry speech signal (see the sketch after this list), resulting in a 6-channel reverberant speech signal. This 6-channel signal resembles the recording you would obtain when placing the device at the corresponding receiver position and recording speech played back at the source position.
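
To make the per-channel convolution concrete, here is a minimal sketch using synthetic placeholder data; the toy `device_rir` and `dry` arrays are stand-ins, not actual dataset contents:

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy stand-ins: a 6-channel decaying-noise "DeviceRIR" and a dry signal.
# Real DeviceRIRs come from the Treble10-RIR dataset; these are placeholders.
rng = np.random.default_rng(0)
fs = 32000
decay = np.exp(-np.linspace(0, 8, fs // 2))
device_rir = rng.standard_normal((6, fs // 2)) * decay
dry = rng.standard_normal(fs)  # 1 s of noise standing in for dry speech

# Convolve every microphone channel with the same dry signal,
# yielding a 6-channel reverberant signal as described above.
speech_6ch = np.stack([fftconvolve(dry, device_rir[ch]) for ch in range(6)])
print(speech_6ch.shape)  # (6, len(dry) + rir_length - 1)
```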

All RIRs (mono/HOA/device) that were used to generate reverberant speech for this dataset were simulated with the Treble SDK. We use a hybrid simulation paradigm that combines a numerical wave-based solver (discontinuous Galerkin finite element method, DG-FEM) at low to midrange frequencies with geometrical acoustics (GA) simulations at high frequencies. For this dataset, the transition frequency between the wave-based and the GA simulation is set at 5 kHz. The resulting hybrid RIRs are broadband signals with a 32 kHz sampling rate, thus covering the entire frequency range of the signal and containing audio content up to 16 kHz.

All dry speech files that were used to generate reverberant speech files through convolution with the above RIRs were taken from the test splits of the LibriSpeech corpus. As the dry speech files were sampled at 16 kHz, the RIRs were downsampled while generating the Treble10-Speech set. You can create your own 32 kHz reverberant speech samples by downloading the Treble10-RIR dataset and convolving its RIRs with audio signals of your choice, as sketched below.
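
A minimal sketch of that workflow, assuming the Treble10-RIR dataset exposes an `audio` column analogous to the splits above (the split name `"mono"` is an assumption; check the Treble10-RIR dataset card for the actual names):

```python
from datasets import load_dataset, Audio
from scipy.signal import fftconvolve
import numpy as np

# Split name "mono" is an assumption; see the Treble10-RIR dataset card.
rirs = load_dataset("treble-technologies/Treble10-RIR", split="mono", streaming=True)
rirs = rirs.cast_column("audio", Audio())
rir_rec = next(iter(rirs))
rir_samples = rir_rec["audio"].get_all_samples()
rir = rir_samples.data.numpy().squeeze()  # 32 kHz mono RIR
fs = rir_samples.sample_rate

# Any dry signal resampled to `fs` works here; white noise as a placeholder.
dry = np.random.default_rng(0).standard_normal(fs)
wet = fftconvolve(dry, rir)    # reverberant signal at 32 kHz
wet /= np.max(np.abs(wet))     # normalize to avoid clipping
```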

## Uses

Use cases such as far-field automatic speech recognition (ASR), speech enhancement, dereverberation, and source separation benefit greatly from the Treble10-Speech dataset. To illustrate this, consider the contrast between near-field and far-field ASR. In near-field setups, such as smartphones or headsets, the microphone is close to the speaker, capturing a clean signal dominated by the direct sound. In far-field scenarios, as in smart speakers or conference-room devices, the microphone is several meters away, and the recorded signal becomes a complex blend of direct sound, reverberation, and background noise. This difference is not merely spatial but physical: in far-field conditions, sound waves reflect off walls, diffract around objects, and decay over time, all of which are captured by the RIR. To achieve robust performance in such environments, ASR and related models must be trained on datasets that accurately represent these intricate acoustic interactions—precisely what Treble10-Speech provides. Similarly, the performance of such systems can only be reliably determined when evaluating them on data that is accurate enough to model sound propagation in complex environments.

## Dataset Structure

Each subset of Treble10-Speech corresponds to a different channel configuration of the simulated room impulse responses (RIRs). All subsets share the same metadata schema and organization.

| Split | Description | Channels |
| --- | --- | --- |
| speech_mono | Single-channel reverberant mono speech | 1 |
| speech_hoa8 | Reverberant speech encoded as 8th-order Ambisonics (ACN/SN3D format) | 81 |
| speech_6ch | Reverberant speech at the microphones of a six-channel home audio device | 6 |

The six-channel device has microphones positioned at the following locations relative to the center of the device:

| Channel | Position [m] |
| --- | --- |
| 0 | [0.03, 0.0, 0.0] |
| 1 | [0.015, 0.026, 0.0] |
| 2 | [-0.0145, 0.026, 0.0] |
| 3 | [-0.03, 0.0, 0.0] |
| 4 | [-0.015, -0.026, 0.0] |
| 5 | [0.015, -0.026, 0.0] |
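
For convenience, the positions above can be dropped into an array to reason about the geometry, e.g. the aperture and the spacing-limited spatial-aliasing frequency (the derived quantities below are illustrative, not part of the dataset):

```python
import numpy as np

# Microphone positions of the 6-channel device, copied from the table above (m).
mic_pos = np.array([
    [0.03, 0.0, 0.0],
    [0.015, 0.026, 0.0],
    [-0.0145, 0.026, 0.0],
    [-0.03, 0.0, 0.0],
    [-0.015, -0.026, 0.0],
    [0.015, -0.026, 0.0],
])

# Pairwise spacings: the largest sets the array aperture, the smallest bounds
# the spatial-aliasing frequency f_max ~ c / (2 * d_min) for beamforming.
dists = np.linalg.norm(mic_pos[:, None, :] - mic_pos[None, :, :], axis=-1)
d_min = dists[dists > 0].min()
print(f"Aperture: {dists.max():.3f} m, f_alias ~ {343.0 / (2 * d_min):.0f} Hz")
```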

### File Contents

Each .parquet file contains the metadata for one subset (split) of the dataset. As this set of reverberant speech signals may be used for a variety of audio machine-learning tasks, we leave the segmentation of the data to users. The metadata links each reverberant speech file to its corresponding dry speech file and includes detailed acoustic parameters.

| Column | Description |
| --- | --- |
| audio | The convolved speech file. |
| filename | Filename and relative path of the audio file. |
| room | Short room nickname (e.g., Room1, Room5). |
| room_description | Descriptive room type (e.g., meeting_room, living_room). |
| room_volume | Volume of the room in cubic meters. |
| source | Label of the source. |
| source_position | 3D coordinates of the source in meters. |
| receiver | Label of the receiver. |
| receiver_position | 3D coordinates of the receiver in meters. |
| direct_path_length_m | Distance between source and receiver in meters. |
| rir_format | Format of the RIR used (mono, 6ch, or hoa8). |
| center_frequencies, T30, EDT, C50, absorption | Octave-band acoustic parameters. |
| librispeech_split | Source split of the dry speech signal. |
| librispeech_file | Path and name of the dry signal within the LibriSpeech dataset. |
| transcript | The transcript of the utterance. |
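
For example, the metadata of a single record can be inspected without decoding its audio (a sketch using the column names above):

```python
from datasets import load_dataset, Audio

ds = load_dataset("treble-technologies/Treble10-Speech", split="speech_mono", streaming=True)
ds = ds.cast_column("audio", Audio(decode=False))  # metadata only, skip decoding
rec = next(iter(ds))

# Link the reverberant file back to its room and its dry LibriSpeech source:
print(rec["room"], rec["room_description"], rec["room_volume"])
print(rec["librispeech_split"], rec["librispeech_file"])
print(rec["transcript"])

# Octave-band parameters are parallel lists indexed by center_frequencies:
for f, t30 in zip(rec["center_frequencies"], rec["T30"]):
    print(f"{f:>6.0f} Hz: T30 = {t30:.2f} s")
```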

### Acoustic Parameters

The RIRs that were used to generate the reverberant speech signals are accompanied by a few relevant acoustic parameters, which describe the sound field as sampled by each specific source/receiver pair.

#### T30: Reverberation Time

T30 is a measure of how long sound takes to fade away in a room after the source stops emitting. It is a key measure of how reverberant a space is. Specifically, it is the time needed for the sound energy to drop by 60 decibels, estimated from the first 30 dB of the decay. A short T30 corresponds to a "dry"-sounding room, like a small office or recording booth (ideally under 0.2 s). A long T30 corresponds to a room that sounds "wet", such as a concert hall or parking garage (1.0 s or more).

#### EDT: Early Decay Time

Early Decay Time is another measure of reverberation, but it is calculated from the first 10 dB of energy decay. EDT is highly correlated with the psychoacoustic perception of reverberation and can also provide information about the uniformity of the acoustic field within a space. If EDT is approximately equal to T30, the reverberation is approximately a single-slope decay. If EDT is much shorter than T30, this indicates the existence of a double-slope energy decay, which may form when two rooms are acoustically coupled.
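
Both T30 and EDT can be estimated from an RIR via Schroeder backward integration. The broadband sketch below is illustrative only (standards evaluate these per octave band, and the dataset already ships precomputed values); `rir` and `fs` are assumed to hold an impulse response and its sampling rate:

```python
import numpy as np

def schroeder_decay_db(rir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def decay_time(rir, fs, lo_db, hi_db):
    """Fit the decay between lo_db and hi_db, extrapolate to -60 dB."""
    edc = schroeder_decay_db(rir)
    t = np.arange(len(edc)) / fs
    mask = (edc <= lo_db) & (edc >= hi_db)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)  # dB per second (negative)
    return -60.0 / slope

# T30 fits the -5 to -35 dB range; EDT fits the 0 to -10 dB range:
# t30 = decay_time(rir, fs, -5.0, -35.0)
# edt = decay_time(rir, fs, 0.0, -10.0)
```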

#### C50: Clarity Index (Speech)

C50 is the energy ratio of the early-arriving sound (the first 50 milliseconds) to the late-arriving sound (from 50 milliseconds to the end of the RIR). C50 is typically used as a measure of the potential speech intelligibility and clarity of a room, as it quantifies how much the early sound is obscured by the room's reverberation. High C50 values (above 0 dB) are typically considered ideal for clear and intelligible speech. Low C50 values (below 0 dB) are typically considered difficult for speech clarity.
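
A corresponding broadband sketch for C50 (again illustrative; a rigorous implementation would start the early window at the direct-sound arrival and evaluate per octave band):

```python
import numpy as np

def c50_db(rir, fs):
    """Clarity index: early (first 50 ms) vs. late energy, in dB."""
    split = int(0.05 * fs)  # 50 ms boundary in samples, ideally measured
                            # from the direct-sound arrival
    early = np.sum(rir[:split] ** 2)
    late = np.sum(rir[split:] ** 2)
    return 10 * np.log10(early / late)
```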

## More Information

More information on the dataset can be found in the corresponding blog post.

## Licensing Information

The Treble10-Speech dataset combines two components with different licenses:

- Speech recordings (dry signals): sourced from the LibriSpeech corpus, licensed under the Creative Commons Attribution 4.0 International license (CC-BY-4.0).
- Room impulse responses (RIRs) and acoustical metadata: originating from the Treble10-RIR dataset, licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC-BY-NC-SA-4.0).

The convolved ("wet") speech recordings in this dataset are derivative works that combine both sources. As a result, they are governed by the CC-BY-4.0 license. The room impulse responses and all acoustical metadata associated with them remain governed by CC-BY-NC-SA-4.0.

## Citation Information

```bibtex
@misc{mullins2025treble10highqualitydatasetfarfield,
      title={Treble10: A high-quality dataset for far-field speech recognition, dereverberation, and enhancement},
      author={Sarabeth S. Mullins and Georg G\"otz and Eric Bezzam and Steven Zheng and Daniel Gert Nielsen},
      year={2025},
      eprint={2510.23141},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2510.23141},
}
```