---
tags:
- audio
- speech
- vad
- humming
license: cc-by-sa-4.0
task_categories:
- voice-activity-detection
language:
- en
size_categories:
- 1K<n<10K
---
> **Work in progress**

# [WIP] HumSpeechBlend Dataset: Humming vs Speech Detection

## Overview
HumSpeechBlend is a dataset designed to fine-tune Voice Activity Detection (VAD) models to distinguish between humming and actual speech. Current VAD models often misclassify humming as speech, leading to incorrect segmentation in speech processing tasks. This dataset provides a structured collection of humming audio interspersed with speech to help improve model accuracy.
## Dataset Creation Strategy
To build this dataset, the following methodology was used:
- Humming Audio Collection: Various humming recordings were sourced from the "MLEnd-Hums-and-Whistles" dataset.
- Speech Insertion: Short speech segments were extracted from "Global Recordings Network" datasets.
- Mixing Strategy:
  - Speech can be longer or shorter than the humming segment.
  - Speech is randomly inserted at different timestamps in the humming audio.
  - Speech timestamps were annotated to facilitate supervised learning.
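The mixing step above can be sketched as follows. This is an illustrative reconstruction, not the exact script used to build the dataset: the function name, the overlay-by-addition choice, and the handling of speech longer than the humming are assumptions.

```python
import numpy as np

def mix_speech_into_humming(humming: np.ndarray, speech: np.ndarray, sr: int,
                            rng=None):
    """Overlay a speech clip onto a humming clip at a random offset.

    Returns the mixed mono audio and the (start, end) speech
    timestamps in seconds, suitable as supervised VAD labels.
    """
    if rng is None:
        rng = np.random.default_rng()
    if len(speech) >= len(humming):
        # Speech longer than the humming: extend the output to fit the speech.
        out = np.zeros(len(speech), dtype=np.float32)
        out[: len(humming)] += humming
        start = 0
    else:
        # Pick a random insertion point that keeps the speech inside the clip.
        start = int(rng.integers(0, len(humming) - len(speech) + 1))
        out = humming.astype(np.float32).copy()
    out[start : start + len(speech)] += speech
    return out, (start / sr, (start + len(speech)) / sr)
```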
## Metadata Explanation
The dataset includes the following metadata columns:
| Column Name | Description |
|---|---|
| `file_name` | Path to the final mixed audio file (humming + speech). |
| `speech_ts` | Timestamps where speech appears within the mixed audio file. |
| `humming_song` | The song or source from which the humming was derived. |
| `humming_Interpreter` | The individual or source providing the humming. More info in `MLEndHWD_Interpreter_Demographics.csv`. |
| `humming_audio_used` | Path of the humming audio in the original dataset. |
| `humming_transcript` | Transcription of the humming produced by `whisper-large-v3-turbo`. |
| `globalrecordings_audio_used` | Speech segment sourced from Global Recordings Network. |
| `globalrecordings_audio_ts_used` | Start and end timestamps of the speech segment in the original recording. |
## Download and Usage

### Loading the Dataset

Since the dataset does not have predefined splits, you can load it with:
```python
import pandas as pd
from datasets import load_dataset

# Load the dataset from Hugging Face (no predefined splits)
dataset = load_dataset("CuriousMonkey7/HumSpeechBlend")

# Load the metadata (Feather format)
metadata = pd.read_feather("metadata.feather")
print(metadata.head())
```
### Loading Audio Files

To work with the audio files:
```python
import torchaudio

waveform, sample_rate = torchaudio.load("data/audio1.wav")
print(f"Sample Rate: {sample_rate}, Waveform Shape: {waveform.shape}")
```
## Citation

If you use this dataset, please cite it as follows:
```bibtex
@dataset{HumSpeechBlend,
  author    = {Sourabh Saini},
  title     = {HumSpeechBlend: Humming vs Speech Dataset},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/CuriousMonkey7/HumSpeechBlend}
}
```