---
tags:
- audio
- speech
- vad
- humming
license: cc-by-sa-4.0
task_categories:
- voice-activity-detection
language:
- en
size_categories:
- 1K<n<10K
---

# HumSpeechBlend: Humming vs Speech Dataset

## 📜 Dataset Creation Strategy

To build this dataset, the following methodology was used:

1. **Humming Audio Collection**: Humming recordings were sourced from the ["MLEnd-Hums-and-Whistles"](https://www.kaggle.com/datasets/jesusrequena/mlend-hums-and-whistles?resource=download) dataset.
2. **Speech Insertion**: Short speech segments were extracted from the ["Global Recordings Network"](https://models.silero.ai/vad_datasets/globalrecordings.feather) dataset.
3. **Mixing Strategy**:
   - A speech segment can be **longer or shorter than the humming segment**.
   - Speech is **randomly inserted** at different timestamps within the humming audio.
   - Speech timestamps were annotated to facilitate supervised learning.

### 🔹 Metadata Explanation

The dataset includes the following metadata columns:

| Column Name | Description |
|----------------------------------|-------------|
| `file_name` | Path to the final mixed audio file (humming + speech). |
| `speech_ts` | Timestamps where speech appears within the mixed audio file. |
| `humming_song` | The song or source from which the humming was derived. |
| `humming_Interpreter` | The individual or source providing the humming. More info in [`MLEndHWD_Interpreter_Demographics.csv`](https://huggingface.co/datasets/CuriousMonkey7/HumSpeechBlend/blob/main/MLEndHWD_Interpreter_Demographics.csv). |
| `humming_audio_used` | Path to the humming audio in the original dataset. |
| `humming_transcript` | Transcription of the humming, generated with whisper-large-v3-turbo. |
| `globalrecordings_audio_used` | Speech segment sourced from Global Recordings Network. |
| `globalrecordings_audio_ts_used` | Start and end timestamps of the speech segment in the original recording. |

## 📥 Download and Usage

### 🛠️ Loading the Dataset

Since the dataset does not have predefined splits, you can load it using the following code:

```python
import pandas as pd
from datasets import load_dataset

# Load the dataset from Hugging Face (no predefined splits)
dataset = load_dataset("CuriousMonkey7/HumSpeechBlend", split=None)

# Load the metadata (download metadata.feather from the dataset repo first)
metadata = pd.read_feather("metadata.feather")
print(metadata.head())
```

### 🔊 Loading Audio Files

To work with the audio files:

```python
import torchaudio

waveform, sample_rate = torchaudio.load("data/audio1.wav")
print(f"Sample Rate: {sample_rate}, Waveform Shape: {waveform.shape}")
```

## 📄 Citation

If you use this dataset, please cite it accordingly.

```
@dataset{HumSpeechBlend,
  author    = {Sourabh Saini},
  title     = {HumSpeechBlend: Humming vs Speech Dataset},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/CuriousMonkey7/HumSpeechBlend}
}
```
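
## 🏷️ Example: Deriving Frame-Level VAD Labels

Since the dataset targets voice activity detection, a common next step is turning `speech_ts` into per-frame labels. The sketch below is a minimal, illustrative helper, not part of the dataset tooling: it assumes each `speech_ts` entry is a list of `{"start": ..., "end": ...}` pairs expressed in seconds (inspect `metadata.head()` to confirm the exact format), and the function name and 30 ms frame length are arbitrary choices.

```python
import numpy as np

def speech_ts_to_frame_labels(speech_ts, num_samples, sample_rate, frame_ms=30):
    """Convert speech segments into a 0/1 per-frame VAD label vector.

    Assumes `speech_ts` is an iterable of {"start": float, "end": float}
    dicts with times in seconds -- verify against the metadata before use.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    num_frames = max(num_samples // frame_len, 1)
    labels = np.zeros(num_frames, dtype=np.int8)
    for seg in speech_ts:
        start_frame = int(seg["start"] * sample_rate) // frame_len
        end_frame = int(seg["end"] * sample_rate) // frame_len
        labels[start_frame : min(end_frame + 1, num_frames)] = 1
    return labels

# Example usage with the waveform loaded above (torchaudio returns [channels, samples]):
# labels = speech_ts_to_frame_labels(row["speech_ts"], waveform.shape[1], sample_rate)
```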