Datasets:
Statement Of Purpose
Purpose
This collection, consisting of voice notes recorded by Daniel Rosehill using Voicenotes.com (among others), is specifically gathered to evaluate and improve the robustness of Speech-to-Text (STT) systems under non-ideal, real-world conditions. Unlike studio-quality audio used for training, these notes often contain various types of background noise, overlapping conversations, and environmental distortions typical of everyday recording scenarios.
This dataset is intended to serve the following objectives:
1: Personal STT Fine Tune
This dataset incorporates a subset of notes that I am using for an STT fine-tune. Objective: improve speech recognition accuracy for personal voice notes by creating a refined transcription model tailored to individual speech patterns and common recording environments.
2: Entity Recognition ML For Voice Note Project (Personal WIP)
Develop a specialized model for a work-in-progress voice backend app that classifies and identifies entities within voice note recordings, enabling intelligent routing and categorization of voice-based content by common type (task list, calendar appointment, email draft, etc.).
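As a rough illustration of the routing idea, the sketch below classifies a transcript into one of the note types mentioned above using simple keyword cues. This is a hypothetical baseline only: the category names, cue words, and function are my illustrative assumptions, not the project's actual ML model.

```python
# Hypothetical keyword-matching baseline for routing voice notes by type.
# A real classifier would be a trained ML model; the cue sets below are
# illustrative assumptions, not the project's actual taxonomy.
CUES = {
    "task_list": {"todo", "task", "remind", "finish"},
    "calendar_appointment": {"meeting", "appointment", "schedule", "calendar"},
    "email_draft": {"email", "draft", "dear", "regards"},
}

def classify_note(transcript: str) -> str:
    """Return the note type whose cue words best match the transcript."""
    words = set(transcript.lower().split())
    scores = {label: len(words & cues) for label, cues in CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"
```

For example, a note beginning "draft an email to the team, kind regards" would route to the email-draft category, while an unmatched note falls through to "other".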
3: Public Dataset
These voice notes and annotations will be made public to the fullest extent possible; voice notes will be excluded only where they contain personally identifiable information (PII) or where time constraints prevent review.
Preprocessing will involve automatic scan logic (currently in progress) to flag notes that may need to be excluded.
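A minimal sketch of what such a scan might look like, assuming a regex-based pass over the AI-generated transcripts; the patterns and function below are illustrative assumptions, not the project's actual scan logic.

```python
import re

# Hypothetical PII scan: flags transcripts containing obvious identifiers
# (email addresses, phone-like digit runs) for manual review or exclusion.
# Patterns are illustrative assumptions, not the project's real scan logic.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\d[\s-]?){7,14}\d\b"),
}

def scan_transcript(text: str) -> list[str]:
    """Return the names of PII categories detected in a transcript."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
```

Any transcript returning a non-empty list would be held back from the public release pending manual review.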
4: STT Evaluation Data
I have conducted some small-scale STT evaluations of my own by recording synthetic voice notes.
However, I'm very interested in exploring the extent to which STT models perform under the kind of gritty "real world" conditions that I and other active STT users routinely subject them to.
Things like:
- Lousy internal phone mics (versus studio conditions with great microphones)
- Acoustic background challenges
- Audio that, through deductive logic, can be inferred to constitute user remarks not intended for transcription (advanced)
These conditions are explicitly targeted in the annotations.
5: Research Questions
The detailed annotations outlined and version-controlled here are intended to create a comprehensive framework for evaluating how various STT models compare when challenged by a litany of "real world" conditions. These include crying babies (I'm a new father!), honking motorists, and background conversations (in the user's language and others).
The (source) dataset contains approximately 700 voice notes totaling 13 hours of audio. Each audio file comes with an AI-generated transcript provided by Voicenotes.com's STT service, serving as a baseline for comparison. A subset of these transcripts will be manually corrected to create a high-quality ground truth dataset for fine-tuning STT models and developing a comprehensive, nuanced speech recognition research and development framework focused on real-world voice note transcription challenges.