# StepEval-Audio-Paralinguistic Dataset

## Overview
StepEval-Audio-Paralinguistic is a speech-to-speech benchmark designed to evaluate AI models' understanding of paralinguistic information in speech across 11 distinct dimensions. The dataset contains 550 carefully curated and annotated speech samples for assessing capabilities beyond semantic understanding.
## Key Features
- Comprehensive coverage: 11 paralinguistic dimensions with 50 samples each
- Diverse sources: Combines podcast recordings with AudioSet, CochlScene, and VocalSound samples
- High-quality annotations: Professionally verified open-set natural language descriptions
- Challenging construction: Includes synthesized question mixing and audio augmentation
- Standardized evaluation: Comes with automatic evaluation protocols
## Dataset Composition

### Core Categories

#### Basic Attributes
- Gender identification
- Age classification
- Timbre description
#### Speech Characteristics
- Emotion recognition
- Pitch classification
- Rhythm patterns
- Speaking speed
- Speaking style
#### Environmental Sounds
- Scenario detection
- Sound event recognition
- Vocal sound identification
### Task Categories and Label Distributions
| Category | Task Description | Label Distribution | Total Samples |
|---|---|---|---|
| Gender | Identify speaker's gender | Male: 25, Female: 25 | 50 |
| Age | Classify speaker's age | 20y: 6, 25y: 6, 30y: 5, 35y: 5, 40y: 5, 45y: 4, 50y: 4, plus Child: 7, Elderly: 8 | 50 |
| Speed | Categorize speaking speed | Slow: 10, Medium-slow: 10, Medium: 10, Medium-fast: 10, Fast: 10 | 50 |
| Emotion | Recognize emotional states | Anger, Joy, Sadness, Surprise, Sarcasm, etc. (50 manually annotated) | 50 |
| Scenarios | Detect background scenes | Indoor: 14, Outdoor: 12, Restaurant: 6, Kitchen: 6, Park: 6, Subway: 6 | 50 |
| Vocal | Identify non-speech vocal effects | Cough: 14, Sniff: 8, Sneeze: 7, Throat-clearing: 6, Laugh: 5, Sigh: 5, Other: 5 | 50 |
| Style | Distinguish speaking styles | Dialogue: 4, Discussion: 4, Narration: 8, Commentary: 8, Colloquial: 8, Speech: 8, Other: 10 | 50 |
| Rhythm | Characterize rhythm patterns | Steady: 10, Fluent: 10, Paused: 10, Hurried: 10, Fluctuating: 10 | 50 |
| Pitch | Classify dominant pitch ranges | Mid: 12, Mid-high: 14, High: 12, Mid-low: 12 | 50 |
| Event | Recognize non-vocal audio events | Music: 8, Other events: 42 (from AudioSet) | 50 |
**Dataset Notes:**
- Total samples: 550 (50 per category × 11 categories)
- Underrepresented categories were augmented to ensure diversity
- Scene/event categories use synthetic audio mixing with controlled parameters
- All audio samples are ≤30 seconds in duration
## Data Collection & Processing

### Preprocessing Pipeline
- All audio resampled to 24,000 Hz
- Strict duration control (≤30 seconds)
- Demographic balancing for underrepresented groups
- Professional annotation verification
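A minimal sketch of this pipeline, assuming `librosa` for resampling and `soundfile` for writing (the release does not specify the actual toolchain):

```python
import librosa
import soundfile as sf

TARGET_SR = 24_000   # all audio is resampled to 24 kHz
MAX_SECONDS = 30     # strict duration cap

def preprocess(in_path: str, out_path: str) -> bool:
    """Resample a clip to 24 kHz mono and enforce the 30-second limit."""
    audio, _ = librosa.load(in_path, sr=TARGET_SR, mono=True)
    if len(audio) / TARGET_SR > MAX_SECONDS:
        return False  # over the duration limit; clip is excluded
    sf.write(out_path, audio, TARGET_SR)
    return True
```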
### Special Enhancements
- Scenario: 6 environmental types mixed (from CochlScene)
- Event: AudioSet samples mixed
- Vocal: 7 paralinguistic types inserted (from VocalSound)
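For the scenario and event categories, background audio is mixed into the speech with controlled parameters. A sketch of one common approach, mixing at a target signal-to-noise ratio, is below; the benchmark's actual mixing parameters are not published, so `snr_db` here is an illustrative knob:

```python
import numpy as np

def mix_background(speech: np.ndarray, background: np.ndarray,
                   snr_db: float = 10.0) -> np.ndarray:
    """Mix a background clip (e.g. a CochlScene scene or AudioSet event)
    into speech at a target speech-to-background SNR."""
    # Loop or truncate the background to match the speech length.
    reps = int(np.ceil(len(speech) / len(background)))
    background = np.tile(background, reps)[: len(speech)]
    # Scale the background so the mixture hits the requested SNR.
    speech_power = np.mean(speech ** 2)
    bg_power = np.mean(background ** 2) + 1e-12
    gain = np.sqrt(speech_power / (bg_power * 10 ** (snr_db / 10)))
    return speech + gain * background
```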
## Dataset Construction
- Collected raw speech samples from diverse sources
- Generated text-based QA pairs aligned with annotations
- Converted QAs to audio using TTS synthesis
- Randomly inserted question clips before/after original utterances
- For environmental sounds: additional audio mixing before question concatenation
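A sketch of the final assembly step, attaching the synthesized question clip at a random side of the utterance (function and argument names are illustrative, not from the release):

```python
import random
import numpy as np

def assemble_clip(question_tts: np.ndarray, utterance: np.ndarray) -> np.ndarray:
    """Attach the TTS question before or after the original utterance at
    random. For scenario/event categories, the utterance is assumed to have
    been mixed with background audio first (see the sketch above)."""
    if random.random() < 0.5:
        return np.concatenate([question_tts, utterance])
    return np.concatenate([utterance, question_tts])
```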
## Evaluation Protocol
The benchmark evaluation follows a standardized three-phase process:
### 1. Model Response Collection

Audio-in/audio-out models are queried through their APIs using the original audio files as input. Each 24 kHz audio sample (≤30 s duration) generates a corresponding response audio, saved with a matching filename for traceability.
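A sketch of this collection loop, where `query_model` is a hypothetical wrapper around whichever speech-to-speech API is being evaluated (not a real client):

```python
from pathlib import Path

def collect_responses(input_dir: str, output_dir: str) -> None:
    """Query the model on every benchmark clip and save its audio reply
    under the same filename for traceability."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for wav in sorted(Path(input_dir).glob("*.wav")):
        response_audio = query_model(wav.read_bytes())  # hypothetical API wrapper
        (out / wav.name).write_bytes(response_audio)
```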
### 2. Speech-to-Text Conversion

All model response audios are transcribed using an ASR system. Transcripts undergo automatic text normalization and are stored for scoring.
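For example, with `openai-whisper` as a stand-in ASR system (the benchmark does not name the ASR model it uses):

```python
import whisper

asr = whisper.load_model("large-v3")

def transcribe(path: str) -> str:
    # transcribe() returns a dict; further text normalization
    # (casing, punctuation) would follow before the transcript is stored.
    return asr.transcribe(path)["text"].strip()
```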
### 3. Automated Assessment
The evaluation script (`LLM_judge.py`) compares ASR transcripts against ground-truth annotations using an LLM judge. Scoring considers semantic similarity rather than exact matches, with partial credit for partially correct responses. The final metrics include per-category accuracy scores.
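A minimal sketch of such a judge call, assuming an OpenAI-compatible chat API; the prompt, judge model, and scoring scale below are assumptions, not the contents of the released `LLM_judge.py`:

```python
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = (
    "You are grading a paralinguistic QA benchmark.\n"
    "Question: {question}\nReference answer: {reference}\n"
    "Model transcript: {transcript}\n"
    "Judge semantic similarity, not exact wording. Reply with JSON "
    '{{"score": s}} where s is 1 (correct), 0.5 (partially correct), or 0.'
)

def judge(question: str, reference: str, transcript: str) -> float:
    # Placeholder judge model; the released script may use a different one.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, reference=reference, transcript=transcript)}],
    )
    return json.loads(resp.choices[0].message.content)["score"]
```

Averaging such judge scores over each category's 50 samples would yield per-category accuracies like those reported below.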
## Benchmark Results on StepEval-Audio-Paralinguistic
| Model | Avg | Gender | Age | Timbre | Scenario | Event | Emotion | Pitch | Rhythm | Speed | Style | Vocal |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4o Audio | 43.45 | 18 | 42 | 34 | 22 | 14 | 82 | 40 | 60 | 58 | 64 | 44 |
| Kimi-Audio | 49.64 | 94 | 50 | 10 | 30 | 48 | 66 | 56 | 40 | 44 | 54 | 54 |
| Qwen-Omni | 44.18 | 40 | 50 | 16 | 28 | 42 | 76 | 32 | 54 | 50 | 50 | 48 |
| Step-Audio-AQAA | 36.91 | 70 | 66 | 18 | 14 | 14 | 40 | 38 | 48 | 54 | 44 | 0 |
| Step-Audio 2 | 76.55 | 98 | 92 | 78 | 64 | 46 | 72 | 78 | 70 | 78 | 84 | 82 |