Update README.md
</p>

## Introduction

We are pleased to announce the release of **MERaLiON-2**, the latest addition to the MERaLiON family of speech-text large language models. Our flagship model, [**MERaLiON-2-10B**](https://huggingface.co/MERaLiON/MERaLiON-2-10B), demonstrates competitive performance across benchmark evaluations in tasks such as multilingual automatic speech recognition (ASR), speech translation (ST), audio scene understanding, emotion recognition, and general speech comprehension. These results are comparable to those achieved by other state-of-the-art open-source AudioLLMs, including Qwen2.5-Omni-7B and Phi-4-multimodal-instruct.

MERaLiON-2-10B is specifically designed to follow complex instructions with a nuanced understanding of **Singapore’s multilingual and multicultural context**. It integrates a localized Whisper-large-v3 speech encoder with a Gemma-2-9b text decoder. The following graph presents task-specific evaluation scores, assessed using the **LLM-as-a-Judge** framework across multiple datasets. For the speech translation task, performance is measured with the BLEU metric, where higher scores indicate better translation quality.

<img src="radar_task.png" alt="model_capability" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

In addition, we introduce an ASR-optimized variant, [**MERaLiON-2-10B-ASR**](https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR), which delivers a **5–30%** performance improvement over OpenAI’s `whisper-large-v3` on speech recognition tasks. This enhancement spans Singapore’s four official languages (**English**, **Mandarin**, **Malay**, and **Tamil**) as well as three Southeast Asian languages: **Indonesian**, **Thai**, and **Vietnamese**. The model also demonstrates robust handling of **code-switching scenarios** and local colloquialisms, reflecting its adaptability to Singapore’s diverse linguistic landscape.

The following visualization illustrates the **1 - Word Error Rate (WER)** metric across these seven languages, comparing MERaLiON-2-10B-ASR with other leading models. A higher value indicates better transcription accuracy.

<img src="radar_asr.png" alt="model_capability" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

- **Extended Audio Length**: Supports audio inputs of up to 300 seconds (5 minutes) for audio and speech question answering tasks, and **up to 30 seconds for satisfactory performance on speech transcription (ASR) and speech translation (ST) tasks**.

- **Expanded Language Coverage**: In addition to English, Chinese, and Singlish, V2 introduces support for Malay, Tamil, and other Southeast Asian languages, including Indonesian, Thai, and Vietnamese.

- **Improved Performance**: Achieves higher performance across a wide range of tasks. See the [Performance](#performance) section for detailed benchmarks.

- **Higher Quality Training Data**: Trained on 120,000 hours of curated speech and audio data, filtered for quality and diversity, with an emphasis on local and multilingual audio sources.

- **Three Model Variants**: Available in general-purpose ([MERaLiON-2-10B](https://huggingface.co/MERaLiON/MERaLiON-2-10B)), ASR-optimized ([MERaLiON-2-10B-ASR](https://huggingface.co/MERaLiON/MERaLiON-2-10B-ASR)) and light-weight ([MERaLiON-2-3B](https://huggingface.co/MERaLiON/MERaLiON-2-3B)) configurations to balance latency, compute efficiency, and task performance across different deployment needs.
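
If you switch between the variants in code, the three checkpoint ids above can sit in a small lookup table; the `MERALION_REPOS` dict and its keys below are our own naming, not an official API:

```python
# Hugging Face repo ids for the three variants listed above.
# The dict and its keys are illustrative naming, not an official API.
MERALION_REPOS = {
    "general": "MERaLiON/MERaLiON-2-10B",
    "asr": "MERaLiON/MERaLiON-2-10B-ASR",
    "lightweight": "MERaLiON/MERaLiON-2-3B",
}

repo_id = MERALION_REPOS["asr"]
```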

## Model Description

MERaLiON stands for **M**ultimodal **E**mpathetic **R**easoning **a**nd **L**earning **i**n **O**ne **N**etwork.

**MERaLiON-2** is an upgraded version of [MERaLiON-AudioLLM](https://huggingface.co/MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION).

## Performance

We benchmark the MERaLiON-2 series with the extended [AudioBench benchmark](https://huggingface.co/spaces/MERaLiON/AudioBench-Leaderboard) against several recently released open-source multimodal models, including SALMONN-7B, the Qwen2.5-Omni series, and Phi-4-Multimodal, as well as two cascade models.

**Better Automatic Speech Recognition (ASR) Accuracy**

MERaLiON-2-10B-ASR and MERaLiON-2-10B demonstrate leading performance in Singlish, Mandarin, Malay, Tamil, and other Southeast Asian languages, while maintaining competitive results in English compared to `Whisper-large-v3`. The following table shows the average transcription `Word Error Rate` by language for the MERaLiON family and other leading AudioLLMs. The `Private Dataset` comprises locally accented Singaporean speech with code-switching.

<style type="text/css">
#T_0910c th {
}
#T_0910c_row0_col1, #T_0910c_row1_col1, #T_0910c_row2_col1, #T_0910c_row3_col1, #T_0910c_row4_col1, #T_0910c_row5_col1, #T_0910c_row6_col1, #T_0910c_row7_col1, #T_0910c_row8_col1 {
  text-align: center;
}
#T_0910c_row0_col2, #T_0910c_row0_col3, #T_0910c_row0_col4, #T_0910c_row0_col5, #T_0910c_row0_col6, #T_0910c_row0_col7, #T_0910c_row0_col8, #T_0910c_row0_col9, #T_0910c_row0_col10, #T_0910c_row0_col11, #T_0910c_row1_col2, #T_0910c_row1_col3, #T_0910c_row1_col4, #T_0910c_row1_col5, #T_0910c_row1_col6, #T_0910c_row1_col7, #T_0910c_row1_col8, #T_0910c_row1_col9, #T_0910c_row1_col10, #T_0910c_row1_col11, #T_0910c_row2_col2, #T_0910c_row2_col3, #T_0910c_row2_col4, #T_0910c_row2_col5, #T_0910c_row2_col6, #T_0910c_row2_col7, #T_0910c_row2_col8, #T_0910c_row2_col9, #T_0910c_row2_col10, #T_0910c_row2_col11, #T_0910c_row3_col2, #T_0910c_row3_col3, #T_0910c_row3_col4, #T_0910c_row3_col5, #T_0910c_row3_col6, #T_0910c_row3_col7, #T_0910c_row3_col8, #T_0910c_row3_col9, #T_0910c_row3_col10, #T_0910c_row3_col11, #T_0910c_row4_col2, #T_0910c_row4_col3, #T_0910c_row4_col4, #T_0910c_row4_col5, #T_0910c_row4_col6, #T_0910c_row4_col7, #T_0910c_row4_col8, #T_0910c_row4_col9, #T_0910c_row4_col10, #T_0910c_row4_col11, #T_0910c_row5_col2, #T_0910c_row5_col3, #T_0910c_row5_col4, #T_0910c_row5_col5, #T_0910c_row5_col6, #T_0910c_row5_col7, #T_0910c_row5_col8, #T_0910c_row5_col9, #T_0910c_row5_col10, #T_0910c_row5_col11, #T_0910c_row6_col0, #T_0910c_row6_col2, #T_0910c_row6_col3, #T_0910c_row6_col4, #T_0910c_row6_col5, #T_0910c_row6_col6, #T_0910c_row6_col8, #T_0910c_row6_col9, #T_0910c_row6_col10, #T_0910c_row6_col11, #T_0910c_row7_col2, #T_0910c_row7_col3, #T_0910c_row7_col4, #T_0910c_row7_col5, #T_0910c_row7_col6, #T_0910c_row7_col7, #T_0910c_row7_col8, #T_0910c_row7_col9, #T_0910c_row7_col10, #T_0910c_row7_col11, #T_0910c_row8_col2, #T_0910c_row8_col3, #T_0910c_row8_col4, #T_0910c_row8_col5, #T_0910c_row8_col6, #T_0910c_row8_col7, #T_0910c_row8_col8, #T_0910c_row8_col9, #T_0910c_row8_col10, #T_0910c_row8_col11 {
  text-align: center;
}
</style>
<table id="T_0910c">
<thead>
</thead>
<tbody>
<tr>
<th id="T_0910c_level0_row0" class="row_heading level0 row0" >Thai</th>
<td id="T_0910c_row0_col0" class="data row0 col0" >0.096526</td>
<td id="T_0910c_row0_col1" class="data row0 col1" >0.109365</td>
<td id="T_0910c_row0_col2" class="data row0 col2" >0.107279</td>
<td id="T_0910c_row0_col11" class="data row0 col11" >1.510068</td>
</tr>
<tr>
<th id="T_0910c_level0_row1" class="row_heading level0 row1" >Tamil</th>
<td id="T_0910c_row1_col0" class="data row1 col0" >0.271279</td>
<td id="T_0910c_row1_col1" class="data row1 col1" >0.327081</td>
<td id="T_0910c_row1_col2" class="data row1 col2" >0.344081</td>
<td id="T_0910c_row1_col11" class="data row1 col11" >1.876722</td>
</tr>
<tr>
<th id="T_0910c_level0_row2" class="row_heading level0 row2" >Singlish</th>
<td id="T_0910c_row2_col0" class="data row2 col0" >0.129830</td>
<td id="T_0910c_row2_col1" class="data row2 col1" >0.168813</td>
<td id="T_0910c_row2_col2" class="data row2 col2" >0.180395</td>
<td id="T_0910c_row2_col11" class="data row2 col11" >0.448863</td>
</tr>
<tr>
<th id="T_0910c_level0_row3" class="row_heading level0 row3" >Malay</th>
<td id="T_0910c_row3_col0" class="data row3 col0" >0.194638</td>
<td id="T_0910c_row3_col1" class="data row3 col1" >0.209074</td>
<td id="T_0910c_row3_col2" class="data row3 col2" >0.279891</td>
<td id="T_0910c_row3_col11" class="data row3 col11" >3.762933</td>
</tr>
<tr>
<th id="T_0910c_level0_row4" class="row_heading level0 row4" >English</th>
<td id="T_0910c_row4_col0" class="data row4 col0" >0.078544</td>
<td id="T_0910c_row4_col1" class="data row4 col1" >0.088259</td>
<td id="T_0910c_row4_col2" class="data row4 col2" >0.122295</td>
<td id="T_0910c_row4_col11" class="data row4 col11" >0.098225</td>
</tr>
<tr>
<th id="T_0910c_level0_row5" class="row_heading level0 row5" >Indonesian</th>
<td id="T_0910c_row5_col0" class="data row5 col0" >0.121020</td>
<td id="T_0910c_row5_col1" class="data row5 col1" >0.142813</td>
<td id="T_0910c_row5_col2" class="data row5 col2" >0.131950</td>
<td id="T_0910c_row5_col11" class="data row5 col11" >3.565510</td>
</tr>
<tr>
<th id="T_0910c_level0_row6" class="row_heading level0 row6" >Mandarin</th>
<td id="T_0910c_row6_col0" class="data row6 col0" >0.103694</td>
<td id="T_0910c_row6_col1" class="data row6 col1" >0.132025</td>
<td id="T_0910c_row6_col2" class="data row6 col2" >0.145878</td>
<td id="T_0910c_row6_col11" class="data row6 col11" >0.238879</td>
</tr>
<tr>
<th id="T_0910c_level0_row7" class="row_heading level0 row7" >Vietnamese</th>
<td id="T_0910c_row7_col0" class="data row7 col0" >0.118693</td>
<td id="T_0910c_row7_col1" class="data row7 col1" >0.134808</td>
<td id="T_0910c_row7_col2" class="data row7 col2" >0.155110</td>
<td id="T_0910c_row7_col11" class="data row7 col11" >1.805643</td>
</tr>
<tr>
<th id="T_0910c_level0_row8" class="row_heading level0 row8" >Private Dataset</th>
<td id="T_0910c_row8_col0" class="data row8 col0" >0.106150</td>
<td id="T_0910c_row8_col1" class="data row8 col1" >0.112360</td>
<td id="T_0910c_row8_col2" class="data row8 col2" >0.147258</td>
</tr>
</tbody>
</table>

**Better Instruction Following and Audio Understanding**

**MERaLiON-2-10B** exhibits substantial advancements in speech understanding, audio understanding, and paralinguistic tasks. Notably, it adeptly handles complex instructions and responds with enhanced flexibility, effectively preserving Gemma's pre-trained knowledge during the audio fine-tuning process. This capability enables MERaLiON-2-10B to provide detailed explanations regarding speech content and the speaker's emotional state. Furthermore, with appropriate prompt adjustments, the model can assume various roles, such as a voice assistant, a virtual caregiver, or an integral component of sophisticated multi-agent AI systems and software solutions.

<style type="text/css">
#T_b6ba8 th {
}
#T_b6ba8_row0_col0, #T_b6ba8_row2_col0, #T_b6ba8_row3_col0, #T_b6ba8_row5_col0, #T_b6ba8_row6_col0, #T_b6ba8_row8_col0, #T_b6ba8_row9_col0, #T_b6ba8_row10_col0 {
  text-align: center;
}
#T_b6ba8_row0_col1, #T_b6ba8_row0_col2, #T_b6ba8_row0_col3, #T_b6ba8_row0_col4, #T_b6ba8_row0_col5, #T_b6ba8_row0_col6, #T_b6ba8_row0_col7, #T_b6ba8_row0_col8, #T_b6ba8_row0_col9, #T_b6ba8_row0_col11, #T_b6ba8_row0_col12, #T_b6ba8_row0_col13, #T_b6ba8_row1_col1, #T_b6ba8_row1_col2, #T_b6ba8_row1_col3, #T_b6ba8_row1_col4, #T_b6ba8_row1_col5, #T_b6ba8_row1_col6, #T_b6ba8_row1_col7, #T_b6ba8_row1_col8, #T_b6ba8_row1_col9, #T_b6ba8_row1_col10, #T_b6ba8_row1_col11, #T_b6ba8_row1_col12, #T_b6ba8_row1_col13, #T_b6ba8_row2_col2, #T_b6ba8_row2_col3, #T_b6ba8_row2_col4, #T_b6ba8_row2_col5, #T_b6ba8_row2_col6, #T_b6ba8_row2_col7, #T_b6ba8_row2_col8, #T_b6ba8_row2_col9, #T_b6ba8_row2_col10, #T_b6ba8_row2_col11, #T_b6ba8_row2_col12, #T_b6ba8_row2_col13, #T_b6ba8_row3_col1, #T_b6ba8_row3_col3, #T_b6ba8_row3_col4, #T_b6ba8_row3_col5, #T_b6ba8_row3_col6, #T_b6ba8_row3_col7, #T_b6ba8_row3_col8, #T_b6ba8_row3_col9, #T_b6ba8_row3_col10, #T_b6ba8_row3_col11, #T_b6ba8_row3_col12, #T_b6ba8_row3_col13, #T_b6ba8_row4_col1, #T_b6ba8_row4_col2, #T_b6ba8_row4_col3, #T_b6ba8_row4_col4, #T_b6ba8_row4_col5, #T_b6ba8_row4_col6, #T_b6ba8_row4_col7, #T_b6ba8_row4_col8, #T_b6ba8_row4_col9, #T_b6ba8_row4_col10, #T_b6ba8_row4_col11, #T_b6ba8_row4_col12, #T_b6ba8_row4_col13, #T_b6ba8_row5_col1, #T_b6ba8_row5_col2, #T_b6ba8_row5_col3, #T_b6ba8_row5_col5, #T_b6ba8_row5_col6, #T_b6ba8_row5_col7, #T_b6ba8_row5_col8, #T_b6ba8_row5_col9, #T_b6ba8_row5_col10, #T_b6ba8_row5_col11, #T_b6ba8_row5_col12, #T_b6ba8_row5_col13, #T_b6ba8_row6_col1, #T_b6ba8_row6_col3, #T_b6ba8_row6_col4, #T_b6ba8_row6_col5, #T_b6ba8_row6_col6, #T_b6ba8_row6_col7, #T_b6ba8_row6_col8, #T_b6ba8_row6_col9, #T_b6ba8_row6_col10, #T_b6ba8_row6_col11, #T_b6ba8_row6_col12, #T_b6ba8_row6_col13, #T_b6ba8_row7_col1, #T_b6ba8_row7_col2, #T_b6ba8_row7_col3, #T_b6ba8_row7_col4, #T_b6ba8_row7_col5, #T_b6ba8_row7_col6, #T_b6ba8_row7_col7, #T_b6ba8_row7_col8, #T_b6ba8_row7_col9, #T_b6ba8_row7_col10, #T_b6ba8_row7_col11, #T_b6ba8_row7_col12, #T_b6ba8_row7_col13, #T_b6ba8_row8_col1, #T_b6ba8_row8_col2, #T_b6ba8_row8_col3, #T_b6ba8_row8_col4, #T_b6ba8_row8_col6, #T_b6ba8_row8_col7, #T_b6ba8_row8_col8, #T_b6ba8_row8_col9, #T_b6ba8_row8_col10, #T_b6ba8_row8_col11, #T_b6ba8_row8_col12, #T_b6ba8_row8_col13, #T_b6ba8_row9_col1, #T_b6ba8_row9_col2, #T_b6ba8_row9_col4, #T_b6ba8_row9_col5, #T_b6ba8_row9_col6, #T_b6ba8_row9_col7, #T_b6ba8_row9_col8, #T_b6ba8_row9_col9, #T_b6ba8_row9_col10, #T_b6ba8_row9_col11, #T_b6ba8_row9_col12, #T_b6ba8_row9_col13, #T_b6ba8_row10_col1, #T_b6ba8_row10_col3, #T_b6ba8_row10_col4, #T_b6ba8_row10_col5, #T_b6ba8_row10_col6, #T_b6ba8_row10_col7, #T_b6ba8_row10_col8, #T_b6ba8_row10_col9, #T_b6ba8_row10_col10, #T_b6ba8_row10_col11, #T_b6ba8_row10_col12, #T_b6ba8_row10_col13 {
  text-align: center;
}
  font-weight: bold;
  text-decoration: underline;
  text-align: center;
}
</style>
<table id="T_b6ba8">
<thead>
</thead>
<tbody>
<tr>
<th id="T_b6ba8_level0_row0" class="row_heading level0 row0" >Speech Instruction</th>
<td id="T_b6ba8_row0_col0" class="data row0 col0" >70.200000</td>
<td id="T_b6ba8_row0_col1" class="data row0 col1" >70.800000</td>
<td id="T_b6ba8_row0_col2" class="data row0 col2" >13.400000</td>
<td id="T_b6ba8_row0_col13" class="data row0 col13" >20.400000</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row1" class="row_heading level0 row1" >Emotion Recognition</th>
<td id="T_b6ba8_row1_col0" class="data row1 col0" >63.736268</td>
<td id="T_b6ba8_row1_col1" class="data row1 col1" >48.577313</td>
<td id="T_b6ba8_row1_col2" class="data row1 col2" >53.693298</td>
<td id="T_b6ba8_row1_col13" class="data row1 col13" >50.801545</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row2" class="row_heading level0 row2" >Audio Scene Question Answering</th>
<td id="T_b6ba8_row2_col0" class="data row2 col0" >51.140374</td>
<td id="T_b6ba8_row2_col1" class="data row2 col1" >52.207756</td>
<td id="T_b6ba8_row2_col2" class="data row2 col2" >49.511886</td>
<td id="T_b6ba8_row2_col13" class="data row2 col13" >33.034083</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row3" class="row_heading level0 row3" >Gender Recognition</th>
<td id="T_b6ba8_row3_col0" class="data row3 col0" >95.109423</td>
<td id="T_b6ba8_row3_col1" class="data row3 col1" >97.177396</td>
<td id="T_b6ba8_row3_col2" class="data row3 col2" >97.220335</td>
<td id="T_b6ba8_row3_col13" class="data row3 col13" >60.773275</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row4" class="row_heading level0 row4" >Spoken QA (Singlish)</th>
<td id="T_b6ba8_row4_col0" class="data row4 col0" >66.550000</td>
<td id="T_b6ba8_row4_col1" class="data row4 col1" >58.900000</td>
<td id="T_b6ba8_row4_col2" class="data row4 col2" >61.850000</td>
<td id="T_b6ba8_row4_col13" class="data row4 col13" >51.200000</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row5" class="row_heading level0 row5" >Audio Captioning</th>
<td id="T_b6ba8_row5_col0" class="data row5 col0" >35.604270</td>
<td id="T_b6ba8_row5_col1" class="data row5 col1" >36.976419</td>
<td id="T_b6ba8_row5_col2" class="data row5 col2" >34.466710</td>
<td id="T_b6ba8_row5_col13" class="data row5 col13" >6.200867</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row6" class="row_heading level0 row6" >Spoken Dialogue Summarisation</th>
<td id="T_b6ba8_row6_col0" class="data row6 col0" >53.100000</td>
<td id="T_b6ba8_row6_col1" class="data row6 col1" >53.600000</td>
<td id="T_b6ba8_row6_col2" class="data row6 col2" >55.800000</td>
<td id="T_b6ba8_row6_col13" class="data row6 col13" >39.450000</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row7" class="row_heading level0 row7" >Spoken QA (English)</th>
<td id="T_b6ba8_row7_col0" class="data row7 col0" >79.735049</td>
<td id="T_b6ba8_row7_col1" class="data row7 col1" >63.711481</td>
<td id="T_b6ba8_row7_col2" class="data row7 col2" >73.975834</td>
<td id="T_b6ba8_row7_col13" class="data row7 col13" >70.595242</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row8" class="row_heading level0 row8" >Music Understanding</th>
<td id="T_b6ba8_row8_col0" class="data row8 col0" >63.942713</td>
<td id="T_b6ba8_row8_col1" class="data row8 col1" >51.347936</td>
<td id="T_b6ba8_row8_col2" class="data row8 col2" >60.657119</td>
<td id="T_b6ba8_row8_col13" class="data row8 col13" >44.313395</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row9" class="row_heading level0 row9" >Accent Recognition</th>
<td id="T_b6ba8_row9_col0" class="data row9 col0" >41.815396</td>
<td id="T_b6ba8_row9_col1" class="data row9 col1" >43.799799</td>
<td id="T_b6ba8_row9_col2" class="data row9 col2" >47.788864</td>
<td id="T_b6ba8_row9_col13" class="data row9 col13" >14.294613</td>
</tr>
<tr>
<th id="T_b6ba8_level0_row10" class="row_heading level0 row10" >Speech Translation</th>
<td id="T_b6ba8_row10_col0" class="data row10 col0" >27.391115</td>
<td id="T_b6ba8_row10_col1" class="data row10 col1" >27.086366</td>
<td id="T_b6ba8_row10_col2" class="data row10 col2" >28.540359</td>
</tr>
</tbody>
</table>

## How to Use

> [!WARNING]
> **Out of Scope use**: This model is not intended for use in tool calling, math, or coding tasks.

MERaLiON-2 requires `transformers` version `4.50.1`:

```
pip install transformers==4.50.1
```
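
To fail fast on a mismatched environment, the pin can be checked at import time; this check is our sketch, not part of the original card:

```python
import transformers

# The card pins transformers 4.50.1; the model's remote code may not load on other versions.
assert transformers.__version__ == "4.50.1", transformers.__version__
```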

### Audio Input

- For ASR tasks, the maximum audio length is suggested to be 30 seconds at 16,000 Hz.
- For general speech and audio understanding tasks, the maximum audio length is suggested to be 300 seconds at a 16,000 Hz sampling rate, as sketched below.

### Text Prompt

MERaLiON-2 is trained with this prompt template:

```
Instruction: <TextHere> \nFollow the text instruction based on the following audio: <SpeechHere>
```

For MERaLiON-2-10B-ASR, it is strongly recommended to stick to this template: replace `<TextHere>` with your text instruction and leave `<SpeechHere>` untouched. We list a few useful example prompts here:

**Standard prompts for better accuracy**

```python
prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"

transcription_prompt = prompt_template.format(query="Please transcribe the speech")
translation_prompt = prompt_template.format(query="Please translate the speech into xxx")
```

> [!WARNING]
> Other prompts might not perform well on MERaLiON-2-10B-ASR.
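
The general-purpose MERaLiON-2-10B is more tolerant of free-form instructions and, as noted in the Performance section, can take on roles such as a voice assistant or virtual caregiver. Two illustrative role-style prompts built on the same template (the wording is ours, not an official recommendation):

```python
# Illustrative role-style prompts for the general-purpose MERaLiON-2-10B.
# The instruction wording is ours, not an official recommendation.
assistant_prompt = prompt_template.format(
    query="You are a friendly voice assistant. Answer the speaker's question concisely."
)
caregiver_prompt = prompt_template.format(
    query="You are a virtual caregiver. Describe the speaker's emotional state and respond with empathy."
)
```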

### Huggingface Inference with CPU

```python
import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

repo_id = "MERaLiON/MERaLiON-2-10B-ASR"

processor = AutoProcessor.from_pretrained(
    repo_id,
    trust_remote_code=True,
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    repo_id,
    use_safetensors=True,
    trust_remote_code=True,
)

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"
transcribe_prompt = "Please transcribe this speech."
translate_prompt = "Can you please translate this speech into written Chinese?"

# Batch inference of 2 samples.
conversation = [
    [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}],
    [{"role": "user", "content": prompt_template.format(query=translate_prompt)}],
]

chat_prompt = processor.tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=False,
    add_generation_prompt=True
)

# Use audio sampled at 16,000 Hz.
audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000)
audio_array = [audio_array] * 2

inputs = processor(text=chat_prompt, audios=audio_array)

# Adjust `max_new_tokens` based on your use case.
outputs = model.generate(**inputs, max_new_tokens=256)
generated_ids = outputs[:, inputs['input_ids'].size(1):]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
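
`response` holds one decoded string per entry in `conversation`, in the same order, so the transcription and the translation can be read back directly:

```python
# One decoded answer per conversation, in input order.
for text in response:
    print(text)
```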

### Huggingface GPU Inference

```python
import torch
import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

repo_id = "MERaLiON/MERaLiON-2-10B-ASR"
device = "cuda"

processor = AutoProcessor.from_pretrained(
    repo_id,
    trust_remote_code=True,
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    repo_id,
    use_safetensors=True,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
).to(device)

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"
transcribe_prompt = "Please transcribe this speech."
translate_prompt = "Can you please translate this speech into written Chinese?"

# Batch inference of 2 samples.
conversation = [
    [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}],
    [{"role": "user", "content": prompt_template.format(query=translate_prompt)}],
]

chat_prompt = processor.tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=False,
    add_generation_prompt=True
)

# Use audio sampled at 16,000 Hz.
audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000)
audio_array = [audio_array] * 2

inputs = processor(text=chat_prompt, audios=audio_array)

# Move tensors to the GPU and cast float32 features to bfloat16 to match the model.
for key, value in inputs.items():
    if isinstance(value, torch.Tensor):
        inputs[key] = inputs[key].to(device)

        if value.dtype == torch.float32:
            inputs[key] = inputs[key].to(torch.bfloat16)

# Adjust `max_new_tokens` based on your use case.
outputs = model.generate(**inputs, max_new_tokens=256)
generated_ids = outputs[:, inputs['input_ids'].size(1):]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
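
Generation needs no gradients, so the `generate` call can optionally be wrapped in `torch.inference_mode()` to cut autograd overhead; this is our suggestion rather than part of the original card:

```python
# Optional: skip autograd bookkeeping during generation.
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=256)
```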
## ⚠️ Disclaimer