Phi-4-mm-inst-asr-singlish
Phi-4-multimodal-instruct-asr-singlish (Phi-4-mm-inst-asr-singlish) is a targeted effort to address a key limitation of broad large multimodal models (LMMs) such as Microsoft's Phi-4: under-representation of regional dialects. Singlish's code-switching and distinctive prosody frequently confound generic models.
At the same time, Phi-4 has undergone vast pre-training that already captures complex linguistic structures, promising better generalisation than smaller ASR systems like Whisper. This targeted adaptation of Phi-4-multimodal-instruct (Phi-4-mm-inst) marks progress toward the broader vision of a unified model that can listen, comprehend, and respond naturally, laying the groundwork for voice-first agents that reason, translate, and generate code seamlessly within a single contextual framework.
Model Details
- Developed by: Ming Jie Wong
- Base Model: microsoft/Phi-4-multimodal-instruct
- Model Type: Decoder-only Transformer with vision / speech adapters
- Metrics: Word Error Rate (WER)
- Languages Supported: English (with a focus on Singlish)
- License: MIT
Description
This work employs supervised fine-tuning (SFT) of Phi-4-mm-inst for Singlish ASR by leveraging 66.9k paired audio–transcript examples. The dataset is derived exclusively from the Part 3 Same Room Environment Close-talk Mic recordings of IMDA's NSC Corpus.
Rather than retraining all model parameters, we selectively unfreeze only the `audio_embed` module (specifically its encoder and audio projection layers) while keeping the remaining weights fixed. During training, each audio clip is paired with its ground-truth transcript, to which we append a dedicated end-of-transcription marker (`<|end|><|endoftext|>`). We then optimize a standard cross-entropy loss over the token sequences, teaching the model both to transcribe audio features into text and to generate the marker at the end of the transcription. This surgical, data-driven approach focuses computational resources on adapting the model's audio processing to Singlish's unique phonetic, prosodic, and code-switching characteristics, without altering its core language understanding.
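A minimal sketch of this parameter-selection and label-construction step is shown below. It is illustrative only: parameter names are matched by substring, since the exact attribute paths inside `audio_embed` may differ across versions of the base model, and the actual training script may differ.

```python
import torch
from transformers import AutoModelForCausalLM

base_model_path = "microsoft/Phi-4-multimodal-instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model_path, trust_remote_code=True, torch_dtype=torch.bfloat16
)

# Freeze everything, then re-enable gradients only for parameters that belong
# to the audio_embed module's encoder and audio projection layers.
for name, param in model.named_parameters():
    param.requires_grad = "audio_embed" in name and (
        "encoder" in name or "audio_projection" in name
    )

# Each training target is the ground-truth transcript followed by the
# end-of-transcription marker, so the model also learns when to stop.
transcript = "okay lah we meet at the hawker centre later"  # illustrative example
target_text = transcript + "<|end|><|endoftext|>"
```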
The original Part 3 of the National Speech Corpus comprises approximately 1,000 hours of conversational speech from around 1,000 local English speakers, recorded in pairs. These conversations cover everyday topics and include interactive game-based dialogues. Recordings were conducted in two environments:
- Same Room, where speakers shared a room and were recorded using a close-talk mic and a boundary mic.
- Separate Room, where each speaker was recorded individually using a standing mic and a telephone (IVR).
Audio segments for the internal dataset were extracted using the following criteria (see the sketch after this list):
- Minimum word count: 10 words. This threshold was chosen to ensure that each audio segment contains sufficient linguistic context for the model to better understand instructions in Singlish. Shorter segments may bias the model towards specific utterances or phrases, limiting its overall comprehension.
- Maximum duration: 20 seconds. This threshold was chosen to provide enough context for accurate transcription while minimizing noise and computational complexity for longer audio segments.
- Sampling rate: All audio segments are down-sampled to 16 kHz.
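A minimal sketch of this filtering and resampling step is given below. It is illustrative only: it assumes per-segment transcript text and audio file paths, and uses `librosa` for duration checks and resampling, which is an assumption about tooling rather than a description of the actual pipeline.

```python
import librosa

MIN_WORDS = 10         # minimum word count per segment
MAX_DURATION_S = 20.0  # maximum segment duration in seconds
TARGET_SR = 16_000     # target sampling rate (16 kHz)

def keep_segment(transcript: str, audio_path: str) -> bool:
    """Apply the word-count and duration criteria described above."""
    if len(transcript.split()) < MIN_WORDS:
        return False
    # librosa >= 0.10 uses `path=`; older versions use `filename=`.
    if librosa.get_duration(path=audio_path) > MAX_DURATION_S:
        return False
    return True

def load_segment(audio_path: str):
    """Load a segment and down-sample it to 16 kHz."""
    audio, sr = librosa.load(audio_path, sr=TARGET_SR)
    return audio, sr
```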
Full experiment details will be added soon.
Fine-Tuning Details
Fine-tuning was performed on a single A100 80GB GPU.
Training Hyperparameters
The following hyperparameters were used:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH (AdamW)
  - betas: (0.9, 0.99)
  - epsilon: 1e-07
  - no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
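For reference, an equivalent Hugging Face `TrainingArguments` configuration is sketched below. This is illustrative only; the output directory and the `bf16` flag are assumptions, and the actual training script may differ.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./phi4-mm-inst-asr-singlish",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    bf16=True,  # assumption: bfloat16 mixed precision on the A100
)
```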
Benchmark Performance
We evaluated Phi-4-mm-inst-asr-singlish on the following datasets:
- SASRBench-v1: A benchmark dataset for evaluating ASR performance on Singlish.
- AMI: A widely used dataset for meeting transcription and diarization tasks. This work specifically uses the IHM (Individual Headset Microphone) recordings.
- GigaSpeech: A large-scale open-source dataset with diverse English audio, covering read, conversational, and spontaneous speech.
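Reported numbers are word error rates (WER). As a minimal, illustrative sketch of how WER can be computed from reference and hypothesis transcripts (the exact text normalisation behind the reported numbers is not specified here), the `jiwer` package can be used:

```python
import jiwer

# References are ground-truth transcripts; hypotheses are model outputs.
references = ["the food at the hawker centre was very good"]
hypotheses = ["the food at the hawker center was very good"]

# jiwer.wer returns the aggregate word error rate over all pairs.
wer = jiwer.wer(references, hypotheses)
print(f"WER: {wer:.2%}")
```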
Model Performance
Rel. RTFx is each model's inference throughput (inverse real-time factor) relative to the base microsoft/Phi-4-multimodal-instruct on the same dataset; higher is faster.

| Dataset | Model | Rel. RTFx | WER |
|---|---|---|---|
| SASRBench-v1 | microsoft/Phi-4-multimodal-instruct | 1.00 | 33.00% |
| SASRBench-v1 | mjwong/Phi-4-mm-inst-asr-singlish | 1.03 | 13.16% |
| SASRBench-v1 | mjwong/whisper-large-v3-singlish | 2.60 | 16.41% |
| SASRBench-v1 | mjwong/whisper-large-v3-turbo-singlish | 6.13 | 13.35% |
| SASRBench-v1 | mjwong/whisper-large-v3-singlish + DRAFT | 5.72 | 14.84% |
| AMI | microsoft/Phi-4-multimodal-instruct | 1.00 | 14.74% |
| AMI | mjwong/Phi-4-mm-inst-asr-singlish | 1.11 | 20.23% |
| AMI | mjwong/whisper-large-v3-singlish | 1.14 | 23.72% |
| AMI | mjwong/whisper-large-v3-turbo-singlish | 1.75 | 16.99% |
| AMI | mjwong/whisper-large-v3-singlish + DRAFT | 2.59 | 22.06% |
| GigaSpeech | microsoft/Phi-4-multimodal-instruct | 1.00 | 24.65% |
| GigaSpeech | mjwong/Phi-4-mm-inst-asr-singlish | 1.20 | 10.34% |
| GigaSpeech | mjwong/whisper-large-v3-singlish | 2.03 | 13.15% |
| GigaSpeech | mjwong/whisper-large-v3-turbo-singlish | 3.97 | 11.54% |
| GigaSpeech | mjwong/whisper-large-v3-singlish + DRAFT | 4.81 | 12.81% |
Experimental Observations
Base vs. Fine-Tuned Behavior
- Base Model: Phi-4's generalist design allowed instruction-based transcription but lacked a robust stopping criterion. When prompted to generate a fixed number of tokens, it often continued past the audio's end, repeating or fabricating tokens until the `max_new_tokens` limit or an implicit end-of-sequence signal was reached.
- Fine-Tuned Model: Because the end-of-transcription marker was appended to every training target, the model learned task-specific stopping. Even with a high `max_new_tokens` setting, it reliably generated `<|end|><|endoftext|>` immediately after completing the actual transcription, avoiding extraneous output.
Behavior on Long Audio Clips
The output length remains bounded by `max_new_tokens`, irrespective of input duration. For clips requiring fewer tokens than the limit, the fine-tuned model cleanly stops at the marker. For longer clips, it produces a truncated but well-formed transcription up to the token limit, without failing or crashing.
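Although the fine-tuned model emits the marker on its own, the same stopping behaviour can also be enforced explicitly at inference time. The sketch below assumes `model`, `processor`, and `inputs` are prepared as in the usage example further down, and passes the marker tokens as additional end-of-sequence ids:

```python
# Resolve the ids of the end-of-transcription marker tokens.
stop_token_ids = processor.tokenizer.convert_tokens_to_ids(["<|end|>", "<|endoftext|>"])

# Generation halts as soon as either marker token is produced,
# even if max_new_tokens is set generously.
generate_ids = model.generate(
    **inputs,
    max_new_tokens=1200,
    eos_token_id=stop_token_ids,
)
```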
Conclusion
Fine-tuning Phi-4-mm-inst cuts its Singlish WER from 33.00% to 13.16%, closing the gap to our best-performing fine-tuned Whisper model (whisper-large-v3-turbo-singlish) and slightly surpassing it. While the absolute edge over Whisper is small, Phi-4's real value is that it combines near-state-of-the-art ASR with a full generative LLM in one package. For Singlish speakers, this means a single model that hears, understands, and responds natively, paving the way for voice-first agents that can reason, translate, or generate code without ever leaving the same context.
Disclaimer
While this model has been fine-tuned to better recognize Singlish, users may experience inaccuracies, biases, or unexpected outputs, particularly in challenging audio conditions or with speakers using non-standard variations. Use of this model is at your own risk; the developers and distributors are not liable for any consequences arising from its use. Please validate results before deploying in any sensitive or production environment.
How to use the model
For first-time use, you might need to install the additional libraries below:

```python
!pip install backoff
!sudo apt-get install -y cmake ninja-build
!pip install wheel

from pkg_resources import get_distribution, DistributionNotFound

package_name = 'flash_attn'

try:
    dist = get_distribution(package_name)
    print(f"'{package_name}' version {dist.version} is already installed.")
except DistributionNotFound:
    # Build and install flash-attn only if it is not already present.
    !MAX_JOBS=8 pip install flash-attn --no-build-isolation
```
The model can be loaded like so:
```python
import torch
import soundfile
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

model_path = "mjwong/Phi-4-mm-inst-asr-singlish"

processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype='auto',
    _attn_implementation='flash_attention_2',
).cuda()

generation_config = GenerationConfig.from_pretrained(model_path, 'generation_config.json')

# Chat-style prompt format expected by Phi-4-mm-inst: the audio placeholder
# is followed by the transcription instruction.
user_prompt = '<|user|>'
assistant_prompt = '<|assistant|>'
prompt_suffix = '<|end|>'

speech_prompt = "Based on the attached audio, generate a comprehensive text transcription of the spoken content."
prompt = f'{user_prompt}<|audio_1|>{speech_prompt}{prompt_suffix}{assistant_prompt}'
```
You can then transcribe audio clips of arbitrary length, with the output capped by `max_new_tokens` as noted above. As an illustration, the audio file `ignite.wav` can be downloaded from this link.
```python
# soundfile.read returns a (waveform, sampling_rate) tuple, which the
# processor accepts directly as an audio input.
audio = soundfile.read('./ignite.wav')

inputs = processor(text=prompt, audios=[audio], return_tensors='pt').to('cuda:0')

generate_ids = model.generate(
    **inputs,
    max_new_tokens=1200,
    generation_config=generation_config,
    num_logits_to_keep=1,
)

# Strip the prompt tokens and decode only the newly generated transcription.
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(
    generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]

print(response)
```
Contact
For more information, please reach out to [email protected].