CarelessWhisper - Causal Whisper Streaming Model

Causal Whisper Streaming is a fine-tuned version of OpenAI's Whisper that handles causal (streaming) audio and performs real-time transcription.

arXiv Demo on Hugging Face

πŸ“„ Paper

For more details, see our paper.

πŸ”§ Setup

We used Python 3.9.16, PyTorch 2.6.0, and PyTorch-Lightning 2.5.0 to train and test our models. Portions of this code are adapted from OpenAI's Whisper.

To set up the project environment using conda, follow these steps:

  1. Clone the repository
    git clone https://github.com/tomer9080/CarelessWhisper-streaming
    cd CarelessWhisper-streaming
    

πŸ’‘ Make sure you have Miniconda or Anaconda installed before proceeding.

  2. Create the conda environment

    conda env create -f environment.yml
    
  3. Activate the environment

    conda activate careless_whisper
    
  4. Install the appropriate PyTorch version
    Depending on your hardware and CUDA version, install PyTorch by following the instructions at https://pytorch.org/get-started/locally.
    This project was tested with CUDA 12.4, but it should also work with compatible earlier or later versions.

After installing all of the dependencies, you can try to run inference.
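
As a quick sanity check before moving on, you can verify that PyTorch sees your GPU and that the streaming package imports cleanly. This is a minimal sketch and assumes you run it from the repository root so that careless_whisper_stream is importable:

import torch
import careless_whisper_stream  # should import without errors from the repo root

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))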

πŸ€– Available Models

We fine-tuned three different sizes of Whisper, all supporting English-only transcription. A large-v2 model that was fine-tuned on multilingual data is also available; it supports English, French, Spanish, German, and Portuguese with a chunk size of 300 milliseconds.

Size     | Chunk Size [msec]       | Multilingual Chunk Size [msec]
base     | 40, 100, 200, 300       | N/A
small    | 40, 100, 200, 300, 1000 | N/A
large-v2 | 40, 100, 200, 300, 1000 | 300

🎀 Running Inference

To run inference, download the repository contents and run the commands below from the repository root, according to the following sections.

Note: The models are hosted on the Hugging Face Hub, which requires an access token.
Make sure you are logged in with your token to access the models.

How to Set Up Your Hugging Face πŸ€— Access Token

  1. Create a Hugging Face account (if you don’t have one) at https://huggingface.co/join.

  2. Generate an access token:

    • Go to your Hugging Face account settings: https://huggingface.co/settings/tokens
    • Click on "New token", give it a name, select the appropriate scopes (usually read is enough), and create it.
  3. Login using the Hugging Face CLI:
    Install the CLI if you don’t have it:

    pip install huggingface_hub
    

    Then login:

    huggingface-cli login
    

    Paste your token when prompted.
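
Alternatively, you can authenticate from Python instead of the CLI. This sketch uses the login helper from huggingface_hub; the token string is a placeholder you should replace with your own (or omit the argument to be prompted interactively):

from huggingface_hub import login

# Authenticate this machine with your Hugging Face access token.
# The token below is a placeholder; avoid hard-coding real tokens in scripts.
login(token="hf_your_token_here")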

πŸ–₯️ CLI Usage

The transcription model can be launched with the following command:

# Using a local microphone for streaming transcription, dumping the recording to out.wav
python transcribe.py \
--output_filename out.wav \
--channels 2 \
--model small \
--chunk_size 300 \
--device cuda \
--beam_size 5 \
--ca_kv_cache

A simulation of a stream on a wav file is also available:

# Simulating a stream on a wav file
python transcribe.py \
--model small \
--chunk_size 300 \
--device cuda \
--beam_size 5 \
--ca_kv_cache \
--wav_file /path/to/audio.wav \
--simulate_stream \
--use_latency

🐍 Python Usage

If you prefer using Python, a code snippet utilizing a microphone or a WAV file is provided below:

import torch
import careless_whisper_stream

model_size = "small" # model size
chunk_size = 300 # chunk size in milliseconds
multilingual = False # currently only large-v2 with a 300 msec chunk size supports languages other than English
device = "cuda" if torch.cuda.is_available() else "cpu"

model = careless_whisper_stream.load_streaming_model(name=model_size,
                                                   gran=chunk_size,
                                                   multilingual=multilingual,
                                                   device=device)

# using a local microphone recording 
texts_microphone = model.transcribe(output_filename="/path/to/dump/file.wav",
                         channels=2,
                         beam_size=5,
                         ca_kv_cache=True)

# Simulating on a wav file
texts_wav_simulation = model.transcribe(simulate_stream=True,
                                        wav_file="/path/to/file/you/want/to/transcribe.wav",
                                        beam_size=5,
                                        ca_kv_cache=True)
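
The snippets above assume transcribe returns the decoded text; since the exact return type is not spelled out here, the small follow-up below is written to work whether it is a single string or a list of per-chunk segments:

# Save the streamed transcription to disk.
# Assumption: the result is either a string or an iterable of text segments.
result = texts_wav_simulation
with open("transcript.txt", "w", encoding="utf-8") as f:
    if isinstance(result, str):
        f.write(result)
    else:
        f.write("\n".join(str(segment) for segment in result))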

🦾 Training

To train using LoRA, you can use our existing training code. Make sure all of the requirements are installed.

πŸ“‚ Dataset Structure

Before starting model training using the command-line interface provided below, you must first configure your dataset dictionary file located at training_code/ds_dict.py.

This file defines a Python dictionary named ds_paths, where you should specify paths to the train, val, and test partitions of your dataset. Each partition should be a CSV file with the following three columns:

  1. wav_path β€” Path to the WAV audio file.
  2. tg_path β€” Path to the corresponding .TextGrid file containing forced alignment.
  3. raw_text β€” Ground truth transcription.

Note: The dictionary key (i.e., the name of the dataset) will be used by the training script to identify and load the dataset correctly.

You can find an example entry in training_code/ds_dict.py.
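
For illustration, here is a hedged sketch of what an entry in ds_paths might look like. The dataset key matches the --dataset flag used in the CLI example below; the paths are placeholders and the exact nesting may differ from the real file, so treat the repository's example entry as the authoritative reference:

# training_code/ds_dict.py (illustrative sketch only; adapt to your setup)
# Each CSV is expected to contain the columns: wav_path, tg_path, raw_text.
ds_paths = {
    "LIBRI-960-ALIGNED": {
        "train": "/data/librispeech/train_960_aligned.csv",
        "val": "/data/librispeech/dev_aligned.csv",
        "test": "/data/librispeech/test_aligned.csv",
    },
}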

πŸ–₯️ CLI Interface

python training_code/train.py \
--lora \
--streaming_train \
--simulate_stream \
--dataset LIBRI-960-ALIGNED \
--name example_training_base_model \
--size base \
--batch_size 32 \
--epochs 10 \
--learning_rate 1e-5 \
--rank 32 \
--gran 15 \
--extra_gran_blocks 1 \
--streaming_fraction 0.25 \
--top_k 5 \

For more options and training configurations, run:

python training_code/train.py --help

πŸ“œ License

This repository uses a dual license:

MIT License
Portions derived from OpenAI Whisper are licensed under the MIT License.

CC BY-NC 4.0 License
All other original code in this repository is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

See the LICENSE file for full details.


Evaluation results

All results are self-reported Word Error Rate (WER) and Aligned-Relative Word Error Rate (ARWER).

  • WER on test-clean: 5.29% | ARWER on test-clean: 6.00%
  • WER on test-other: 10.74% | ARWER on test-other: 11.38%
  • WER on test-clean: 5.92% | ARWER on test-clean: 6.63%
  • WER on test-other: 11.41% | ARWER on test-other: 12.60%
  • WER on test-clean: 6.33% | ARWER on test-clean: 7.76%