chunk_id | chunk_content | filename
---|---|---|
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_5 | fy the model name (either a Hub model repository id or a path to a directory containing the model), language and language abbreviation to train on, the task type, and the dataset name:
Copied
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
model_name_or_path = "openai/whisper-large-v2"
language = "Marathi"
language_abbr = "mr"
task = "transcribe"
dataset_name = "mozilla-foundation/common_voice_11_0"
You can also log in to your Hugging Fa | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_6 | age_abbr = "mr"
task = "transcribe"
dataset_name = "mozilla-foundation/common_voice_11_0"
You can also log in to your Hugging Face account to save and share your trained model on the Hub if you’d like:
Copied
from huggingface_hub import notebook_login
notebook_login()
Load dataset and metric
The Common Voice 11.0 dataset contains many hours of recorded speech in many different languages. This guide uses the Marathi language as an example | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_7 | 0 dataset contains many hours of recorded speech in many different languages. This guide uses the Marathi language as an example, but feel free to use any other language you’re interested in.
Initialize a DatasetDict structure, and load the train (combining the train and validation splits into train) and test splits from the dataset into it:
Copied
from datasets import load_dataset, DatasetDict
common_voice = D | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_8 | he dataset into it:
Copied
from datasets import load_dataset, DatasetDict
common_voice = DatasetDict()
common_voice["train"] = load_dataset(dataset_name, language_abbr, split="train+validation", use_auth_token=True)
common_voice["test"] = load_dataset(dataset_name, language_abbr, split="test", use_auth_token=True)
common_voice["train"][0]
Preprocess dataset
Let’s prepare the dataset for training. Load a | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_9 | split="test", use_auth_token=True)
common_voice["train"][0]
Preprocess dataset
Let’s prepare the dataset for training. Load a feature extractor, tokenizer, and processor. You should also pass the language and task to the tokenizer and processor so they know how to process the inputs:
Copied
from transformers import AutoFeatureExtractor, AutoTokenizer, AutoProcessor
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name_or_pa | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_10 | rt AutoFeatureExtractor, AutoTokenizer, AutoProcessor
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, language=language, task=task)
processor = AutoProcessor.from_pretrained(model_name_or_path, language=language, task=task)
You’ll only be training on the sentence and audio columns, so you can remove the rest of the metadata with remove_columns:
Copied | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_11 | ll only be training on the sentence and audio columns, so you can remove the rest of the metadata with remove_columns:
Copied
common_voice = common_voice.remove_columns(
["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"]
)
common_voice["train"][0]
{
"audio": {
"path": "/root/.cache/huggingface/datasets/downloads/extracted/f7e1ef6a2d14f20194999aad5040c5d4bb3ead1377de3e1bbc6e9dba34d1 | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_12 | "path": "/root/.cache/huggingface/datasets/downloads/extracted/f7e1ef6a2d14f20194999aad5040c5d4bb3ead1377de3e1bbc6e9dba34d18a8a/common_voice_mr_30585613.mp3",
"array": array(
[1.13686838e-13, -1.42108547e-13, -1.98951966e-13, ..., 4.83472422e-06, 3.54798703e-06, 1.63231743e-06]
),
"sampling_rate": 48000,
},
"sentence": "आईचे आजारपण वाढत चालले, तसतशी मथीही नीट खातपीतनाशी झाली.",
}
If you look at t | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_13 | "sampling_rate": 48000,
},
"sentence": "आईचे आजारपण वाढत चालले, तसतशी मथीही नीट खातपीतनाशी झाली.",
}
If you look at the sampling_rate, you’ll see the audio was sampled at 48kHz. The Whisper model was pretrained on audio inputs at 16kHz, which means you’ll need to downsample the audio inputs to match what the model was pretrained on. Downsample the audio by using the cast_column method on the audio column, and set the sampling_rate to | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_14 | model was pretrained on. Downsample the audio by using the cast_column method on the audio column, and set the sampling_rate to 16kHz. The audio input is resampled on the fly the next time you call it:
Copied
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
common_voice["train"][0]
{
"audio": {
"path": "/root/.cache/huggingface/datasets/downloads/extracted/f7e1ef6a2d14f20194 | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_15 | mon_voice["train"][0]
{
"audio": {
"path": "/root/.cache/huggingface/datasets/downloads/extracted/f7e1ef6a2d14f20194999aad5040c5d4bb3ead1377de3e1bbc6e9dba34d18a8a/common_voice_mr_30585613.mp3",
"array": array(
[-3.06954462e-12, -3.63797881e-12, -4.54747351e-12, ..., -7.74800901e-06, -1.74738125e-06, 4.36312439e-06]
),
"sampling_rate": 16000,
},
"sentence": "आईचे आजारपण वाढत चालले, तसतशी मथ | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_16 | 4738125e-06, 4.36312439e-06]
),
"sampling_rate": 16000,
},
"sentence": "आईचे आजारपण वाढत चालले, तसतशी मथीही नीट खातपीतनाशी झाली.",
}
Once you’ve cleaned up the dataset, you can write a function to generate the correct model inputs. The function should:
Resample the audio inputs to 16kHz by loading the audio column.
Compute the input features from the audio array using the feature extractor.
Tokenize the sentence column t | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_17 | ng the audio column.
Compute the input features from the audio array using the feature extractor.
Tokenize the sentence column to the input labels.
Copied
def prepare_dataset(batch):
audio = batch["audio"]
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
Apply the prepare_dataset function to | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_18 | _features[0]
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
Apply the prepare_dataset function to the dataset with the map function, and set the num_proc argument to 2 to enable multiprocessing (if map hangs, then set num_proc=1):
Copied
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=2)
Finally, create a DataCollator class to pad the labels in each b | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_19 | remove_columns=common_voice.column_names["train"], num_proc=2)
Finally, create a DataCollator class to pad the labels in each batch to the maximum length, and replace padding with -100 so they’re ignored by the loss function. Then initialize an instance of the data collator:
Copied
import torch
from dataclasses import dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_20 | port dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
label_fe | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_21 | feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
label_features = [{"input_ids": feature["labels"]} for feature in features]
labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
if (labels[:, 0] == self.processor.tokenizer.bos_token_id | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_22 | ut_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
Train
Now that the dataset is ready, you can turn your attention to the model. Start by loading the pretrained openai/whisper-la | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_23 | ain
Now that the dataset is ready, you can turn your attention to the model. Start by loading the pretrained openai/whisper-large-v2 model from AutoModelForSpeechSeq2Seq, and make sure to set the load_in_8bit argument to True to enable int8 quantization. The device_map="auto" argument automatically determines how to load and store the model weights:
Copied
from transformers import AutoModelForSpeechSeq2Seq
model = AutoModelForSpeechSeq2Seq. | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_24 | ad and store the model weights:
Copied
from transformers import AutoModelForSpeechSeq2Seq
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_name_or_path, load_in_8bit=True, device_map="auto")
You should configure forced_decoder_ids=None because no tokens are used before sampling, and you won’t need to suppress any tokens during generation either:
Copied
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
To get | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_25 | s any tokens during generation either:
Copied
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
To get the model ready for int8 quantization, use the utility function prepare_model_for_int8_training to handle the following:
casts all the non-int8 modules to full precision (fp32) for stability
adds a forward hook to the input embedding layer to calculate the gradients of the input hidden states
enables gradient checkpoi | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_26 | adds a forward hook to the input embedding layer to calculate the gradients of the input hidden states
enables gradient checkpointing for more memory-efficient training
Copied
from peft import prepare_model_for_int8_training
model = prepare_model_for_int8_training(model)
Let’s also apply LoRA to the training to make it even more efficient. Load a LoraConfig and configure the following parameters:
r, the dimension of the low-rank matrices
lo | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_27 | e it even more efficient. Load a LoraConfig and configure the following parameters:
r, the dimension of the low-rank matrices
lora_alpha, scaling factor for the weight matrices
target_modules, the name of the attention matrices to apply LoRA to (q_proj and v_proj, or query and value in this case)
lora_dropout, dropout probability of the LoRA layers
bias, set to none
💡 The weight matrix is scaled by lora_alpha/r, and a higher lora_alpha value as | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_28 | probability of the LoRA layers
bias, set to none
💡 The weight matrix is scaled by lora_alpha/r, and a higher lora_alpha value assigns more weight to the LoRA activations. For performance, we recommend setting bias to "none" first, and then "lora_only", before trying "all".
Copied
from peft import LoraConfig, get_peft_model
config = LoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"], lora_dropout= | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
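To make the lora_alpha/r scaling concrete, here is a minimal, hypothetical sketch of how a LoRA update modifies a frozen weight matrix; the shapes and variable names are illustrative only and are not part of the original guide.
import torch
# Hypothetical illustration: the effective weight is W + (lora_alpha / r) * B @ A.
d, r, lora_alpha = 1280, 32, 64
W = torch.randn(d, d)          # frozen pretrained weight (not updated)
A = torch.randn(r, d) * 0.01   # trainable low-rank factor
B = torch.zeros(d, r)          # trainable low-rank factor, initialized to zero
scaling = lora_alpha / r       # = 2.0 here; a larger lora_alpha gives the LoRA update more weight
W_eff = W + scaling * (B @ A)  # weight effectively used in the forward pass
Because B starts at zero, the wrapped model initially behaves exactly like the base model, and only the small A and B matrices receive gradient updates.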
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_29 | raModel, LoraConfig, get_peft_model
config = LoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none")
After you set up the LoraConfig, wrap it and the base model with the get_peft_model() function to create a PeftModel. Print out the number of trainable parameters to see how much more efficient LoRA is compared to fully training the model!
Copied
model = get_peft_model(model, config)
model.print_t | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_30 | much more efficient LoRA is compared to fully training the model!
Copied
model = get_peft_model(model, config)
model.print_trainable_parameters()
"trainable params: 15728640 || all params: 1559033600 || trainable%: 1.0088711365810203"
Now you’re ready to define some training hyperparameters in the Seq2SeqTrainingArguments class, such as where to save the model to, batch size, learning rate, and number of epochs to train for. The PeftModel d | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_31 | guments class, such as where to save the model to, batch size, learning rate, and number of epochs to train for. The PeftModel doesn’t have the same signature as the base model, so you’ll need to explicitly set remove_unused_columns=False and label_names=["labels"].
Copied
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="your-name/int8-whisper-large-v2-asr",
per_device_train_bat | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_32 | uments
training_args = Seq2SeqTrainingArguments(
output_dir="your-name/int8-whisper-large-v2-asr",
per_device_train_batch_size=8,
gradient_accumulation_steps=1,
learning_rate=1e-3,
warmup_steps=50,
num_train_epochs=3,
evaluation_strategy="epoch",
fp16=True,
per_device_eval_batch_size=8,
generation_max_length=128,
logging_steps=25,
remove_unused_columns=False,
label_names=["labels"],
)
It is a | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_33 | e=8,
generation_max_length=128,
logging_steps=25,
remove_unused_columns=False,
label_names=["labels"],
)
It is also a good idea to write a custom TrainerCallback to save model checkpoints during training:
Copied
from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR
from transformers import TrainerCallback, TrainingArguments, TrainerState, TrainerControl
class SavePeftModelCallback(TrainerCallback):
def on_save(
self,
args: TrainingArguments,
state: TrainerState,
control | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_34 | k(TrainerCallback):
def on_save(
self,
args: TrainingArguments,
state: TrainerState,
control: TrainerControl,
**kwargs,
):
checkpoint_folder = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}")
peft_model_path = os.path.join(checkpoint_folder, "adapter_model")
kwargs["model"].save_pretrained(peft_model_path)
pytorch_model_path = os.path. | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_35 | ckpoint_folder, "adapter_model")
kwargs["model"].save_pretrained(peft_model_path)
pytorch_model_path = os.path.join(checkpoint_folder, "pytorch_model.bin")
if os.path.exists(pytorch_model_path):
os.remove(pytorch_model_path)
return control
Pass the Seq2SeqTrainingArguments, model, datasets, data collator, tokenizer, and callback to the Seq2SeqTrainer. You can optionally set model.config.use_cache = F | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_36 | model, datasets, data collator, tokenizer, and callback to the Seq2SeqTrainer. You can optionally set model.config.use_cache = False to silence any warnings. Once everything is ready, call train to start training!
Copied
from transformers import Seq2SeqTrainer, TrainerCallback, Seq2SeqTrainingArguments, TrainerState, TrainerControl
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
| 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_37 | , TrainerControl
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
tokenizer=processor.feature_extractor,
callbacks=[SavePeftModelCallback],
)
model.config.use_cache = False
trainer.train()
Evaluate
Word error rate (WER) is a common metric for evaluating ASR models. Load the WER metric from 🤗 Evaluate:
| 3348d37e0cea1a003c3eb2670c82d8c3.txt |
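As a quick illustration of what WER measures, here is a hedged toy example that is not part of the original guide; it only assumes the 🤗 Evaluate library is installed.
import evaluate
wer_metric = evaluate.load("wer")
# One substituted word out of a three-word reference gives a WER of 1/3.
print(wer_metric.compute(predictions=["the cat sat"], references=["the cat slept"]))  # ~0.333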
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_38 | .train()
Evaluate
Word error rate (WER) is a common metric for evaluating ASR models. Load the WER metric from 🤗 Evaluate:
Copied
import evaluate
metric = evaluate.load("wer")
Write a loop to evaluate the model performance. Set the model to evaluation mode first, and write the loop with torch.cuda.amp.autocast() because int8 training requires autocasting. Then, pass a batch of examples to the model to evaluate. Get the decoded prediction | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_39 | because int8 training requires autocasting. Then, pass a batch of examples to the model to evaluate. Get the decoded predictions and labels, and add them as a batch to the WER metric before calling compute to get the final WER score:
Copied
from torch.utils.data import DataLoader
from tqdm import tqdm
import numpy as np
import gc
eval_dataloader = DataLoader(common_voice["test"], batch_size=8, collate_fn=data_collator)
model.eval()
for st | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_40 | as np
import gc
eval_dataloader = DataLoader(common_voice["test"], batch_size=8, collate_fn=data_collator)
model.eval()
for step, batch in enumerate(tqdm(eval_dataloader)):
with torch.cuda.amp.autocast():
with torch.no_grad():
generated_tokens = (
model.generate(
input_features=batch["input_features"].to("cuda"),
decoder_input_ids=batch["labels"][:, :4].to("cuda") | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_41 | input_features=batch["input_features"].to("cuda"),
decoder_input_ids=batch["labels"][:, :4].to("cuda"),
max_new_tokens=255,
)
.cpu()
.numpy()
)
labels = batch["labels"].cpu().numpy()
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(generated_tokens, | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_42 | = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
metric.add_batch(
predictions=decoded_preds,
references=decoded_labels,
)
del generated_tokens, labels, batch
gc.collect()
wer = 100 * metric.co | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_43 | references=decoded_labels,
)
del generated_tokens, labels, batch
gc.collect()
wer = 100 * metric.compute()
print(f"{wer=}")
Share model
Once you’re happy with your results, you can upload your model to the Hub with the push_to_hub method:
Copied
model.push_to_hub("your-name/int8-whisper-large-v2-asr")
Inference
Let’s test the model out now!
Instantiate the model configuration from PeftConfig, and from here, | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_44 | er-large-v2-asr")
Inference
Let’s test the model out now!
Instantiate the model configuration from PeftConfig, and from here, you can use the configuration to load the base model and PeftModel, tokenizer, processor, and feature extractor. Remember to define the language and task in the tokenizer, processor, and forced_decoder_ids:
Copied
from peft import PeftModel, PeftConfig
peft_model_id = "smangrul/openai-whisper-large-v2-LORA-colab"
langua | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_45 | oder_ids:
Copied
from transformers import WhisperForConditionalGeneration, WhisperTokenizer, WhisperProcessor
from peft import PeftModel, PeftConfig
peft_model_id = "smangrul/openai-whisper-large-v2-LORA-colab"
language = "Marathi"
task = "transcribe"
peft_config = PeftConfig.from_pretrained(peft_model_id)
model = WhisperForConditionalGeneration.from_pretrained(
peft_config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = WhisperTokenizer.from_pr | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_46 | d_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = WhisperTokenizer.from_pretrained(peft_config.base_model_name_or_path, language=language, task=task)
processor = WhisperProcessor.from_pretrained(peft_config.base_model_name_or_path, language=language, task=task)
feature_extractor = processor.feature_extractor
forced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task=task)
| 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_47 | ure_extractor = processor.feature_extractor
forced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task=task)
Load an audio sample to transcribe (you can listen to it in the Dataset Preview), and instantiate the AutomaticSpeechRecognitionPipeline:
Copied
from transformers import AutomaticSpeechRecognitionPipeline
audio = "https://huggingface.co/datasets/stevhliu/dummy/resolve/main/mrt_01523_00028548203.wav"
pipeline = AutomaticSpeechR | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_48 | ine
audio = "https://huggingface.co/datasets/stevhliu/dummy/resolve/main/mrt_01523_00028548203.wav"
pipeline = AutomaticSpeechRecognitionPipeline(model=model, tokenizer=tokenizer, feature_extractor=feature_extractor)
Then use the pipeline with autocast as a context manager on the audio sample:
Copied
with torch.cuda.amp.autocast():
text = pipeline(audio, generate_kwargs={"forced_decoder_ids": forced_decoder_ids}, max_new_tokens=255)["text"] | 3348d37e0cea1a003c3eb2670c82d8c3.txt |
3348d37e0cea1a003c3eb2670c82d8c3.txt_chunk_49 | a.amp.autocast():
text = pipeline(audio, generate_kwargs={"forced_decoder_ids": forced_decoder_ids}, max_new_tokens=255)["text"]
text
"मी तुमच्यासाठी काही करू शकतो का?"
| 3348d37e0cea1a003c3eb2670c82d8c3.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_1 | Semantic segmentation using LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, to finetune a SegFormer model variant for semantic segmentation.
By using LoRA from 🤗 PEFT, we can reduce the number of trainable parameters in the SegFormer model to only 14% of the original trainable parameters.
LoRA achieves this reduction by adding low-rank “update matrices” to specific blocks of the model, such as the attention
b | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_2 | ters.
LoRA achieves this reduction by adding low-rank “update matrices” to specific blocks of the model, such as the attention
blocks. During fine-tuning, only these matrices are trained, while the original model parameters are left unchanged.
At inference time, the update matrices are merged with the original model parameters to produce the final classification result.
For more information on LoRA, please refer to the original LoRA paper.
Ins | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
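As a rough sketch of what merging means in practice (assuming a LoRA-wrapped PEFT model named lora_model; this snippet is illustrative and not part of the original guide), the update matrices can be folded back into the base weights so inference adds no extra latency:
# Conceptually, merging replaces each adapted weight W with W + (lora_alpha / r) * B @ A.
# PEFT exposes this through merge_and_unload(), which returns the base model with merged weights.
merged_model = lora_model.merge_and_unload()  # hypothetical lora_model obtained from get_peft_model()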
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_3 | rameters to produce the final classification result.
For more information on LoRA, please refer to the original LoRA paper.
Install dependencies
Install the libraries required for model training:
Copied
!pip install transformers accelerate evaluate datasets peft -q
Authenticate to share your model
To share the finetuned model with the community at the end of the training, authenticate using your 🤗 token.
You can obtain your token from | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_4 | finetuned model with the community at the end of the training, authenticate using your 🤗 token.
You can obtain your token from your account settings.
Copied
from huggingface_hub import notebook_login
notebook_login()
Load a dataset
To ensure that this example runs within a reasonable time frame, here we are limiting the number of instances from the training
set of the SceneParse150 dataset to 150.
Copied
from datasets import load_da | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_5 | iting the number of instances from the training
set of the SceneParse150 dataset to 150.
Copied
from datasets import load_dataset
ds = load_dataset("scene_parse_150", split="train[:150]")
Next, split the dataset into train and test sets.
Copied
ds = ds.train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]
Prepare label maps
Create a dictionary that maps a label id to a label class, which will be useful when set | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_6 | t_ds = ds["test"]
Prepare label maps
Create a dictionary that maps a label id to a label class, which will be useful when setting up the model later:
label2id: maps the semantic classes of the dataset to integer ids.
id2label: maps integer ids back to the semantic classes.
Copied
import json
from huggingface_hub import cached_download, hf_hub_url
repo_id = "huggingface/label-files"
filename = "ade20k-hf-doc-builder.json"
id2label = json | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_7 | import cached_download, hf_hub_url
repo_id = "huggingface/label-files"
filename = "ade20k-hf-doc-builder.json"
id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
Prepare datasets for training and evaluation
Next, load the SegFormer image processor to prepare the imag | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_8 | els = len(id2label)
Prepare datasets for training and evaluation
Next, load the SegFormer image processor to prepare the images and annotations for the model. This dataset uses the
zero-index as the background class, so make sure to set reduce_labels=True to subtract one from all labels since the
background class is not among the 150 classes.
Copied
from transformers import AutoImageProcessor
checkpoint = "nvidia/mit-b0"
image_processor | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_9 | not among the 150 classes.
Copied
from transformers import AutoImageProcessor
checkpoint = "nvidia/mit-b0"
image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
Add a function to apply data augmentation to the images, so that the model is more robust against overfitting. Here we use the
ColorJitter function from
torchvision to randomly change the color properties of an image.
Copied
from torchvision.trans | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_10 | the
ColorJitter function from
torchvision to randomly change the color properties of an image.
Copied
from torchvision.transforms import ColorJitter
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
Add a function to handle grayscale images and ensure that each input image has three color channels, regardless of
whether it was originally grayscale or RGB. The function converts RGB images to array as is, and for | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_11 | lor channels, regardless of
whether it was originally grayscale or RGB. The function converts RGB images to an array as is, and for grayscale images
that have only one color channel, the function replicates the same channel three times using np.tile() before converting
the image into an array.
Copied
import numpy as np
from PIL import Image
def handle_grayscale_image(image):
np_image = np.array(image)
if np_image.ndim == 2:
tiled_image = np.tile(np | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_12 |
def handle_grayscale_image(image):
np_image = np.array(image)
if np_image.ndim == 2:
tiled_image = np.tile(np.expand_dims(np_image, -1), 3)
return Image.fromarray(tiled_image)
else:
return Image.fromarray(np_image)
Finally, combine everything in two functions that you’ll use to transform training and validation data. The two functions
are similar except data augmentation is applied only to the training dat | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_13 | ansform training and validation data. The two functions
are similar except data augmentation is applied only to the training data.
Copied
from PIL import Image
def train_transforms(example_batch):
images = [jitter(handle_grayscale_image(x)) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
def val_transforms(example_batch):
image | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_14 | tch["annotation"]]
inputs = image_processor(images, labels)
return inputs
def val_transforms(example_batch):
images = [handle_grayscale_image(x) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
To apply the preprocessing functions over the entire dataset, use the 🤗 Datasets set_transform function:
Copied
train_ds.set_transform(t | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_15 | e preprocessing functions over the entire dataset, use the 🤗 Datasets set_transform function:
Copied
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
Create evaluation function
Including a metric during training is helpful for evaluating your model’s performance. You can load an evaluation
method with the 🤗 Evaluate library. For this task, use
the mean Intersection over Union (IoU) metric (see the 🤗 Evaluate
| ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_16 | uation
method with the 🤗 Evaluate library. For this task, use
the mean Intersection over Union (IoU) metric (see the 🤗 Evaluate
quick tour to learn more about how to load and compute a metric):
Copied
import torch
from torch import nn
import evaluate
metric = evaluate.load("mean_iou")
def compute_metrics(eval_pred):
with torch.no_grad():
logits, labels = eval_pred
logits_tensor = torch.from_numpy(logits)
logits | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_17 | d):
with torch.no_grad():
logits, labels = eval_pred
logits_tensor = torch.from_numpy(logits)
logits_tensor = nn.functional.interpolate(
logits_tensor,
size=labels.shape[-2:],
mode="bilinear",
align_corners=False,
).argmax(dim=1)
pred_labels = logits_tensor.detach().cpu().numpy()
# currently using _compute instead of compute
# see this i | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_18 | pred_labels = logits_tensor.detach().cpu().numpy()
# currently using _compute instead of compute
# see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
metrics = metric._compute(
predictions=pred_labels,
references=labels,
num_labels=len(id2label),
ignore_index=0,
reduce_labels=image_processor.reduce_labels,
| ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_19 | ls,
num_labels=len(id2label),
ignore_index=0,
reduce_labels=image_processor.reduce_labels,
)
per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
per_category_iou = metrics.pop("per_category_iou").tolist()
metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
metrics.update({f"iou_{id2label[i]}": v for i, v in enu | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_20 | {id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
return metrics
Load a base model
Before loading a base model, let’s define a helper function to check the total number of parameters a model has, as well
as how many of them are trainable.
Copied
def print_trainable_parameters(model):
"""
Prints the number of trainable | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_21 | l
as how many of them are trainable.
Copied
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_22 | trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}"
)
Choose a base model checkpoint. For this example, we use the SegFormer B0 variant.
In addition to the checkpoint, pass the label2id and id2label dictionaries to let the AutoModelForSemanticSegmentation class know that we’re
interested in a custom base mode | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_23 | 2id and id2label dictionaries to let the AutoModelForSemanticSegmentation class know that we’re
interested in a custom base model where the decoder head should be randomly initialized using the classes from the custom dataset.
Copied
from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, label2id=label2id, ignore_misma | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_24 | er
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)
print_trainable_parameters(model)
At this point you can check with the print_trainable_parameters helper function that 100% of the parameters in the base
model (aka model) are trainable.
Wrap the base model as a PeftModel for LoRA training
To leverage the LoRA method, you need to wrap the base model | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_25 | trainable.
Wrap the base model as a PeftModel for LoRA training
To leverage the LoRA method, you need to wrap the base model as a PeftModel. This involves two steps:
Defining the LoRA configuration with LoraConfig
Wrapping the original model with get_peft_model() using the config defined in the step above.
Copied
from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=32,
lora_alpha=32,
target_modules=["query", "value | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_26 | om peft import LoraConfig, get_peft_model
config = LoraConfig(
r=32,
lora_alpha=32,
target_modules=["query", "value"],
lora_dropout=0.1,
bias="lora_only",
modules_to_save=["decode_head"],
)
lora_model = get_peft_model(model, config)
print_trainable_parameters(lora_model)
Let’s review the LoraConfig. To enable the LoRA technique, we must define the target modules within LoraConfig so that
PeftModel can update the necessary m | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_27 | nfig. To enable LoRA technique, we must define the target modules within LoraConfig so that
PeftModel can update the necessary matrices. Specifically, we want to target the query and value matrices in the
attention blocks of the base model. These matrices are identified by their respective names, “query” and “value”.
Therefore, we should specify these names in the target_modules argument of LoraConfig.
After we wrap our base model model with Pe | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_28 | herefore, we should specify these names in the target_modules argument of LoraConfig.
After we wrap our base model model with PeftModel along with the config, we get
a new model where only the LoRA parameters are trainable (so-called “update matrices”) while the pre-trained parameters
are kept frozen. These include the randomly initialized classifier parameters too. This is NOT what we want
when fine-tuning the base model on our cu | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_29 | parameters of the randomly initialized classifier parameters too. This is NOT we want
when fine-tuning the base model on our custom dataset. To ensure that the classifier parameters are also trained, we
specify modules_to_save. This also ensures that these modules are serialized alongside the LoRA trainable parameters
when using utilities like save_pretrained() and push_to_hub().
In addition to specifying the target_modules within LoraConfig, | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_30 |
when using utilities like save_pretrained() and push_to_hub().
In addition to specifying the target_modules within LoraConfig, we also need to specify the modules_to_save. When
we wrap our base model with PeftModel and pass the configuration, we obtain a new model in which only the LoRA parameters
are trainable, while the pre-trained parameters and the randomly initialized classifier parameters are kept frozen.
However, we do want to train the | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_31 | the pre-trained parameters and the randomly initialized classifier parameters are kept frozen.
However, we do want to train the classifier parameters. By specifying the modules_to_save argument, we ensure that the
classifier parameters are also trainable, and they will be serialized alongside the LoRA trainable parameters when we
use utility functions like save_pretrained() and push_to_hub().
Let’s review the rest of the parameters:
r: The dim | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_32 | ters when we
use utility functions like save_pretrained() and push_to_hub().
Let’s review the rest of the parameters:
r: The dimension used by the LoRA update matrices.
lora_alpha: Scaling factor for the LoRA update matrices.
bias: Specifies if the bias parameters should be trained. "none" denotes that none of the bias parameters will be trained.
When all is configured, and the base model is wrapped, the print_trainable_parameters helper function lets us explore
the number of trainabl | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_33 | configured, and the base model is wrapped, the print_trainable_parameters helper function lets us explore
the number of trainable parameters. Since we’re interested in performing parameter-efficient fine-tuning,
we should expect to see a lower number of trainable parameters from the lora_model in comparison to the original model
which is indeed the case here.
You can also manually verify what modules are trainable in the lora_model.
Copied
f | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_34 | inal model
which is indeed the case here.
You can also manually verify what modules are trainable in the lora_model.
Copied
for name, param in lora_model.named_parameters():
if param.requires_grad:
print(name, param.shape)
This confirms that only the LoRA parameters appended to the attention blocks and the decode_head parameters are trainable.
Train the model
Start by defining your training hyperparameters in TrainingArguments | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_35 | he decode_head parameters are trainable.
Train the model
Start by defining your training hyperparameters in TrainingArguments. You can change the values of most parameters however
you prefer. Make sure to set remove_unused_columns=False, otherwise the image column will be dropped, and it’s required here.
The only other required parameter is output_dir which specifies where to save your model.
At the end of each epoch, the Trainer will evalua | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_36 | her required parameter is output_dir which specifies where to save your model.
At the end of each epoch, the Trainer will evaluate the IoU metric and save the training checkpoint.
Note that this example is meant to walk you through the workflow when using PEFT for semantic segmentation. We didn’t
perform extensive hyperparameter tuning to achieve optimal results.
Copied
model_name = checkpoint.split("/")[-1]
training_args = TrainingArgument | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_37 | rparameter tuning to achieve optimal results.
Copied
model_name = checkpoint.split("/")[-1]
training_args = TrainingArguments(
output_dir=f"{model_name}-scene-parse-150-lora",
learning_rate=5e-4,
num_train_epochs=50,
per_device_train_batch_size=4,
per_device_eval_batch_size=2,
save_total_limit=3,
evaluation_strategy="epoch",
save_strategy="epoch",
logging_steps=5,
remove_unused_columns=False,
push | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_38 | it=3,
evaluation_strategy="epoch",
save_strategy="epoch",
logging_steps=5,
remove_unused_columns=False,
push_to_hub=True,
label_names=["labels"],
)
Pass the training arguments to Trainer along with the model, dataset, and compute_metrics function.
Call train() to finetune your model.
Copied
trainer = Trainer(
model=lora_model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
comput | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_39 | rainer = Trainer(
model=lora_model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
trainer.train()
Save the model and run inference
Use the save_pretrained() method of the lora_model to save the LoRA-only parameters locally.
Alternatively, use the push_to_hub() method to upload these parameters directly to the Hugging Face Hub
(as shown in the Image classification us | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_40 | se the push_to_hub() method to upload these parameters directly to the Hugging Face Hub
(as shown in the Image classification using LoRA task guide).
Copied
model_id = "segformer-scene-parse-150-lora"
lora_model.save_pretrained(model_id)
We can see that the LoRA-only parameters are just 2.2 MB in size! This greatly improves the portability when using very large models.
Copied
!ls -lh {model_id}
total 2.2M
-rw-r--r-- 1 root root 369 Feb | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_41 | improves the portability when using very large models.
Copied
!ls -lh {model_id}
total 2.2M
-rw-r--r-- 1 root root 369 Feb 8 03:09 adapter_config.json
-rw-r--r-- 1 root root 2.2M Feb 8 03:09 adapter_model.bin
Let’s now prepare an inference_model and run inference.
Copied
from peft import PeftConfig, PeftModel
config = PeftConfig.from_pretrained(model_id)
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_42 | eftConfig.from_pretrained(model_id)
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)
inference_model = PeftModel.from_pretrained(model, model_id)
Get an image:
Copied
import requests
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png"
image = Image.open(requests.get(url, stream=True).raw)
im | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_43 | /huggingface/documentation-images/resolve/main/semantic-seg-image.png"
image = Image.open(requests.get(url, stream=True).raw)
image
Preprocess the image to prepare for inference.
Copied
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
Run inference with the encoded image.
Copied
with torch.no_grad():
outputs = inference_model(pixel_values=encoding.pixel_values)
logits = outputs.logits
upsampled_logits = nn.f | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_44 | o_grad():
outputs = inference_model(pixel_values=encoding.pixel_values)
logits = outputs.logits
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
Next, visualize the results. We need a color palette for this. Here, we use ade_palette(). Since it is a long array,
we don’t include it in this guide; please copy | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_45 | d a color palette for this. Here, we use ade_palette(). As it is a long array, so
we don’t include it in this guide, please copy it from the TensorFlow Model Garden repository.
Copied
import matplotlib.pyplot as plt
color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
palette = np.array(ade_palette())
for label, color in enumerate(palette):
color_seg[pred_seg == label, :] = color
color_seg = color_seg[..., :: | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_46 | de_palette())
for label, color in enumerate(palette):
color_seg[pred_seg == label, :] = color
color_seg = color_seg[..., ::-1] # convert to BGR
img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
img = img.astype(np.uint8)
plt.figure(figsize=(15, 10))
plt.imshow(img)
plt.show()
As you can see, the results are far from perfect; however, this example is designed to illustrate the end-to-end workflow o | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_47 | ow()
As you can see, the results are far from perfect; however, this example is designed to illustrate the end-to-end workflow of
fine-tuning a semantic segmentation model with the LoRA technique, and it does not aim to achieve state-of-the-art
results. The results you see here are the same as you would get if you performed full fine-tuning on the same setup (same
model variant, same dataset, same training schedule, etc.), except that LoRA allows you to achie | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_48 | full fine-tuning on the same setup (same
model variant, same dataset, same training schedule, etc.), except that LoRA allows you to achieve them with a fraction of the total
trainable parameters and in less time.
If you wish to use this example and improve the results, here are some things that you can try:
Increase the number of training samples.
Try a larger SegFormer model variant (explore available model variants on the Hugging Face Hub).
Try different | ac9ce4df51e3faf3c5f330e30fe6f728.txt |
ac9ce4df51e3faf3c5f330e30fe6f728.txt_chunk_49 | raining samples.
Try a larger SegFormer model variant (explore available model variants on the Hugging Face Hub).
Try different values for the arguments available in LoraConfig.
Tune the learning rate and batch size.
| ac9ce4df51e3faf3c5f330e30fe6f728.txt |
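For example, a hypothetical alternative LoraConfig to experiment with; the values below are illustrative, not tuned recommendations from the guide.
from peft import LoraConfig
# Larger rank and stronger dropout than the guide's settings; adjust and compare validation IoU.
alt_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=["query", "value"],
    lora_dropout=0.2,
    bias="none",
    modules_to_save=["decode_head"],
)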
07f8c92a5b1e1218d01d76b64b6741e0.txt_chunk_1 | Quicktour
🤗 PEFT contains parameter-efficient finetuning methods for training large pretrained models. The traditional paradigm is to finetune all of a model’s parameters for each downstream task, but this is becoming exceedingly costly and impractical because of the enormous number of parameters in models today. Instead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adapta | 07f8c92a5b1e1218d01d76b64b6741e0.txt |
07f8c92a5b1e1218d01d76b64b6741e0.txt_chunk_2 | tead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adaptation (LoRA) to reduce the number of trainable parameters.
This quicktour will show you 🤗 PEFT’s main features and help you train large pretrained models that would typically be inaccessible on consumer devices. You’ll see how to train the 1.2B parameter bigscience/mt0-large model with LoRA to generate a classification | 07f8c92a5b1e1218d01d76b64b6741e0.txt |
07f8c92a5b1e1218d01d76b64b6741e0.txt_chunk_3 | n consumer devices. You’ll see how to train the 1.2B parameter bigscience/mt0-large model with LoRA to generate a classification label and use it for inference.
PeftConfig
Each 🤗 PEFT method is defined by a PeftConfig class that stores all the important parameters for building a PeftModel.
Because you’re going to use LoRA, you’ll need to load and create a LoraConfig class. Within LoraConfig, specify the following parameters:
the task_type, | 07f8c92a5b1e1218d01d76b64b6741e0.txt |
07f8c92a5b1e1218d01d76b64b6741e0.txt_chunk_4 | se LoRA, you’ll need to load and create a LoraConfig class. Within LoraConfig, specify the following parameters:
the task_type, which is sequence-to-sequence language modeling in this case
inference_mode, whether you’re using the model for inference or not
r, the dimension of the low-rank matrices
lora_alpha, the scaling factor for the low-rank matrices
lora_dropout, the dropout probability of the LoRA layers
Copied
from peft import LoraConfig, Ta | 07f8c92a5b1e1218d01d76b64b6741e0.txt |
07f8c92a5b1e1218d01d76b64b6741e0.txt_chunk_5 | tor for the low-rank matrices
lora_dropout, the dropout probability of the LoRA layers
Copied
from peft import LoraConfig, TaskType
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)
💡 See the LoraConfig reference for more details about other parameters you can adjust.
PeftModel
A PeftModel is created by the get_peft_model() function. It takes a base model - which you can | 07f8c92a5b1e1218d01d76b64b6741e0.txt |
07f8c92a5b1e1218d01d76b64b6741e0.txt_chunk_6 | ters you can adjust.
PeftModel
A PeftModel is created by the get_peft_model() function. It takes a base model - which you can load from the 🤗 Transformers library - and the PeftConfig containing the instructions for how to configure a model for a specific 🤗 PEFT method.
Start by loading the base model you want to finetune.
Copied
from transformers import AutoModelForSeq2SeqLM
model_name_or_path = "bigscience/mt0-large"
tokenizer_name_or_ | 07f8c92a5b1e1218d01d76b64b6741e0.txt |