import os

import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, PeftModel


# Note: the signature below follows how the function is called later in the guide.
def get_lora_sd_pipeline(
    ckpt_dir, base_model_name_or_path=None, dtype=torch.float16, device="cuda", adapter_name="default"
):
    unet_sub_dir = os.path.join(ckpt_dir, "unet")
    text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")
    if os.path.exists(text_encoder_sub_dir) and base_model_name_or_path is None:
        config = LoraConfig.from_pretrained(text_encoder_sub_dir)
        base_model_name_or_path = config.base_model_name_or_path

    if base_model_name_or_path is None:
        raise ValueError("Please specify the base model name or path")

    pipe = StableDiffusionPipeline.from_pretrained(base_model_name_or_path, torch_dtype=dtype).to(device)
    pipe.unet = PeftModel.from_pretrained(pipe.unet, unet_sub_dir, adapter_name=adapter_name)

    if os.path.exists(text_encoder_sub_dir):
        pipe.text_encoder = PeftModel.from_pretrained(
            pipe.text_encoder, text_encoder_sub_dir, adapter_name=adapter_name
        )

    if dtype in (torch.float16, torch.bfloat16):
        pipe.unet.half()
        pipe.text_encoder.half()

    pipe.to(device)
    return pipe
Now you can use the function above to create a Stable Diffusion pipeline using the LoRA weights that you have created during the fine-tuning step.
Note, if you’re running inference on the same machine, the path you specify here will be the same as OUTPUT_DIR.
Copied
pipe = get_lora_sd_pipeline(Path("path-to-saved-model"), adapter_name="dog")
Once you have the pipeline with your fine-tuned model, you can use it to generate images:
Copied
prompt = "sks dog playing fetch in the park"
negative_prompt = "low quality | e8dfa1c467776183e9b66635fd4136c1.txt |
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_18 | model, you can use it to generate images:
Copied
prompt = "sks dog playing fetch in the park"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image.save("DESTINATION_PATH_FOR_THE_IMAGE")
Multi-adapter inference
With PEFT you can combine multiple adapters for inference. In the previous example, you fine-tuned Stable Diffusion on some dog images. The pipeline created from those weights was given a name, adapter_name="dog". Now, suppose you also fine-tuned this base model on images of a crochet toy. Let’s see how to use both adapters.
First, you’ll need to perform all the steps as in the single adapter inference example:
Specify the base model.
Add a function that creates a Stable Diffusion pipeline for image generation using LoRA weights.
Create a pipe with adapter_name="dog" based on the model fine-tuned on dog images.
Next, you’re going to need a few more helper functions.
To load another adapter, create a load_adapter() function that leverages the load_adapter() method of PeftModel (e.g. pipe.unet.load_adapter(peft_model_path, adapter_name)):
Copied
def load_adapter(pipe, ckpt_dir, adapter_name):
    unet_sub_dir = os.path.join(ckpt_dir, "unet")
    text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")
    pipe.unet.load_adapter(unet_sub_dir, adapter_name=adapter_name)
    if os.path.exists(text_encoder_sub_dir):
        pipe.text_encoder.load_adapter(text_encoder_sub_dir, adapter_name=adapter_name)
To switch between adapters, write a function that uses the set_adapter() method of PeftModel (see pipe.unet.set_adapter(adapter_name)):
Copied
def set_adapter(pipe, adapter_name):
    pipe.unet.set_adapter(adapter_name)
    if isinstance(pipe.text_encoder, PeftModel):
        pipe.text_encoder.set_adapter(adapter_name)
Finally, add a function to create a weighted LoRA adapter.
Copied
def create_weighted_lora_adapter(pipe, adapters, weights, adapter_name="default"):
    pipe.unet.add_weighted_adapter(adapters, weights, adapter_name)
    if isinstance(pipe.text_encoder, PeftModel):
        pipe.text_encoder.add_weighted_adapter(adapters, weights, adapter_name)
    return pipe
Let’s load the second adapter from the model fine-tuned on images of a crochet toy, and give it a unique name:
Copied
load_adapter(pipe, Path("path-to-the-second-saved-model"), adapter_name="crochet")
Create a pipeline using weighted adapters:
Copied
pipe = create_weighted_lora_adapter(pipe, ["crochet", "dog"], [1.0, 1.05], adapter_name="crochet_dog")
Now you can switch between adapters. If you’d like to generate more dog images, set the adapter to "dog":
Copied
set_adapter(pipe, adapter_name="dog")
prompt = "sks dog in a supermarket isle"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image
In the same way, you can switch to the second adapter:
Copied
set_adapter(pipe, adapter_name="crochet")
prompt = "a fish rendered in the style of <1>"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image
Finally, you can use the combined weighted adapter:
Copied
set_adapter(pipe, adapter_name="crochet_dog")
prompt = "sks dog rendered in the style of <1>, close up portrait, 4K HD"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image
LoRA for token classification
Low-Rank Adaptation (LoRA) is a reparametrization method that aims to reduce the number of trainable parameters with low-rank representations. The weight matrix is broken down into low-rank matrices that are trained and updated. All the pretrained model parameters remain frozen. After training, the low-rank matrices are added back to the original weights. This makes it more efficient to store and train a LoRA model because there are significantly fewer parameters.
💡 Read LoRA: Low-Rank Adaptation of Large Language Models to learn more about LoRA.
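To make the idea concrete, here is a minimal sketch of the low-rank update (an illustrative example, not part of the original guide; the shapes and the lora_alpha/r scaling follow the LoRA paper):
Copied
import torch

d, k, r, lora_alpha = 1024, 1024, 16, 16
W = torch.randn(d, k)            # frozen pretrained weight
A = torch.randn(r, k) * 0.01     # trainable low-rank factor
B = torch.zeros(d, r)            # trainable low-rank factor, initialized to zero

# Only A and B receive gradients; after training the scaled low-rank update
# can be merged back into the frozen weight.
W_adapted = W + (lora_alpha / r) * (B @ A)
Because A and B together hold d*r + r*k values instead of d*k, the number of trainable parameters stays small when r is small.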
This guide will show you how to train a roberta-large model with LoRA on the BioNLP2004 dataset for token classification.
Before you begin, make sure you have all the necessary libraries installed:
Copied
!pip install -q peft transformers datasets evaluate seqeval
Setup
Let’s start by importing all the necessary libraries you’ll need:
🤗 Transformers for loading the base roberta-large model and tokenizer, and handling the training loop
🤗 Datasets for loading and preparing the bionlp2004 dataset for training
🤗 Evaluate for evaluating the model’s performance
🤗 PEFT for setting up the LoRA configuration and creating the PEFT model
Copied
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)
from peft import get_peft_config, PeftModel, PeftConfig, get_peft_model, LoraConfig, TaskType
import evaluate
import torch
import numpy as np
model_checkpoint = "roberta-large"
lr = 1e-3
batch_size = 16
num_epochs = 10
Load dataset and metric
The BioNLP2004 dataset includes tokens and tags for biological structures like DNA, RNA and proteins. Load the dataset:
Copied
bionlp = load_dataset("tner/bionlp2004")
bionlp["train"][0]
{
    "tokens": [
        "Since",
        "HUVECs",
        "released",
        "superoxide",
        "anions",
        "in",
        "response",
        "to",
        "TNF",
        ",",
        "and",
        "H2O2",
        "induces",
        "VCAM-1",
        ",",
        "PDTC",
        "may",
        "act",
        "as",
        "a",
        "radical",
        "scavenger",
        ".",
    ],
    "tags": [0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0],
}
The tag values are defined in the label id dictionary. The letter that prefixes each label indicates the token position: B is for the first token of an entity, I is for a token inside the entity, and O is for a token that is not part of an entity.
Copied
{
    "O": 0,
    "B-DNA": 1,
    "I-DNA": 2,
    "B-protein": 3,
    "I-protein": 4,
    "B-cell_type": 5,
    "I-cell_type": 6,
    "B-cell_line": 7,
    "I-cell_line": 8,
    "B-RNA": 9,
    "I-RNA": 10,
}
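As a quick sanity check (a hypothetical snippet, not part of the original guide), you can map the tag ids of the first training example back to their string labels; for example, "HUVECs" is tagged 7 (B-cell_line) and "VCAM-1" is tagged 3 (B-protein):
Copied
label_names = {0: "O", 1: "B-DNA", 2: "I-DNA", 3: "B-protein", 4: "I-protein", 5: "B-cell_type",
               6: "I-cell_type", 7: "B-cell_line", 8: "I-cell_line", 9: "B-RNA", 10: "I-RNA"}
example = bionlp["train"][0]
# Pair each word with its decoded tag to see which spans are entities
print(list(zip(example["tokens"], [label_names[tag] for tag in example["tags"]])))
# [('Since', 'O'), ('HUVECs', 'B-cell_line'), ('released', 'O'), ..., ('VCAM-1', 'B-protein'), ...]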
Then load the seqeval framework which includes several metrics - precision, accuracy, F1, and recall - for evaluating sequence labeling tasks.
Copied
seqeval = evaluate.load("seqev | 80d498a0b94c0d7f9fccf89ad8868dfc.txt |
80d498a0b94c0d7f9fccf89ad8868dfc.txt_chunk_10 | metrics - precision, accuracy, F1, and recall - for evaluating sequence labeling tasks.
Copied
seqeval = evaluate.load("seqeval")
Now you can write an evaluation function to compute the metrics from the model predictions and labels, and return the precision, recall, F1, and accuracy scores:
Copied
label_list = [
    "O",
    "B-DNA",
    "I-DNA",
    "B-protein",
    "I-protein",
    "B-cell_type",
    "I-cell_type",
    "B-cell_line",
    "I-cell_line",
    "B-RNA",
    "I-RNA",
]


def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)

    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
Preprocess dataset
Initialize a tokenizer and make sure you set is_split_into_words=True because the text sequence has already been split into words. However, this doesn’t mean it is tokenized yet (even though it may look like it!), and you’ll need to further tokenize the words into subwords.
Copied
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, add_prefix_space=True)
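Because each word can split into several subword pieces, the word-level tags no longer line up one-to-one with the model inputs. You can see this with a quick check (an illustrative snippet, not part of the original guide; the exact pieces depend on the RoBERTa vocabulary):
Copied
encoding = tokenizer(bionlp["train"][0]["tokens"], is_split_into_words=True)
print(encoding.tokens())    # subword pieces, usually more than the original 23 words
print(encoding.word_ids())  # maps each piece back to its source word index (None for special tokens)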
You’ll also need to write a function to:
Map each token to its respective word with the word_ids method.
Ignore the special tokens by setting them to -100.
Label the first token of a given entity.
Copied
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples["tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs
Use map to apply the tokenize_and_align_labels function to the dataset:
Copied
tokenized_bionlp = bionlp.map(tokenize_and_align_labels, batched=True)
Finally, create a data collator to pad the examples to the longest length in a batch:
Copied
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
Train
Now you’re ready to create a PeftModel. Start by loading the base roberta-large model, the number of expected labels, and the id2label and label2id dictionaries:
Copied
id2label = {
    0: "O",
    1: "B-DNA",
    2: "I-DNA",
    3: "B-protein",
    4: "I-protein",
    5: "B-cell_type",
    6: "I-cell_type",
    7: "B-cell_line",
    8: "I-cell_line",
    9: "B-RNA",
    10: "I-RNA",
}
label2id = {
    "O": 0,
    "B-DNA": 1,
    "I-DNA": 2,
    "B-protein": 3,
    "I-protein": 4,
    "B-cell_type": 5,
    "I-cell_type": 6,
    "B-cell_line": 7,
    "I-cell_line": 8,
    "B-RNA": 9,
    "I-RNA": 10,
}
model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=11, id2label=id2label, label2id=label2id
)
Define the LoraConfig with:
task_type, token classification (TaskType.TOKEN_CLS)
r, the dimension of the low-rank matrices
lora_alpha, scaling factor for the weight matrices
lora_dropout, dropout probability of the LoRA layers
bias, set to all to train all bias parameters
💡 The weight matrix is scaled by lora_alpha/r, and a higher lora_alpha value assigns more weight to the LoRA activations. For performance, we recommend setting bias to none first, and then lora_only, before trying all.
Copied
peft_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS, inference_mode=False, r=16, lora_alpha=16, lora_dropout=0.1, bias="all"
)
Pass the base model and peft_config to the get_peft_model() function to create a PeftModel. You can check out how much more efficient training the PeftModel is compared to fully training the base model by printing out the trainable parameters:
Copied
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 1855499 || all par | 80d498a0b94c0d7f9fccf89ad8868dfc.txt |
80d498a0b94c0d7f9fccf89ad8868dfc.txt_chunk_24 | s:
Copied
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 1855499 || all params: 355894283 || trainable%: 0.5213624069370061"
From the 🤗 Transformers library, create a TrainingArguments class and specify where you want to save the model to, the training hyperparameters, how to evaluate the model, and when to save the checkpoints:
Copied
training_args = TrainingArguments(
    output_dir="roberta-large-lora-token-classification",
    learning_rate=lr,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_epochs,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)
Pass the model, TrainingArguments, datasets, tokenizer, data collator and evaluation function to the Trainer class. The Trainer handles the training loop for you, and when you’re ready, call train to begin!
Copied
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_bionlp["train"],
    eval_dataset=tokenized_bionlp["validation"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()
Share model
Once training is complete, you can store and share your model on the Hub if you’d like. Log in to your Hugging Face account and enter your token when prompted:
Copied
from huggingface_hub import notebook_login
notebook_login()
Upload the model to a specific model repository on the Hub with the push_to_hub method:
Copied
model.push_to_hub("your-name/roberta-large-lora-token-classification")
Inference
To use your model for inference, load the configuration and model:
Copied
peft_model_id = "stevhliu/roberta-large-lora-token-classification"
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForTokenClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=11, id2label=id2label, label2id=label2id
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(inference_model, peft_model_id)
Get some text to tokenize:
Copied
text = "The activation of IL-2 gene expression and NF-kappa B through CD28 requires reactive oxygen production by 5-lipoxygenase."
inputs = tokenizer(text, return_tensors="pt")
Pass the inputs to the model, and print out the model prediction for each token:
Copied
with torch.no_grad():
    logits = model(**inputs).logits

tokens = inputs.tokens()
predictions = torch.argmax(logits, dim=2)

for token, prediction in zip(tokens, predictions[0].numpy()):
    print((token, model.config.id2label[prediction]))
("<s>", "O")
("The", "O")
("Ġactivation", "O")
("Ġof", "O")
("ĠIL", "B-DNA")
("-", "O")
("2", "I-DNA")
("Ġgene", "O")
("Ġexpression", "O")
("Ġand", "O")
("ĠNF", "B-protein")
("-", "O")
("k", "I-protein")
("appa", "I-protein")
("ĠB", "I-protein")
("Ġthrough", "O")
("ĠCD", "B-protein")
("28", "I-protein")
("Ġ | 80d498a0b94c0d7f9fccf89ad8868dfc.txt |
80d498a0b94c0d7f9fccf89ad8868dfc.txt_chunk_33 | "O")
("k", "I-protein")
("appa", "I-protein")
("ĠB", "I-protein")
("Ġthrough", "O")
("ĠCD", "B-protein")
("28", "I-protein")
("Ġrequires", "O")
("Ġreactive", "O")
("Ġoxygen", "O")
("Ġproduction", "O")
("Ġby", "O")
("Ġ5", "B-protein")
("-", "O")
("lip", "I-protein")
("oxy", "I-protein")
("gen", "I-protein")
("ase", "I-protein")
(".", "O")
("</s>", "O")
int8 training for automatic speech recognition
Quantization reduces the precision of floating point data types, decreasing the memory required to store model weights. However, quantization degrades inference performance because you lose information when you reduce the precision. 8-bit or int8 quantization uses only a quarter of the precision, but it does not degrade performance because it doesn’t just drop the bits or data. Instead, int8 quantization rounds from one data type to another.
💡 Read the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper to learn more, or you can take a look at the corresponding blog post for a gentler introduction.
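To illustrate the rounding idea, here is a minimal absmax int8 round-trip in plain PyTorch (an illustrative sketch only; it is not how bitsandbytes is implemented internally):
Copied
import torch

x = torch.randn(4, 4)                                    # fp32 values
scale = x.abs().max() / 127                              # map the largest magnitude onto the int8 range
x_int8 = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
x_dequant = x_int8.float() * scale                       # approximate reconstruction
print((x - x_dequant).abs().max())                       # small rounding error
LLM.int8() goes further by handling outlier feature dimensions separately, which is how it preserves accuracy.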
This guide will show you how to train an openai/whisper-large-v2 model for multilingual automatic speech recognition (ASR) using a combination of int8 quantization and LoRA. You’ll train Whisper for multilingual ASR on Marathi from the Common Voice 11.0 dataset.
Before you start, make sure you have all the necessary libraries installed:
Copied
!pip install -q peft transformers datasets accelerate evaluate jiwer bitsandbytes
Setup
Let’s take care of some of the setup first so you can start training faster later. Set the CUDA_VISIBLE_DEVICES environment variable to 0 to use the first GPU on your machine. Then you can specify the model name (either a Hub model repository id or a path to a directory containing the model), the language and language abbreviation to train on, the task type, and the dataset name:
Copied
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
model_name_or_path = "openai/whisper-large-v2"
language = "Marathi"
language_abbr = "mr"
task = "transcribe"
dataset_name = "mozilla-foundation/common_voice_11_0"
You can also log in to your Hugging Face account to save and share your trained model on the Hub if you’d like:
Copied
from huggingface_hub import notebook_login
notebook_login()
Load dataset and metric
The Common Voice 11.0 dataset contains many hours of recorded speech in many different languages. This guide uses the Marathi language as an example, but feel free to use any other language you’re interested in.
Initialize a DatasetDict structure, and load the train split (combining the train and validation splits into train) and the test split from the dataset into it:
Copied
from datasets import load_dataset, DatasetDict

common_voice = DatasetDict()
common_voice["train"] = load_dataset(dataset_name, language_abbr, split="train+validation", use_auth_token=True)
common_voice["test"] = load_dataset(dataset_name, language_abbr, split="test", use_auth_token=True)
common_voice["train"][0]
Preprocess dataset
Let’s prepare the dataset for training. Load a feature extractor, tokenizer, and processor. You should also pass the language and task to the tokenizer and processor so they know how to process the inputs:
Copied
from transformers import AutoFeatureExtractor, AutoTokenizer, AutoProcessor
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, language=language, task=task)
processor = AutoProcessor.from_pretrained(model_name_or_path, language=language, task=task)
You’ll only be training on the sentence and audio columns, so you can remove the rest of the metadata with remove_columns:
Copied
common_voice = common_voice.remove_columns(
    ["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"]
)
common_voice["train"][0]
{
    "audio": {
        "path": "/root/.cache/huggingface/datasets/downloads/extracted/f7e1ef6a2d14f20194999aad5040c5d4bb3ead1377de3e1bbc6e9dba34d18a8a/common_voice_mr_30585613.mp3",
        "array": array(
            [1.13686838e-13, -1.42108547e-13, -1.98951966e-13, ..., 4.83472422e-06, 3.54798703e-06, 1.63231743e-06]
        ),
        "sampling_rate": 48000,
    },
    "sentence": "आईचे आजारपण वाढत चालले, तसतशी मथीही नीट खातपीतनाशी झाली.",
}
If you look at the sampling_rate, you’ll see the audio was sampled at 48kHz. The Whisper model was pretrained on audio inputs at 16kHz, which means you’ll need to downsample the audio inputs to match what the model was pretrained on. Downsample the audio by using the cast_column method on the audio column, and set the sampling_rate to 16kHz. The audio input is resampled on the fly the next time you call it:
Copied
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
common_voice["train"][0]
{
    "audio": {
        "path": "/root/.cache/huggingface/datasets/downloads/extracted/f7e1ef6a2d14f20194999aad5040c5d4bb3ead1377de3e1bbc6e9dba34d18a8a/common_voice_mr_30585613.mp3",
        "array": array(
            [-3.06954462e-12, -3.63797881e-12, -4.54747351e-12, ..., -7.74800901e-06, -1.74738125e-06, 4.36312439e-06]
        ),
        "sampling_rate": 16000,
    },
    "sentence": "आईचे आजारपण वाढत चालले, तसतशी मथीही नीट खातपीतनाशी झाली.",
}
Once you’ve cleaned up the dataset, you can write a function to generate the correct model inputs. The function should:
Resample the audio inputs to 16kHz by loading the audio column.
Compute the input features from the audio array using the feature extractor.
Tokenize the sentence column to the input labels.
Copied
def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    batch["labels"] = tokenizer(batch["sentence"]).input_ids
    return batch
Apply the prepare_dataset function to the dataset with the map function, and set the num_proc argument to 2 to enable multiprocessing (if map hangs, then set num_proc=1):
Copied
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=2)
Finally, create a DataCollator class to pad the labels in each batch to the maximum length, and replace padding with -100 so they’re ignored by the loss function. Then initialize an instance of the data collator:
Copied
import torch

from dataclasses import dataclass
from typing import Any, Dict, List, Union


@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        input_features = [{"input_features": feature["input_features"]} for feature in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        label_features = [{"input_ids": feature["labels"]} for feature in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels
        return batch


data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
Train
Now that the dataset is ready, you can turn your attention to the model. Start by loading the pretrained openai/whisper-large-v2 model from AutoModelForSpeechSeq2Seq, and make sure to set the load_in_8bit argument to True to enable int8 quantization. The device_map="auto" argument automatically determines how to load and store the model weights:
Copied
from transformers import AutoModelForSpeechSeq2Seq
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_name_or_path, load_in_8bit=True, device_map="auto")
You should configure forced_decoder_ids=None because no tokens are used before sampling, and you won’t need to suppress any tokens during generation either:
Copied
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
To get the model ready for int8 quantization, use the utility function prepare_model_for_int8_training to handle the following:
casts all the non-int8 modules to full precision (fp32) for stability
adds a forward hook to the input embedding layer to calculate the gradients of the input hidden states
enables gradient checkpointing for more memory-efficient training
Copied
from peft import prepare_model_for_int8_training
model = prepare_model_for_int8_training(model)
Let’s also apply LoRA to the training to make it even more efficient. Load a LoraConfig and configure the following parameters:
r, the dimension of the low-rank matrices
lora_alpha, scaling factor for the weight matrices
target_modules, the name of the attention matrices to apply LoRA to (q_proj and v_proj, or query and value in this case)
lora_dropout, dropout probability of the LoRA layers
bias, set to none
💡 The weight matrix is scaled by lora_alpha/r, and a higher lora_alpha value assigns more weight to the LoRA activations. For performance, we recommend setting bias to none first, and then lora_only, before trying all.
Copied
from peft import LoraConfig, PeftModel, get_peft_model
config = LoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none")
After you set up the LoraConfig, wrap it and the base model with the get_peft_model() function to create a PeftModel. Print out the number of trainable parameters to see how much more efficient LoRA is compared to fully training the model!
Copied
model = get_peft_model(model, config)
model.print_trainable_parameters()
"trainable params: 15728640 || all params: 1559033600 || trainable%: 1.0088711365810203"
Now you’re ready to define some training hyperparameters in the Seq2SeqTrainingArguments class, such as where to save the model to, batch size, learning rate, and number of epochs to train for. The PeftModel doesn’t have the same signature as the base model, so you’ll need to explicitly set remove_unused_columns=False and label_names=["labels"].
Copied
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
    output_dir="your-name/int8-whisper-large-v2-asr",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=1,
    learning_rate=1e-3,
    warmup_steps=50,
    num_train_epochs=3,
    evaluation_strategy="epoch",
    fp16=True,
    per_device_eval_batch_size=8,
    generation_max_length=128,
    logging_steps=25,
    remove_unused_columns=False,
    label_names=["labels"],
)
It is also a good idea to write a custom TrainerCallback to save model checkpoints during training:
Copied
from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR


class SavePeftModelCallback(TrainerCallback):
    def on_save(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs,
    ):
        checkpoint_folder = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}")

        peft_model_path = os.path.join(checkpoint_folder, "adapter_model")
        kwargs["model"].save_pretrained(peft_model_path)

        pytorch_model_path = os.path.join(checkpoint_folder, "pytorch_model.bin")
        if os.path.exists(pytorch_model_path):
            os.remove(pytorch_model_path)
        return control
Pass the Seq2SeqTrainingArguments, model, datasets, data collator, tokenizer, and callback to the Seq2SeqTrainer. You can optionally set model.config.use_cache = False to silence any warnings. Once everything is ready, call train to start training!
Copied
from transformers import Seq2SeqTrainer, TrainerCallback, Seq2SeqTrainingArguments, TrainerState, TrainerControl
trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=data_collator,
    tokenizer=processor.feature_extractor,
    callbacks=[SavePeftModelCallback],
)
model.config.use_cache = False
trainer.train()
Evaluate
Word error rate (WER) is a common metric for evaluating ASR models. Load the WER metric from 🤗 Evaluate:
Copied
import evaluate
metric = evaluate.load("wer")
Write a loop to evaluate the model performance. Set the model to evaluation mode first, and write the loop with torch.cuda.amp.autocast() because int8 training requires autocasting. Then, pass a batch of examples to the model to evaluate. Get the decoded predictions and labels, and add them as a batch to the WER metric before calling compute to get the final WER score:
Copied
from torch.utils.data import DataLoader
from tqdm import tqdm
import numpy as np
import gc

eval_dataloader = DataLoader(common_voice["test"], batch_size=8, collate_fn=data_collator)

model.eval()
for step, batch in enumerate(tqdm(eval_dataloader)):
    with torch.cuda.amp.autocast():
        with torch.no_grad():
            generated_tokens = (
                model.generate(
                    input_features=batch["input_features"].to("cuda"),
                    decoder_input_ids=batch["labels"][:, :4].to("cuda"),
                    max_new_tokens=255,
                )
                .cpu()
                .numpy()
            )
            labels = batch["labels"].cpu().numpy()
            labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
            decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
            decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
            metric.add_batch(
                predictions=decoded_preds,
                references=decoded_labels,
            )
    del generated_tokens, labels, batch
    gc.collect()
wer = 100 * metric.compute()
print(f"{wer=}")
Share model
Once you’re happy with your results, you can upload your model to the Hub with the push_to_hub method:
Copied
model.push_to_hub("your-name/int8-whisper-large-v2-asr")
Inference
Let’s test the model out now!
Instantiate the model configuration from PeftConfig, and from here, you can use the configuration to load the base and PeftModel, tokenizer, processor, and feature extractor. Remember to define the language and task in the tokenizer, processor, and forced_decoder_ids:
Copied
from peft import PeftModel, PeftConfig
from transformers import WhisperForConditionalGeneration, WhisperTokenizer, WhisperProcessor

peft_model_id = "smangrul/openai-whisper-large-v2-LORA-colab"
language = "Marathi"
task = "transcribe"
peft_config = PeftConfig.from_pretrained(peft_model_id)
model = WhisperForConditionalGeneration.from_pretrained(
    peft_config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = WhisperTokenizer.from_pretrained(peft_config.base_model_name_or_path, language=language, task=task)
processor = WhisperProcessor.from_pretrained(peft_config.base_model_name_or_path, language=language, task=task)
feature_extractor = processor.feature_extractor
forced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task=task)
Load an audio sample (you can listen to it in the Dataset Preview) to transcribe, and the AutomaticSpeechRecognitionPipeline:
Copied
from transformers import AutomaticSpeechRecognitionPipeline
audio = "https://huggingface.co/datasets/stevhliu/dummy/resolve/main/mrt_01523_00028548203.wav"
pipeline = AutomaticSpeechRecognitionPipeline(model=model, tokenizer=tokenizer, feature_extractor=feature_extractor)
Then use the pipeline with autocast as a context manager on the audio sample:
Copied
with torch.cuda.amp.autocast():
    text = pipeline(audio, generate_kwargs={"forced_decoder_ids": forced_decoder_ids}, max_new_tokens=255)["text"]
text
"मी तुमच्यासाठी काही करू शकतो का?"
Quicktour
🤗 PEFT contains parameter-efficient finetuning methods for training large pretrained models. The traditional paradigm is to finetune all of a model’s parameters for each downstream task, but this is becoming exceedingly costly and impractical because of the enormous number of parameters in models today. Instead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adaptation (LoRA).