Semantic segmentation using LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, to finetune a SegFormer model variant for semantic segmentation.
By using LoRA from 🤗 PEFT, we can reduce the number of trainable parameters in the SegFormer model to only 14% of the original trainable parameters.
LoRA achieves this reduction by adding low-rank “update matrices” to specific blocks of the model, such as the attention
blocks. During fine-tuning, only these matrices are trained, while the original model parameters are left unchanged.
At inference time, the update matrices are merged with the original model parameters to produce the final classification result.
For more information on LoRA, please refer to the original LoRA paper.
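As a rough conceptual sketch (not the PEFT implementation), the update for a frozen weight matrix W is the product of two small matrices A and B whose inner dimension r is much smaller than the dimensions of W; merging simply adds this product back onto W:
import numpy as np

d, k, r = 256, 256, 8                # r is the low rank, much smaller than d and k
W = np.random.randn(d, k)            # frozen pre-trained weight
A = np.random.randn(r, k) * 0.01     # trainable low-rank "update matrices"
B = np.zeros((d, r))                 # B starts at zero, so the initial update is zero
W_merged = W + B @ A                 # at inference time the low-rank update is folded into W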
Install dependencies
Install the libraries required for model training:
Copied
!pip install transformers accelerate evaluate datasets peft -q
Authenticate to share your model
To share the finetuned model with the community at the end of the training, authenticate using your 🤗 token.
You can obtain your token from your account settings.
Copied
from huggingface_hub import notebook_login
notebook_login()
Load a dataset
To ensure that this example runs within a reasonable time frame, here we are limiting the number of instances from the training
set of the SceneParse150 dataset to 150.
Copied
from datasets import load_dataset
ds = load_dataset("scene_parse_150", split="train[:150]")
Next, split the dataset into train and test sets.
Copied
ds = ds.train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]
Prepare label maps
Create a dictionary that maps a label id to a label class, which will be useful when setting up the model later:
label2id: maps the semantic classes of the dataset to integer ids.
id2label: maps integer ids back to the semantic classes.
Copied
import json
from huggingface_hub import cached_download, hf_hub_url
repo_id = "huggingface/label-files"
filename = "ade20k-hf-doc-builder.json"
id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
Prepare datasets for training and evaluation
Next, load the SegFormer image processor to prepare the images and annotations for the model. This dataset uses the
zero-index as the background class, so make sure to set reduce_labels=True to subtract one from all labels since the
background class is not among the 150 classes.
Copied
from transformers import AutoImageProcessor
checkpoint = "nvidia/mit-b0"
image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
Add a function to apply data augmentation to the images, so that the model is more robust against overfitting. Here we use the
ColorJitter function from
torchvision to randomly change the color properties of an image.
Copied
from torchvision.transforms import ColorJitter
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
Add a function to handle grayscale images and ensure that each input image has three color channels, regardless of
whether it was originally grayscale or RGB. The function converts RGB images to array as is, and for grayscale images
that have only one color channel, the function replicates the same channel three times using np.tile() before converting
the image into an array.
Copied
import numpy as np
def handle_grayscale_image(image):
np_image = np.array(image)
if np_image.ndim == 2:
tiled_image = np.tile(np.expand_dims(np_image, -1), 3)
return Image.fromarray(tiled_image)
else:
return Image.fromarray(np_image)
Finally, combine everything in two functions that you’ll use to transform training and validation data. The two functions
are similar except data augmentation is applied only to the training data.
Copied
from PIL import Image
def train_transforms(example_batch):
images = [jitter(handle_grayscale_image(x)) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
def val_transforms(example_batch):
images = [handle_grayscale_image(x) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
To apply the preprocessing functions over the entire dataset, use the 🤗 Datasets set_transform function:
Copied
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
Create evaluation function
Including a metric during training is helpful for evaluating your model’s performance. You can load an evaluation
method with the 🤗 Evaluate library. For this task, use
the mean Intersection over Union (IoU) metric (see the 🤗 Evaluate
quick tour to learn more about how to load and compute a metric):
Copied
import torch
from torch import nn
import evaluate
metric = evaluate.load("mean_iou")
def compute_metrics(eval_pred):
with torch.no_grad():
logits, labels = eval_pred
logits_tensor = torch.from_numpy(logits)
logits_tensor = nn.functional.interpolate(
logits_tensor,
size=labels.shape[-2:],
mode="bilinear",
align_corners=False,
).argmax(dim=1)
pred_labels = logits_tensor.detach().cpu().numpy()
# currently using _compute instead of compute
# see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
metrics = metric._compute(
predictions=pred_labels,
references=labels,
num_labels=len(id2label),
ignore_index=0,
reduce_labels=image_processor.reduce_labels,
| 67f2c0a9b6ab4f4cb1294820fa4c0028.txt |
67f2c0a9b6ab4f4cb1294820fa4c0028.txt_chunk_19 | ls,
num_labels=len(id2label),
ignore_index=0,
reduce_labels=image_processor.reduce_labels,
)
per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
per_category_iou = metrics.pop("per_category_iou").tolist()
metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
return metrics
Load a base model
Before loading a base model, let’s define a helper function to check the total number of parameters a model has, as well
as how many of them are trainable.
Copied
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}"
)
Choose a base model checkpoint. For this example, we use the SegFormer B0 variant.
In addition to the checkpoint, pass the label2id and id2label dictionaries to let the AutoModelForSemanticSegmentation class know that we’re
interested in a custom base model where the decoder head should be randomly initialized using the classes from the custom dataset.
Copied
from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)
print_trainable_parameters(model)
At this point you can check with the print_trainable_parameters helper function that 100% of the parameters in the base
model (aka model) are trainable.
Wrap the base model as a PeftModel for LoRA training
To leverage the LoRA method, you need to wrap the base model as a PeftModel. This involves two steps:
Defining the LoRA configuration with LoraConfig
Wrapping the original model with get_peft_model() using the config defined in the step above.
Copied
from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=32,
lora_alpha=32,
target_modules=["query", "value"],
lora_dropout=0.1,
bias="lora_only",
modules_to_save=["decode_head"],
)
lora_model = get_peft_model(model, config)
print_trainable_parameters(lora_model)
Let’s review the LoraConfig. To enable the LoRA technique, we must define the target modules within LoraConfig so that
PeftModel can update the necessary matrices. Specifically, we want to target the query and value matrices in the
attention blocks of the base model. These matrices are identified by their respective names, “query” and “value”.
Therefore, we should specify these names in the target_modules argument of LoraConfig.
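If you want to confirm these names for yourself, a quick (illustrative) check over the module names of the model loaded above should surface entries ending in query and value inside the attention blocks:
attention_module_names = [name for name, _ in model.named_modules() if name.endswith(("query", "value"))]
print(attention_module_names[:4])  # the suffixes "query" and "value" are what target_modules matches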
After we wrap our base model with PeftModel along with the config, we get
a new model where only the LoRA parameters are trainable (the so-called “update matrices”) while the pre-trained parameters
are kept frozen. These frozen parameters include the randomly initialized classifier parameters too. This is NOT what we want
when fine-tuning the base model on our custom dataset. To ensure that the classifier parameters are also trained, we
specify modules_to_save. This also ensures that these modules are serialized alongside the LoRA trainable parameters
when using utilities like save_pretrained() and push_to_hub().
In addition to specifying the target_modules within LoraConfig, we also need to specify the modules_to_save. When
we wrap our base model with PeftModel and pass the configuration, we obtain a new model in which only the LoRA parameters
are trainable, while the pre-trained parameters and the randomly initialized classifier parameters are kept frozen.
However, we do want to train the classifier parameters. By specifying the modules_to_save argument, we ensure that the
classifier parameters are also trainable, and they will be serialized alongside the LoRA trainable parameters when we
use utility functions like save_pretrained() and push_to_hub().
Let’s review the rest of the parameters:
r: The dimension used by the LoRA update matrices.
alpha: Scaling factor.
bias: Specifies if the bias parameters should be trained. None denotes none of the bias parameters will be trained.
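For intuition, r and lora_alpha together determine how strongly the learned update is applied: in common LoRA implementations the update added to a targeted weight is scaled by lora_alpha / r, so with the values above the scaling is 1.0. A small illustrative check:
scaling = config.lora_alpha / config.r
print(f"LoRA scaling factor: {scaling}")  # 32 / 32 = 1.0 for the config defined above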
When all is configured, and the base model is wrapped, the print_trainable_parameters helper function lets us explore
the number of trainable parameters. Since we’re interested in performing parameter-efficient fine-tuning,
we should expect to see a lower number of trainable parameters from the lora_model in comparison to the original model,
which is indeed the case here.
You can also manually verify what modules are trainable in the lora_model.
Copied
for name, param in lora_model.named_parameters():
if param.requires_grad:
print(name, param.shape)
This confirms that only the LoRA parameters appended to the attention blocks and the decode_head parameters are trainable.
Train the model
Start by defining your training hyperparameters in TrainingArguments. You can change the values of most parameters however
you prefer. Make sure to set remove_unused_columns=False, otherwise the image column will be dropped, and it’s required here.
The only other required parameter is output_dir which specifies where to save your model.
At the end of each epoch, the Trainer will evaluate the IoU metric and save the training checkpoint.
Note that this example is meant to walk you through the workflow when using PEFT for semantic segmentation. We didn’t
perform extensive hyperparameter tuning to achieve optimal results.
Copied
model_name = checkpoint.split("/")[-1]
training_args = TrainingArguments(
output_dir=f"{model_name}-scene-parse-150-lora",
learning_rate=5e-4,
num_train_epochs=50,
per_device_train_batch_size=4,
per_device_eval_batch_size=2,
save_total_limit=3,
evaluation_strategy="epoch",
save_strategy="epoch",
logging_steps=5,
remove_unused_columns=False,
push_to_hub=True,
label_names=["labels"],
)
Pass the training arguments to Trainer along with the model, dataset, and compute_metrics function.
Call train() to finetune your model.
Copied
trainer = Trainer(
model=lora_model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
trainer.train()
Save the model and run inference
Use the save_pretrained() method of the lora_model to save the LoRA-only parameters locally.
Alternatively, use the push_to_hub() method to upload these parameters directly to the Hugging Face Hub
(as shown in the Image classification using LoRA task guide).
Copied
model_id = "segformer-scene-parse-150-lora"
lora_model.save_pretrained(model_id)
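If you authenticated earlier and prefer to upload the adapter instead, a call along these lines pushes the same LoRA-only weights to the Hub (the repository name below is just a placeholder):
lora_model.push_to_hub("your-username/segformer-scene-parse-150-lora")  # hypothetical repository id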
We can see that the LoRA-only parameters are just 2.2 MB in size! This greatly improves the portability when using very large models.
Copied
!ls -lh {model_id}
total 2.2M
-rw-r--r-- 1 root root 369 Feb 8 03:09 adapter_config.json
-rw-r--r-- 1 root root 2.2M Feb 8 03:09 adapter_model.bin
Let’s now prepare an inference_model and run inference.
Copied
from peft import PeftConfig, PeftModel
config = PeftConfig.from_pretrained(model_id)
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)
inference_model = PeftModel.from_pretrained(model, model_id)
Get an image:
Copied
import requests
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png"
image = Image.open(requests.get(url, stream=True).raw)
image
Preprocess the image to prepare for inference.
Copied
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
Run inference with the encoded image.
Copied
with torch.no_grad():
outputs = inference_model(pixel_values=encoding.pixel_values)
logits = outputs.logits
upsampled_logits = nn.f | 67f2c0a9b6ab4f4cb1294820fa4c0028.txt |
67f2c0a9b6ab4f4cb1294820fa4c0028.txt_chunk_44 | o_grad():
outputs = inference_model(pixel_values=encoding.pixel_values)
logits = outputs.logits
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
Next, visualize the results. We need a color palette for this; here we use ade_palette(). Since it is a long array,
we don’t include it in this guide, so please copy it from the TensorFlow Model Garden repository.
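If you just want the cell below to run end to end while you fetch the real palette, a throwaway stand-in such as the following works; it only needs to return 150 RGB triplets, one per class:
def ade_palette():
    # Placeholder only: the real ADE20K palette is a fixed list of 150 RGB triplets.
    rng = np.random.default_rng(seed=0)
    return rng.integers(0, 256, size=(150, 3)).tolist()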
Copied
import matplotlib.pyplot as plt
color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
palette = np.array(ade_palette())
for label, color in enumerate(palette):
color_seg[pred_seg == label, :] = color
color_seg = color_seg[..., ::-1] # convert to BGR
img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
img = img.astype(np.uint8)
plt.figure(figsize=(15, 10))
plt.imshow(img)
plt.show()
As you can see, the results are far from perfect. However, this example is designed to illustrate the end-to-end workflow of
fine-tuning a semantic segmentation model with the LoRA technique, and is not aiming to achieve state-of-the-art
results. The results you see here are the same as you would get if you performed full fine-tuning on the same setup (same
model variant, same dataset, same training schedule, etc.), except LoRA allows you to achieve them with a fraction of the total
trainable parameters and in less time.
If you wish to use this example and improve the results, here are some things that you can try:
Increase the number of training samples.
Try a larger SegFormer model variant (explore available model variants on the Hugging Face Hub).
Try different values for the arguments available in LoraConfig.
Tune the learning rate and batch size.
Tuners
Each tuner (or PEFT method) has a configuration and model.
LoRA
For finetuning a model with LoRA.
class peft.LoraConfig
<
source
>
(
peft_type: typing.Union[str, peft.utils.config.PeftType] = None
auto_mapping: typing.Optional[dict] = None
base_model_name_or_path: str = None
revision: str = None
task_type: typing.Union[str, peft.utils.config.TaskType] = None
inference_mode: bool = False
r: int = 8
target_modules: typing.Union[typing.List[str], str, NoneType] = None
lora_alpha: int = 8
lora_dropout: float = 0.0
fan_in_fan_out: bool = False
bias: str = 'none'
modules_to_save: typing.Optional[typing.List[str]] = None
init_lora_weights: bool = True
layers_to_transform: typing.Union[typing.List, int, NoneType] = None
layers_pattern: typing.Optional[str] = None
)
Parameters
r (int) — Lora attention dimension.
target_modules (Union[List[str],str]) — The names of the modules to apply Lora to.
lora_alpha (int) — The alpha parameter for Lora scaling.
lora_dropout (float) — The dropout probability for Lora layers.
fan_in_fan_out (bool) — Set this to True if the layer to replace stores weight like (fan_in, fan_out).
For example, gpt-2 uses Conv1D which stores weights like (fan_in, fan_out) and hence this should be set to True.
bias (str) — Bias type for Lora. Can be ‘none’, ‘all’ or ‘lora_only’. If ‘all’ or ‘lora_only’, the
corresponding biases will be updated during training. Be aware that this means that, even when disabling
the adapters, the model will not produce the same output as the base model would have without adaptation.
modules_to_save (List[str]) — List of modules apart from LoRA layers to be set as trainable
and saved in the final checkpoint.
layers_to_transform (Union[List[int],int]) —
The layer indexes to transform, if this argument is specified, it will apply the LoRA transformations on
the layer indexes that are specified in this list. If a single integer is passed, it will apply the LoRA
transformations on the layer at this index.
layers_pattern (str) —
The layer pattern name, used only if layers_to_transform is different from None and if the layer
pattern is not in the common layers pattern.
This is the configuration class to store the configuration of a LoraModel.
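Example (the values and target module names below are illustrative, not defaults):
>>> from peft import LoraConfig
>>> config = LoraConfig(
...     r=8,
...     lora_alpha=16,
...     target_modules=["q_proj", "v_proj"],
...     lora_dropout=0.05,
...     bias="none",
...     task_type="CAUSAL_LM",
... )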
class peft.LoraModel
<
source
>
(
model
config
adapter_name
)
→
torch.nn.Module
Parameters
model (PreTrainedModel) — The model to be adapted.
config (LoraConfig) — The configuration of the Lora model.
Returns
torch.nn.Module
The Lora model.
Creates Low Rank Adapter (Lora) model from a pretrained transformers model.
Example:
Copied
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoraModel, LoraConfig
>>> config = LoraConfig(
... task_type="SEQ_2_SEQ_LM",
... r=8,
... lora_alpha=32,
... target_modules=["q", "v"],
... lora_dropout=0.01,
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> lora_model = LoraModel(model, config, "default")
Copied
>>> import transformers
>>> from peft import LoraConfig, PeftModel, get_peft_model, prepare_model_for_int8_training
>>> target_modules = ["q_proj", "k_proj", "v_proj", "out_proj", "fc_in", "fc_out", "wte"]
>>> config = LoraConfig(
... r=4, lora_alpha=16, target_modules=target_modules, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM"
... )
>>> model = transformers.GPTJForCausalLM.from_pretrained(
... "kakaobrain/kogpt",
... revision="KoGPT6B-ryan1.5b-float16", # or float32 version: revision=KoGPT6B-ryan1.5b
... pad_token_id=tokenizer.eos_token_id,
... use_cache=False,
... device_map={"": rank},
... torch_dtype=torch.float16,
... load_in_8bit=True,
... )
>>> model = prepare_model_for_int8_training(model)
>>> lora_model = get_peft_model(model, config)
Attributes:
model (PreTrainedModel) — The model to be adapted.
peft_config (LoraConfig): The configuration of the Lora model.
add_weighted_adapter
<
source
>
(
adapters
weights
adapter_name
combination_type = 'svd'
)
Parameters
adapters (list) — List of adapter names to be merged.
weights (list) — List of weights for each adapter.
adapter_name (str) — Name of the new adapter.
combination_type (str) — Type of merging. Can be one of [svd, linear]
This method adds a new adapter by merging the given adapters with the given weights.
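For example (the adapter names and weights are hypothetical; both adapters must already be loaded on the model):
>>> model.add_weighted_adapter(["adapter_a", "adapter_b"], [0.7, 0.3], "merged_adapter", combination_type="svd")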
delete_adapter
<
source
>
(
adapter_name
)
Parameters
adapter_name (str) — Name of the adapter to be deleted.
Deletes an existing adapter.
merge_adapter
<
source
>
(
)
This method merges the LoRa layers into the base model.
merge_and_unload
<
source
>
(
)
This method merges the LoRa layers into the base model. This is needed if someone wants to use the base model
as a standalone model.
Example:
Copied
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel
>>> base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b")
>>> peft_model_id = "smangrul/falcon-40B-int4-peft-lora-sfttrainer-sample"
>>> model = PeftModel.from_pretrained(base_model, peft_model_id)
>>> merged_model = model.merge_and_unload()
unload
<
source
>
(
)
Gets back the base model by removing all the lora modules without merging. This gives back the original base
model.
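For example (assuming lora_model is a LoraModel wrapping a base transformers model):
>>> base_model = lora_model.unload()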
unmerge_adapter
<
source
>
(
)
This method unmerges the LoRa layers from the base model.
class peft.tuners.lora.LoraLayer
<
source
>
(
in_features: int
out_features: int
**kwargs
)
class peft.tuners.lora.Linear
<
source
>
(
adapter_name: str
in_features: int
out_features: int
r: int = 0
lora_alpha: int = 1
lora_dropout: float = 0.0
fan_in_fan_out: bool = False
is_target_conv_1d_layer: bool = False
**kwargs
)
P-tuning
class peft.PromptEncoderConfig
<
source
>
(
peft_type: typing.Union[str, peft.utils.config.PeftType] = None
auto_mapping: typing.Optional[dict] = None
base_model_name_or_path: str = None
revision: str = None
task_type: typing.Union[str, peft.utils.config.TaskType] = None
inference_mode: bool = False
num_virtual_tokens: int = None
token_dim: int = None
num_transformer_submodules: typing.Optional[int] = None
num_attention_heads: typing.Optional[int] = None
num_layers: typing.Optional[int] = None
encoder_reparameterization_type: typing.Union[str, peft.tuners.p_tuning.PromptEncoderReparameterizationType] = <PromptEncoderReparameterizationType.MLP: 'MLP'>
encoder_hidden_size: int = None
encoder_num_layers: int = 2
encoder_dropout: float = 0.0
)
Parameters
encoder_reparameterization_type (Union[PromptEncoderReparameterizationType, str]) —
The type of reparameterization to use.
encoder_hidden_size (int) — The hidden size of the prompt encoder.
encoder_num_layers (int) — The number of layers of the prompt encoder.
encoder_dropout (float) — The dropout probability of the prompt encoder.
This is the configuration class to store the configuration of a PromptEncoder.
class peft.PromptEncoder
<
source
>
(
config
)
Parameters
config (PromptEncoderConfig) — The configuration of the prompt encoder.
The prompt encoder network that is used to generate the virtual token embeddings for p-tuning.
Example:
Copied
>>> from peft import PromptEncoder, PromptEncoderConfig
>>> config = PromptEncoderConfig(
... peft_type="P_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... encoder_reparameterization_type="MLP",
... encoder_hidden_size=768,
... )
>>> prompt_encoder = PromptEncoder(config)
Attributes:
embedding (torch.nn.Embedding) — The embedding layer of the prompt encoder.
mlp_head (torch.nn.Sequential) — The MLP head of the prompt encoder if inference_mode=False.
lstm_head (torch.nn.LSTM) — The LSTM head of the prompt encoder if inference_mode=False and
encoder_reparameterization_type="LSTM".
token_dim (int) — The hidden embedding dimension of the base transformer model.
input_size (int) — The input size of the prompt encoder.
output_size (int) — The output size of the prompt encoder.
hidden_size (int) — The hidden size of the prompt encoder.
total_virtual_tokens (int): The total number of virtual tokens of the
prompt encoder.
encoder_type (Union[PromptEncoderReparameterizationType, str]): The encoder type of the prompt
encoder.
Input shape: (batch_size, total_virtual_tokens)
Output shape: (batch_size, total_virtual_tokens, token_dim)
Prefix tuning
class peft.PrefixTuningConfig
<
source
>
(
peft_type: typing.Union[str, peft.utils.config.PeftType] = None
auto_mapping: typing.Optional[dict] = None
base_model_name_or_path: str = None
revision: str = None
task_type: typing.Union[str, peft.utils.config.TaskType] = None
inference_mode: bool = False
num_virtual_tokens: int = None
token_dim: int = None
num_transformer_submodules: typing.Optional[int] = None
num_attention_heads: typing.Optional[int] = None
num_layers: typing.Optional[int] = None
encoder_hidden_size: int = None
prefix_projection: bool = False
)
Parameters
encoder_hidden_size (int) — The hidden size of the prompt encoder.
prefix_projection (bool) — Whether to project the prefix embeddings.
This is the configuration class to store the configuration of a PrefixEncoder.
class peft.PrefixEncoder
<
source
>
(
config
)
Parameters
config (PrefixTuningConfig) — The configuration of the prefix encoder.
The torch.nn model to encode the prefix.
Example:
Copied
>>> from peft import PrefixEncoder, PrefixTuningConfig
>>> config = PrefixTuningConfig(
... peft_type="PREFIX_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... encoder_hidden_size=768,
... )
>>> prefix_encoder = PrefixEncoder(config)
Attributes:
embedding (torch.nn.Embedding) — The embedding layer of the prefix encoder.
transform (torch.nn.Sequential) — The two-layer MLP to transform the prefix embeddings if
prefix_projection is True.
prefix_projection (bool) — Whether to project the prefix embeddings.
Input shape: (batch_size, num_virtual_tokens)
Output shape: (batch_size, num_virtual_tokens, 2*layers*hidden)
Prompt tuning
class peft.PromptTuningConfig
<
source
>
(
peft_type: typing.Union[str, peft.utils.config.PeftType] = None
auto_mapping: typing.Optional[dict] = None
base_model_name_or_path: str = None
revision: str = None
task_type: typing.Union[str, peft.utils.config.TaskType] = None
inference_mode: bool = False
num_virtual_tokens: int = None
token_dim: int = None
num_transformer_submodules: typing.Optional[int] = None
num_attention_heads: typing.Optional[int] = None
num_layers: typing.Optional[int] = None
prompt_tuning_init: typing.Union[peft.tuners.prompt_tuning.PromptTuningInit, str] = <PromptTuningInit.RANDOM: 'RANDOM'>
prompt_tuning_init_text: typing.Optional[str] = None
tokenizer_name_or_path: typing.Optional[str] = None
)
Parameters
prompt_tuning_init (Union[PromptTuningInit, str]) — The initialization of the prompt embedding.
prompt_tuning_init_text (str, optional) —
The text to initialize the prompt embedding. Only used if prompt_tuning_init is TEXT.
tokenizer_name_or_path (str, optional) —
The name or path of the tokenizer. Only used if prompt_tuning_init is TEXT.
This is the configuration class to store the configuration of a PromptEmbedding.
class peft.PromptEmbedding
<
source
>
(
config
word_embeddings
)
Parameters
config (PromptTuningConfig) — The configuration of the prompt embedding.
word_embeddings (torch.nn.Module) — The word embeddings of the base transformer model.
The model to encode virtual tokens into prompt embeddings.
Attributes:
embedding (torch.nn.Embedding) — The embedding layer of the prompt embedding.
Example:
Copied
>>> from peft import PromptEmbedding, PromptTuningConfig
>>> config = PromptTuningConfig(
... peft_type="PROMPT_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... prompt_tuning_init="TEXT",
... prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral",
... tokenizer_name_or_path="t5-base",
... )
>>> # t5_model.shared is the word embeddings of the base model
>>> prompt_embedding = PromptEmbedding(config, t5_model.shared)
Input Shape: (batch_size, total_virtual_tokens)
Output Shape: (batch_size, total_virtual_tokens, token_dim)
IA3
class peft.IA3Config
<
source
>
(
peft_type: typing.Union[str, peft.utils.config.PeftType] = None
auto_mapping: typing.Optional[dict] = None
base_model_name_or_path: str = None
revision: str = None
task_type: typing.Union[str, peft.utils.config.TaskType] = None
inference_mode: bool = False
target_modules: typing.Union[typing.List[str], str, NoneType] = None
feedforward_modules: typing.Union[typing.List[str], str, NoneType] = None
fan_in_fan_out: bool = False
modules_to_save: typing.Optional[typing.List[str]] = None
init_ia3_weights: bool = True
)
Parameters
target_modules (Union[List[str],str]) — The names of the modules to apply (IA)^3 to.
feedforward_modules (Union[List[str],str]) — The names of the modules to be treated as feedforward modules
as in the original paper.
fan_in_fan_out (bool) — Set this to True if the layer to replace stores weight like (fan_in, fan_out).
For example, gpt-2 uses Conv1D which stores weights like (fan_in, fan_out) and hence this should be set to True.
modules_to_save (List[str]) —List of modules apart from (IA)^3 layers to be set as trainable
and saved in the final checkpoint.
init_ia3_weights (bool) — Whether to initialize the vectors in the (IA)^3 layers, defaults to True.
This is the configuration class to store the configuration of an IA3Model.
class peft.IA3Model
<
source
>
(
model
config
adapter_name
)
→
torch.nn.Module
Parameters
model (PreTrainedModel) — The model to be adapted.
config (IA3Config) — The configuration of the (IA)^3 model.
Returns
torch.nn.Module
The (IA)^3 model.
Creates an Infused Adapter by Inhibiting and Amplifying Inner Activations ((IA)^3) model from a pretrained
transformers model. The method is described in detail in https://arxiv.org/abs/2205.05638
Example:
Copied
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import IA3Model, IA3Config
>>> config = IA3Config(
... peft_type="IA3",
... task_type="SEQ_2_SEQ_LM",
... target_modules=["k", "v", "w0"],
... feedforward_modules=["w0"],
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> ia3_model = IA3Model(model, config, "default")
Attributes:
model (PreTrainedModel) — The model to be adapted.
peft_config (ia3Config): The configuration of the (IA)^3 model.
merge_and_unload
<
source
>
(
)
This method merges the (IA)^3 layers into the base model. This is needed if someone wants to use the base model
as a standalone model.
int8 training for automatic speech recognition
Quantization reduces the precision of floating point data types, decreasing the memory required to store model weights. However, quantization degrades inference performance because you lose information when you reduce the precision. 8-bit or int8 quantization uses only a quarter precision, but it does not degrade performance because it doesn’t just drop the bits or data. Instead, int8 quantization rounds from one data type to another.
💡 Read the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper to learn more, or you can take a look at the corresponding blog post for a gentler introduction.
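As a conceptual sketch only (not the LLM.int8() algorithm used by bitsandbytes), absmax quantization rescales values so the largest magnitude maps to 127 and then rounds to 8-bit integers:
import numpy as np

x = np.array([0.12, -0.5, 0.33, 1.4], dtype=np.float32)
scale = 127 / np.abs(x).max()                    # map the largest magnitude to 127
x_int8 = np.round(x * scale).astype(np.int8)     # round into the int8 range
x_dequant = x_int8.astype(np.float32) / scale    # approximate reconstruction of the original values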
This guide will show you how to train an openai/whisper-large-v2 model for multilingual automatic speech recognition (ASR) using a combination of int8 quantization and LoRA. You’ll train Whisper for multilingual ASR on Marathi from the Common Voice 11.0 dataset.
Before you start, make sure you have all the necessary libraries installed:
Copied
!pip install -q peft transformers datasets accelerate evaluate jiwer bitsandbytes
Setup
Let’s take care of some of the setup first so you can start training faster later. Set the CUDA_VISIBLE_DEVICES to 0 to use the first GPU on your machine. Then you can specify the model name (either a Hub model repository id or a path to a directory containing the model), language and language abbreviation.
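A minimal sketch of that setup could look like the following; the variable names are illustrative rather than the guide’s exact code:
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"          # use the first GPU

model_name_or_path = "openai/whisper-large-v2"    # Hub repository id or local path
language = "Marathi"
language_abbr = "mr"                              # Common Voice 11.0 uses "mr" for Marathi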