Columns: chunk_id | chunk_content | filename
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_9
num_train_epochs 3 \ --gradient_accumulation_steps=1 \ --output_dir="results/peft_lora_e5_ecommerce_semantic_search_colab" \ --seed=42 \ --push_to_hub \ --hub_model_id="smangrul/peft_lora_e5_ecommerce_semantic_search_colab" \ --with_tracking \ --report_to="wandb" \ --use_peft \ --checkpointing_steps "epoch" Dataset for semantic similarity The dataset we’ll be using is a small subset of the esci-data dataset (it can be found on Hub at smangru
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_10
r semantic similarity The dataset we’ll be using is a small subset of the esci-data dataset (it can be found on Hub at smangrul/amazon_esci). Each sample contains a tuple of (query, product_title, relevance_label) where relevance_label is 1 if the product matches the intent of the query, otherwise it is 0. Our task is to build an embedding model that can retrieve semantically similar products given a product query. This is usually the first
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_11
is to build an embedding model that can retrieve semantically similar products given a product query. This is usually the first stage in building a product search engine to retrieve all the potentially relevant products for a given query. Scoring the query against the full catalog with a cross-encoder would mean cross-joining the query and millions of products, which could blow up quickly. Instead, you can use a bi-encoder style Transformer model to retrieve the top K nearest similar prod
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_12
ons of products which could blow up quickly. Instead, you can use a Transformer model to retrieve the top K nearest similar products for a given query by embedding the query and products in the same latent embedding space. The millions of products are embedded offline to create a search index. At run time, only the query is embedded by the model, and products are retrieved from the search index with a fast approximate nearest neighbor search li
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_13
ry is embedded by the model, and products are retrieved from the search index with a fast approximate nearest neighbor search library such as FAISS or HNSWlib. The next stage involves reranking the retrieved list of products to return the most relevant ones; this stage can utilize cross-encoder based models as the cross-join between the query and a limited set of retrieved products. The diagram below from awesome-semantic-search outlines a roug
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
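The reranking stage mentioned in the chunk above is not part of this guide's training script. As an illustration only, here is a minimal sketch of that second stage using the sentence-transformers CrossEncoder API; the model id and the candidates list are placeholder assumptions, not artifacts from this guide:
Copied
# Hypothetical second-stage reranker (illustration only, not from this guide).
# Assumes `query` is a string and `candidates` is the short list of product titles
# returned by the first-stage retriever.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # placeholder model id

def rerank(query, candidates, top_n=5):
    # A cross-encoder scores each (query, product) pair jointly, which is accurate
    # but only affordable for the small candidate set produced by the retriever.
    scores = reranker.predict([(query, product) for product in candidates])
    return sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)[:top_n]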
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_14
s-join between the query and a limited set of retrieved products. The diagram below from awesome-semantic-search outlines a rough semantic search pipeline: For this task guide, we will explore the first stage of training an embedding model to predict semantically similar products given a product query. Training script deep dive We finetune e5-large-v2, which tops the MTEB benchmark, using PEFT-LoRA. AutoModelForSentenceEmbedding returns the
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_15
t deep dive We finetune e5-large-v2, which tops the MTEB benchmark, using PEFT-LoRA. AutoModelForSentenceEmbedding returns the query and product embeddings; the mean_pooling function pools the token embeddings across the sequence dimension, and the result is then L2-normalized: Copied class AutoModelForSentenceEmbedding(nn.Module): def __init__(self, model_name, tokenizer, normalize=True): super(AutoModelForSentenceEmbedding, self).__init__() self.m
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_16
it__(self, model_name, tokenizer, normalize=True): super(AutoModelForSentenceEmbedding, self).__init__() self.model = AutoModel.from_pretrained(model_name) self.normalize = normalize self.tokenizer = tokenizer def forward(self, **kwargs): model_output = self.model(**kwargs) embeddings = self.mean_pooling(model_output, kwargs["attention_mask"]) if self.normalize: embeddi
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_17
s) embeddings = self.mean_pooling(model_output, kwargs["attention_mask"]) if self.normalize: embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1) return embeddings def mean_pooling(self, model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(to
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_18
First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) def __getattr__(self, name: str): """Forward missing attributes to the wrapped module.""" try: return super().__getattr__(name) # defer
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_19
"""Forward missing attributes to the wrapped module.""" try: return super().__getattr__(name) # defer to nn.Module's logic except AttributeError: return getattr(self.model, name) def get_cosine_embeddings(query_embs, product_embs): return torch.sum(query_embs * product_embs, axis=1) def get_loss(cosine_score, labels): return torch.mean(torch.square(labels * (1 - cosine_score) + torch.cl
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_20
ct_embs, axis=1) def get_loss(cosine_score, labels): return torch.mean(torch.square(labels * (1 - cosine_score) + torch.clamp((1 - labels) * cosine_score, min=0.0))) The get_cosine_embeddings function computes the cosine similarity and the get_loss function computes the loss. The loss enables the model to learn that a cosine score of 1 for query and product pairs is relevant, and a cosine score of 0 or below is irrelevant. Define the Peft
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_21
hat a cosine score of 1 for query and product pairs is relevant, and a cosine score of 0 or below is irrelevant. Define the PeftConfig with your LoRA hyperparameters, and create a PeftModel. We use 🤗 Accelerate for handling all device management, mixed precision training, gradient accumulation, WandB tracking, and saving/loading utilities. Results The table below compares the training time, the batch size that could be fit in Colab, and the
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
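The chunk above says to define the PeftConfig with your LoRA hyperparameters and create a PeftModel, but the excerpt does not show that step. Below is a minimal sketch; the rank, alpha, dropout and the target module names ("query" and "value" projections of a BERT-style encoder such as e5-large-v2) are illustrative assumptions, not values taken from the training script:
Copied
# Illustrative LoRA setup for the embedding model (hyperparameters are assumptions).
from transformers import AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name_or_path = "intfloat/e5-large-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["query", "value"],  # assumed module names for a BERT-style encoder
)

# AutoModelForSentenceEmbedding is the wrapper class defined earlier in this guide.
model = AutoModelForSentenceEmbedding(model_name_or_path, tokenizer)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()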
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_22
ng/loading utilities. Results The table below compares the training time, the batch size that could be fit in Colab, and the best ROC-AUC scores between a PEFT model and a fully fine-tuned model:
Training Type | Training time per epoch (Hrs) | Batch Size that fits | ROC-AUC score (higher is better)
Pre-Trained e5-large-v2 | - | - | 0.68
PEFT | 1.73 | 64 | 0.787
Full Fine-Tuning | 2.33 | 32 | 0.7969
The PEFT-LoRA model trains 1.35X faster and can fit 2X batch size c
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_23
- - 0.68 PEFT 1.73 64 0.787 Full Fine-Tuning 2.33 32 0.7969 The PEFT-LoRA model trains 1.35X faster and can fit 2X batch size compared to the fully fine-tuned model, and the performance of PEFT-LoRA is comparable to the fully fine-tuned model with a relative drop of -1.24% in ROC-AUC. This gap can probably be closed with bigger models as mentioned in The Power of Scale for Parameter-Efficient Prompt Tuning . Inference Let’s go! Now we have
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
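The speed-up and the relative ROC-AUC drop quoted above follow directly from the table; a quick sanity check of the arithmetic:
Copied
peft_time, full_time = 1.73, 2.33    # hours per epoch, from the table above
peft_auc, full_auc = 0.787, 0.7969   # ROC-AUC scores

print(full_time / peft_time)              # ~1.35, i.e. PEFT-LoRA trains ~1.35X faster per epoch
print((peft_auc - full_auc) / full_auc)   # ~-0.0124, i.e. a ~1.24% relative drop in ROC-AUC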
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_24
ith bigger models as mentioned in The Power of Scale for Parameter-Efficient Prompt Tuning . Inference Let’s go! Now we have the model, we need to create a search index of all the products in our catalog. Please refer to peft_lora_embedding_semantic_similarity_inference.ipynb for the complete inference code. Get a list of ids to products which we can call ids_to_products_dict: Copied {0: 'RamPro 10" All Purpose Utility Air Tires/Wheels w
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_25
list of ids to products which we can call ids_to_products_dict: Copied {0: 'RamPro 10" All Purpose Utility Air Tires/Wheels with a 5/8" Diameter Hole with Double Sealed Bearings (Pack of 2)', 1: 'MaxAuto 2-Pack 13x5.00-6 2PLY Turf Mower Tractor Tire with Yellow Rim, (3" Centered Hub, 3/4" Bushings )', 2: 'NEIKO 20601A 14.5 inch Steel Tire Spoon Lever Iron Tool Kit | Professional Tire Changing Tool for Motorcycle, Dirt Bike, Lawn Mower | 3
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_26
601A 14.5 inch Steel Tire Spoon Lever Iron Tool Kit | Professional Tire Changing Tool for Motorcycle, Dirt Bike, Lawn Mower | 3 pcs Tire Spoons | 3 Rim Protector | Valve Tool | 6 Valve Cores', 3: '2PK 13x5.00-6 13x5.00x6 13x5x6 13x5-6 2PLY Turf Mower Tractor Tire with Gray Rim', 4: '(Set of 2) 15x6.00-6 Husqvarna/Poulan Tire Wheel Assy .75" Bearing', 5: 'MaxAuto 2 Pcs 16x6.50-8 Lawn Mower Tire for Garden Tractors Ridings, 4PR, Tubeless', 6:
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_27
lan Tire Wheel Assy .75" Bearing', 5: 'MaxAuto 2 Pcs 16x6.50-8 Lawn Mower Tire for Garden Tractors Ridings, 4PR, Tubeless', 6: 'Dr.Roc Tire Spoon Lever Dirt Bike Lawn Mower Motorcycle Tire Changing Tools with Durable Bag 3 Tire Irons 2 Rim Protectors 1 Valve Stems Set TR412 TR413', 7: 'MARASTAR 21446-2PK 15x6.00-6" Front Tire Assembly Replacement-Craftsman Mower, Pack of 2', 8: '15x6.00-6" Front Tire Assembly Replacement for 100 and 300 Ser
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_28
Front Tire Assembly Replacement-Craftsman Mower, Pack of 2', 8: '15x6.00-6" Front Tire Assembly Replacement for 100 and 300 Series John Deere Riding Mowers - 2 pack', 9: 'Honda HRR Wheel Kit (2 Front 44710-VL0-L02ZB, 2 Back 42710-VE2-M02ZE)', 10: 'Honda 42710-VE2-M02ZE (Replaces 42710-VE2-M01ZE) Lawn Mower Rear Wheel Set of 2' ... Use the trained smangrul/peft_lora_e5_ecommerce_semantic_search_colab model to get the product embeddings: Co
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_29
l Set of 2' ... Use the trained smangrul/peft_lora_e5_ecommerce_semantic_search_colab model to get the product embeddings: Copied # base model model = AutoModelForSentenceEmbedding(model_name_or_path, tokenizer) # peft config and wrapping model = PeftModel.from_pretrained(model, peft_model_id) device = "cuda" model.to(device) model.eval() model = model.merge_and_unload() import numpy as np num_products= len(dataset) d = 1024 product_embe
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_30
l.to(device) model.eval() model = model.merge_and_unload() import numpy as np num_products= len(dataset) d = 1024 product_embeddings_array = np.zeros((num_products, d)) for step, batch in enumerate(tqdm(dataloader)): with torch.no_grad(): with torch.amp.autocast(dtype=torch.bfloat16, device_type="cuda"): product_embs = model(**{k:v.to(device) for k, v in batch.items()}).detach().float().cpu() start_index = step*bat
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_31
product_embs = model(**{k:v.to(device) for k, v in batch.items()}).detach().float().cpu() start_index = step*batch_size end_index = start_index+batch_size if (start_index+batch_size) < num_products else num_products product_embeddings_array[start_index:end_index] = product_embs del product_embs, batch Create a search index using HNSWlib: Copied def construct_search_index(dim, num_elements, data): # Declaring
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_32
embs, batch Create a search index using HNSWlib: Copied def construct_search_index(dim, num_elements, data): # Declaring index search_index = hnswlib.Index(space = 'ip', dim = dim) # possible options are l2, cosine or ip # Initializing index - the maximum number of elements should be known beforehand search_index.init_index(max_elements = num_elements, ef_construction = 200, M = 100) # Element insertion (can be call
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_33
d search_index.init_index(max_elements = num_elements, ef_construction = 200, M = 100) # Element insertion (can be called several times): ids = np.arange(num_elements) search_index.add_items(data, ids) return search_index product_search_index = construct_search_index(d, num_products, product_embeddings_array) Get the query embeddings and nearest neighbors: Copied def get_query_embeddings(query, model, tokenizer, device
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_34
ddings_array) Get the query embeddings and nearest neighbors: Copied def get_query_embeddings(query, model, tokenizer, device): inputs = tokenizer(query, padding="max_length", max_length=70, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): query_embs = model(**{k:v.to(device) for k, v in inputs.items()}).detach().cpu() return query_embs[0] def get_nearest_neighbours(k, search_index, query
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_35
ce) for k, v in inputs.items()}).detach().cpu() return query_embs[0] def get_nearest_neighbours(k, search_index, query_embeddings, ids_to_products_dict, threshold=0.7): # Controlling the recall by setting ef: search_index.set_ef(100) # ef should always be > k # Query dataset, k - number of the closest elements (returns 2 numpy arrays) labels, distances = search_index.knn_query(query_embeddings, k = k) return
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_36
osest elements (returns 2 numpy arrays) labels, distances = search_index.knn_query(query_embeddings, k = k) return [(ids_to_products_dict[label], (1-distance)) for label, distance in zip(labels[0], distances[0]) if (1-distance)>=threshold] Let’s test it out with the query deep learning books: Copied query = "deep learning books" k = 10 query_embeddings = get_query_embeddings(query, model, tokenizer, device) search_results = get_
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_37
ry = "deep learning books" k = 10 query_embeddings = get_query_embeddings(query, model, tokenizer, device) search_results = get_nearest_neighbours(k, product_search_index, query_embeddings, ids_to_products_dict, threshold=0.7) print(f"{query=}") for product, cosine_sim_score in search_results: print(f"cosine_sim_score={round(cosine_sim_score,2)} {product=}") Output: Copied query='deep learning books' cosine_sim_score=0.95 product='Deep
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_38
score={round(cosine_sim_score,2)} {product=}") Output: Copied query='deep learning books' cosine_sim_score=0.95 product='Deep Learning (The MIT Press Essential Knowledge series)' cosine_sim_score=0.93 product='Practical Deep Learning: A Python-Based Introduction' cosine_sim_score=0.9 product='Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems' cosine_sim_score=0.9 product=
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_39
ng with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems' cosine_sim_score=0.9 product='Machine Learning: A Hands-On, Project-Based Introduction to Machine Learning for Absolute Beginners: Mastering Engineering ML Systems using Scikit-Learn and TensorFlow' cosine_sim_score=0.9 product='Mastering Machine Learning on AWS: Advanced machine learning in Python using SageMaker, Apache Spark, and TensorFlow' co
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_40
roduct='Mastering Machine Learning on AWS: Advanced machine learning in Python using SageMaker, Apache Spark, and TensorFlow' cosine_sim_score=0.9 product='The Hundred-Page Machine Learning Book' cosine_sim_score=0.89 product='Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems' cosine_sim_score=0.89 product='Machine Learning: A Journey from Beginner to Advanced Includ
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_41
niques to Build Intelligent Systems' cosine_sim_score=0.89 product='Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow' cosine_sim_score=0.88 product='Mastering Machine Learning with scikit-learn' cosine_sim_score=0.88 product='Mastering Machine Learning with scikit-learn - Second Edition: Apply effective learning algorithms to real-world problems using scikit-learn' Books on deep learning
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_42
it-learn - Second Edition: Apply effective learning algorithms to real-world problems using scikit-learn' Books on deep learning and machine learning are retrieved even though machine learning wasn’t included in the query. This means the model has learned that these books are semantically relevant to the query based on the purchase behavior of customers on Amazon. The next steps would ideally involve using ONNX/TensorRT to optimize the model a
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
5756e54d888a16fcd03fa7c9e3b0ec6b.txt_chunk_43
the purchase behavior of customers on Amazon. The next steps would ideally involve using ONNX/TensorRT to optimize the model and using a Triton server to host it. Check out 🤗 Optimum for related optimizations for efficient serving!
5756e54d888a16fcd03fa7c9e3b0ec6b.txt
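The closing suggestion above points at ONNX/TensorRT, Triton and 🤗 Optimum without showing any code. A minimal, illustrative export sketch with Optimum's ONNX Runtime integration follows; the checkpoint path is a placeholder for wherever the merged (merge_and_unload) model was saved with save_pretrained, and Triton/TensorRT deployment details are out of scope:
Copied
# Illustrative only: export the merged embedding model to ONNX with 🤗 Optimum.
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

checkpoint = "path/to/merged-e5-lora-model"  # placeholder directory with the merged weights
ort_model = ORTModelForFeatureExtraction.from_pretrained(checkpoint, export=True)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

ort_model.save_pretrained("e5_onnx")   # writes the ONNX graph and config for ONNX Runtime serving
tokenizer.save_pretrained("e5_onnx")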
464c425961bdab9a2c94aa432413e46b.txt_chunk_1
IA3 This conceptual guide gives a brief overview of IA3, a parameter-efficient fine-tuning technique that is intended to improve over LoRA. To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) rescales inner activations with learned vectors. These learned vectors are injected in the attention and feedforward modules in a typical transformer-based architecture. These learned vectors are the
464c425961bdab9a2c94aa432413e46b.txt
464c425961bdab9a2c94aa432413e46b.txt_chunk_2
re injected in the attention and feedforward modules in a typical transformer-based architecture. These learned vectors are the only trainable parameters during fine-tuning, and thus the original weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA) keeps the number of trainable parameters much smaller. Being similar to LoRA, IA3 carries many of the same advantages: IA3 makes
464c425961bdab9a2c94aa432413e46b.txt
464c425961bdab9a2c94aa432413e46b.txt_chunk_3
eps the number of trainable parameters much smaller. Being similar to LoRA, IA3 carries many of the same advantages: IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%) The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstr
464c425961bdab9a2c94aa432413e46b.txt
464c425961bdab9a2c94aa432413e46b.txt_chunk_4
l pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them. Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models. IA3 does not add any inference latency because adapter weights can be merged with the base model. In principle, IA3 can be applied to any subset of weight matrices in a neural network
464c425961bdab9a2c94aa432413e46b.txt
464c425961bdab9a2c94aa432413e46b.txt_chunk_5
eights can be merged with the base model. In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Following the authors’ implementation, IA3 weights are added to the key, value and feedforward layers of a Transformer model. Given the target layers for injecting IA3 parameters, the number of trainable parameters can be determined based on the size of the weight matrices.
464c425961bdab9a2c94aa432413e46b.txt
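To make the rescaling concrete: the IA3 paper learns three vectors, l_k, l_v and l_ff, and applies them roughly as follows (a sketch of the formulation, with W_1 and W_2 the feedforward weights, gamma the activation, and element-wise products denoted by the odot symbol):
\text{Attention:}\quad \mathrm{softmax}\!\left(\frac{Q\,(l_k \odot K^{\top})}{\sqrt{d_k}}\right)(l_v \odot V)
\qquad
\text{Feedforward:}\quad \bigl(l_{\mathrm{ff}} \odot \gamma(W_1 x)\bigr)\,W_2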
464c425961bdab9a2c94aa432413e46b.txt_chunk_6
ers for injecting IA3 parameters, the number of trainable parameters can be determined based on the size of the weight matrices. Common IA3 parameters in PEFT As with other methods supported by PEFT, to fine-tune a model using IA3, you need to: Instantiate a base model. Create a configuration (IA3Config) where you define IA3-specific parameters. Wrap the base model with get_peft_model() to get a trainable PeftModel. Train the PeftModel as y
464c425961bdab9a2c94aa432413e46b.txt
464c425961bdab9a2c94aa432413e46b.txt_chunk_7
define IA3-specific parameters. Wrap the base model with get_peft_model() to get a trainable PeftModel. Train the PeftModel as you normally would train the base model. IA3Config allows you to control how IA3 is applied to the base model through the following parameters: target_modules: The modules (for example, attention blocks) to apply the IA3 vectors. feedforward_modules: The list of modules to be treated as feedforward layers in target_mod
464c425961bdab9a2c94aa432413e46b.txt
464c425961bdab9a2c94aa432413e46b.txt_chunk_8
ion blocks) to apply the IA3 vectors. feedforward_modules: The list of modules to be treated as feedforward layers in target_modules. While learned vectors are multiplied with the output activation for attention blocks, the vectors are multiplied with the input for classic feedforward layers. modules_to_save: List of modules apart from IA3 layers to be set as trainable and saved in the final checkpoint. These typically include model’s custom he
464c425961bdab9a2c94aa432413e46b.txt
464c425961bdab9a2c94aa432413e46b.txt_chunk_9
odules apart from IA3 layers to be set as trainable and saved in the final checkpoint. These typically include model’s custom head that is randomly initialized for the fine-tuning task.
464c425961bdab9a2c94aa432413e46b.txt
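The IA3Config parameters described above can be put together as a short example. This is a minimal sketch: the base model and the target/feedforward module names are assumptions chosen for a T5-style seq2seq model, not values taken from this guide, and they must match the layer names of the architecture you actually use (feedforward_modules must be a subset of target_modules):
Copied
# Illustrative IA3 setup; module names are assumptions for a T5-style model.
from transformers import AutoModelForSeq2SeqLM
from peft import IA3Config, get_peft_model, TaskType

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-small")  # assumed base model

peft_config = IA3Config(
    task_type=TaskType.SEQ_2_SEQ_LM,
    target_modules=["k", "v", "wo"],   # key/value projections and feedforward output layer
    feedforward_modules=["wo"],        # subset of target_modules treated as feedforward
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()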
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_1
Prompt tuning for causal language modeling Prompting helps guide language model behavior by adding some input text specific to a task. Prompt tuning is an additive method for only training and updating the newly added prompt tokens to a pretrained model. This way, you can use one pretrained model whose weights are frozen, and train and update a smaller set of prompt parameters for each downstream task instead of fully finetuning a
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_2
ghts are frozen, and train and update a smaller set of prompt parameters for each downstream task instead of fully finetuning a separate model. As models grow larger and larger, prompt tuning can be more efficient, and results are even better as model parameters scale. 💡 Read The Power of Scale for Parameter-Efficient Prompt Tuning to learn more about prompt tuning. This guide will show you how to apply prompt tuning to train a bloomz-560m mode
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_3
Prompt Tuning to learn more about prompt tuning. This guide will show you how to apply prompt tuning to train a bloomz-560m model on the twitter_complaints subset of the RAFT dataset. Before you begin, make sure you have all the necessary libraries installed: Copied !pip install -q peft transformers datasets Setup Start by defining the model and tokenizer, the dataset and the dataset columns to train on, some training hyperparameters, and
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_4
Start by defining the model and tokenizer, the dataset and the dataset columns to train on, some training hyperparameters, and the PromptTuningConfig. The PromptTuningConfig contains information about the task type, the text to initialize the prompt embedding, the number of virtual tokens, and the tokenizer to use: Copied from transformers import AutoModelForCausalLM, AutoTokenizer, default_data_collator, get_linear_schedule_with_warmup fr
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_5
Copied from transformers import AutoModelForCausalLM, AutoTokenizer, default_data_collator, get_linear_schedule_with_warmup from peft import get_peft_config, get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType, PeftType import torch from datasets import load_dataset import os from torch.utils.data import DataLoader from tqdm import tqdm device = "cuda" model_name_or_path = "bigscience/bloomz-560m" tokenizer_name_or_path = "bigscie
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_6
ataLoader from tqdm import tqdm device = "cuda" model_name_or_path = "bigscience/bloomz-560m" tokenizer_name_or_path = "bigscience/bloomz-560m" peft_config = PromptTuningConfig( task_type=TaskType.CAUSAL_LM, prompt_tuning_init=PromptTuningInit.TEXT, num_virtual_tokens=8, prompt_tuning_init_text="Classify if the tweet is a complaint or not:", tokenizer_name_or_path=model_name_or_path, ) dataset_name = "twitter_complaints" c
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_7
fy if the tweet is a complaint or not:", tokenizer_name_or_path=model_name_or_path, ) dataset_name = "twitter_complaints" checkpoint_name = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}_v1.pt".replace( "/", "_" ) text_column = "Tweet text" label_column = "text_label" max_length = 64 lr = 3e-2 num_epochs = 50 batch_size = 8 Load dataset For this guide, you’ll load the twitter_complaints subset
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_8
_length = 64 lr = 3e-2 num_epochs = 50 batch_size = 8 Load dataset For this guide, you’ll load the twitter_complaints subset of the RAFT dataset. This subset contains tweets that are labeled either complaint or no complaint: Copied dataset = load_dataset("ought/raft", dataset_name) dataset["train"][0] {"Tweet text": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2} To make the Label column more readable, replace the Label val
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_9
t": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2} To make the Label column more readable, replace the Label value with the corresponding label text and store them in a text_label column. You can use the map function to apply this change over the entire dataset in one step: Copied classes = [k.replace("_", " ") for k in dataset["train"].features["Label"].names] dataset = dataset.map( lambda x: {"text_label": [classes[labe
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_10
e("_", " ") for k in dataset["train"].features["Label"].names] dataset = dataset.map( lambda x: {"text_label": [classes[label] for label in x["Label"]]}, batched=True, num_proc=1, ) dataset["train"][0] {"Tweet text": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2, "text_label": "no complaint"} Preprocess dataset Next, you’ll setup a tokenizer; configure the appropriate padding token to use for padding sequences, an
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_11
} Preprocess dataset Next, you’ll set up a tokenizer; configure the appropriate padding token to use for padding sequences, and determine the maximum length of the tokenized labels: Copied tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) if tokenizer.pad_token_id is None: tokenizer.pad_token_id = tokenizer.eos_token_id target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes]) print(targ
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_12
tokenizer.eos_token_id target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes]) print(target_max_length) 3 Create a preprocess_function to: Tokenize the input text and labels. For each example in a batch, pad the labels with the tokenizer’s pad_token_id. Concatenate the input text and labels into the model_inputs. Create a separate attention mask for labels and model_inputs. Loop through each example in the
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_13
nd labels into the model_inputs. Create a separate attention mask for labels and model_inputs. Loop through each example in the batch again to pad the input ids, labels, and attention mask to the max_length and convert them to PyTorch tensors. Copied def preprocess_function(examples): batch_size = len(examples[text_column]) inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]] targets = [str(x) for x in exampl
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_14
ext_column]) inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]] targets = [str(x) for x in examples[label_column]] model_inputs = tokenizer(inputs) labels = tokenizer(targets) for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] + [tokenizer.pad_token_id] # print(i, sample_input_ids, label_input_ids) model_i
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_15
ut_ids = labels["input_ids"][i] + [tokenizer.pad_token_id] # print(i, sample_input_ids, label_input_ids) model_inputs["input_ids"][i] = sample_input_ids + label_input_ids labels["input_ids"][i] = [-100] * len(sample_input_ids) + label_input_ids model_inputs["attention_mask"][i] = [1] * len(model_inputs["input_ids"][i]) # print(model_inputs) for i in range(batch_size): sample_input_ids = model_inpu
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_16
en(model_inputs["input_ids"][i]) # print(model_inputs) for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * ( max_length - len(sample_input_ids) ) + sample_input_ids model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_17
+ sample_input_ids model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[ "attention_mask" ][i] labels["input_ids"][i] = [-100] * (max_length - len(sample_input_ids)) + label_input_ids model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length]) model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_18
inputs["input_ids"][i][:max_length]) model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length]) labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length]) model_inputs["labels"] = labels["input_ids"] return model_inputs Use the map function to apply the preprocess_function to the entire dataset. You can remove the unprocessed columns since the model won’t need them:
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_19
o apply the preprocess_function to the entire dataset. You can remove the unprocessed columns since the model won’t need them: Copied processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) Create a DataLoader from the train and eval datasets. Set pin_memory=True to speed up the dat
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_20
="Running tokenizer on dataset", ) Create a DataLoader from the train and eval datasets. Set pin_memory=True to speed up the data transfer to the GPU during training if the samples in your dataset are on a CPU. Copied train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["test"] train_dataloader = DataLoader( train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True )
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_21
oader = DataLoader( train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True ) eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True) Train You’re almost ready to set up your model and start training! Initialize a base model from AutoModelForCausalLM, and pass it and peft_config to the get_peft_model() function to create a PeftModel.
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_22
lize a base model from AutoModelForCausalLM, and pass it and peft_config to the get_peft_model() function to create a PeftModel. You can print the new PeftModel’s trainable parameters to see how much more efficient it is than training the full parameters of the original model! Copied model = AutoModelForCausalLM.from_pretrained(model_name_or_path) model = get_peft_model(model, peft_config) model.print_trainable_parameters() "trainable
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_23
m_pretrained(model_name_or_path) model = get_peft_model(model, peft_config) model.print_trainable_parameters() "trainable params: 8192 || all params: 559222784 || trainable%: 0.0014648902430985358" Set up an optimizer and learning rate scheduler: Copied optimizer = torch.optim.AdamW(model.parameters(), lr=lr) lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0, num_training_steps=(len(tra
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_24
lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0, num_training_steps=(len(train_dataloader) * num_epochs), ) Move the model to the GPU, then write a training loop to start training! Copied model = model.to(device) for epoch in range(num_epochs): model.train() total_loss = 0 for step, batch in enumerate(tqdm(train_dataloader)): batch = {k: v.to(device) for k, v in batch.i
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_25
total_loss = 0 for step, batch in enumerate(tqdm(train_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss total_loss += loss.detach().float() loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() eval_loss = 0 eval_preds = [] for step, batch in enumerate(tqdm(eval_d
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_26
optimizer.zero_grad() model.eval() eval_loss = 0 eval_preds = [] for step, batch in enumerate(tqdm(eval_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = outputs.loss eval_loss += loss.detach().float() eval_preds.extend( tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu()
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_27
s.detach().float() eval_preds.extend( tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True) ) eval_epoch_loss = eval_loss / len(eval_dataloader) eval_ppl = torch.exp(eval_epoch_loss) train_epoch_loss = total_loss / len(train_dataloader) train_ppl = torch.exp(train_epoch_loss) print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_e
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_28
taloader) train_ppl = torch.exp(train_epoch_loss) print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}") Share model You can store and share your model on the Hub if you’d like. Log in to your Hugging Face account and enter your token when prompted: Copied from huggingface_hub import notebook_login notebook_login() Use the push_to_hub function to upload your model to a model repository on the Hub:
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_29
import notebook_login notebook_login() Use the push_to_hub function to upload your model to a model repository on the Hub: Copied peft_model_id = "your-name/bloomz-560m_PROMPT_TUNING_CAUSAL_LM" model.push_to_hub("your-name/bloomz-560m_PROMPT_TUNING_CAUSAL_LM", use_auth_token=True) Once the model is uploaded, you’ll see the model file size is only 33.5kB! 🤏 Inference Let’s try the model on a sample input for inference. If you look at the
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_30
l see the model file size is only 33.5kB! 🤏 Inference Let’s try the model on a sample input for inference. If you look at the repository you uploaded the model to, you’ll see an adapter_config.json file. Load this file into PeftConfig to specify the peft_type and task_type. Then you can load the prompt-tuned model weights and the configuration into from_pretrained() to create the PeftModel: Copied from peft import PeftModel, PeftConfig p
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_31
ights, and the configuration into from_pretrained() to create the PeftModel: Copied from peft import PeftModel, PeftConfig peft_model_id = "stevhliu/bloomz-560m_PROMPT_TUNING_CAUSAL_LM" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path) model = PeftModel.from_pretrained(model, peft_model_id) Grab a tweet and tokenize it: Copied inputs = tokenizer( f'{text_c
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_32
odel = PeftModel.from_pretrained(model, peft_model_id) Grab a tweet and tokenize it: Copied inputs = tokenizer( f'{text_column} : {"@nationalgridus I have no water and the bill is current and paid. Can you do something about this?"} Label : ', return_tensors="pt", ) Put the model on a GPU and generate the predicted label: Copied model.to(device) with torch.no_grad(): inputs = {k: v.to(device) for k, v in inputs.items()} o
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_33
edicted label: Copied model.to(device) with torch.no_grad(): inputs = {k: v.to(device) for k, v in inputs.items()} outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=10, eos_token_id=3 ) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)) [ "Tweet text : @nationalgridus I have no water and the bill is current and pai
1f28fac0ee53a9a0af0f561a85cae02b.txt
1f28fac0ee53a9a0af0f561a85cae02b.txt_chunk_34
().cpu().numpy(), skip_special_tokens=True)) [ "Tweet text : @nationalgridus I have no water and the bill is current and paid. Can you do something about this? Label : complaint" ]
1f28fac0ee53a9a0af0f561a85cae02b.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_1
Working with custom models Some fine-tuning techniques, such as prompt tuning, are specific to language models. That means in 🤗 PEFT, it is assumed a 🤗 Transformers model is being used. However, other fine-tuning techniques - like LoRA - are not restricted to specific model types. In this guide, we will see how LoRA can be applied to a multilayer perceptron and a computer vision model from the timm library. Multilayer perceptron Let’s assu
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_2
can be applied to a multilayer perceptron and a computer vision model from the timm library. Multilayer perceptron Let’s assume that we want to fine-tune a multilayer perceptron with LoRA. Here is the definition: Copied from torch import nn class MLP(nn.Module): def __init__(self, num_units_hidden=2000): super().__init__() self.seq = nn.Sequential( nn.Linear(20, num_units_hidden), nn.ReLU(),
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_3
super().__init__() self.seq = nn.Sequential( nn.Linear(20, num_units_hidden), nn.ReLU(), nn.Linear(num_units_hidden, num_units_hidden), nn.ReLU(), nn.Linear(num_units_hidden, 2), nn.LogSoftmax(dim=-1), ) def forward(self, X): return self.seq(X) This is a straightforward multilayer perceptron with an input layer, a hidden layer, and an outp
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_4
X): return self.seq(X) This is a straightforward multilayer perceptron with an input layer, a hidden layer, and an output layer. For this toy example, we choose an exceedingly large number of hidden units to highlight the efficiency gains from PEFT, but those gains are in line with more realistic examples. There are a few linear layers in this model that could be tuned with LoRA. When working with common 🤗 Transformers models, PEFT wi
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_5
ere are a few linear layers in this model that could be tuned with LoRA. When working with common 🤗 Transformers models, PEFT will know which layers to apply LoRA to, but in this case, it is up to us as a user to choose the layers. To determine the names of the layers to tune: Copied print([(n, type(m)) for n, m in MLP().named_modules()]) This should print: Copied [('', __main__.MLP), ('seq', torch.nn.modules.container.Sequential), ('se
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_6
MLP().named_modules()]) This should print: Copied [('', __main__.MLP), ('seq', torch.nn.modules.container.Sequential), ('seq.0', torch.nn.modules.linear.Linear), ('seq.1', torch.nn.modules.activation.ReLU), ('seq.2', torch.nn.modules.linear.Linear), ('seq.3', torch.nn.modules.activation.ReLU), ('seq.4', torch.nn.modules.linear.Linear), ('seq.5', torch.nn.modules.activation.LogSoftmax)] Let’s say we want to apply LoRA to the input laye
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_7
nn.modules.linear.Linear), ('seq.5', torch.nn.modules.activation.LogSoftmax)] Let’s say we want to apply LoRA to the input layer and to the hidden layer; those are 'seq.0' and 'seq.2'. Moreover, let’s assume we want to update the output layer without LoRA; that would be 'seq.4'. The corresponding config would be: Copied from peft import LoraConfig config = LoraConfig( target_modules=["seq.0", "seq.2"], modules_to_save=["seq.4"], )
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_8
opied from peft import LoraConfig config = LoraConfig( target_modules=["seq.0", "seq.2"], modules_to_save=["seq.4"], ) With that, we can create our PEFT model and check the fraction of parameters trained: Copied from peft import get_peft_model model = MLP() peft_model = get_peft_model(model, config) peft_model.print_trainable_parameters() # prints trainable params: 56,164 || all params: 4,100,164 || trainable%: 1.369798866581922 F
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_9
model.print_trainable_parameters() # prints trainable params: 56,164 || all params: 4,100,164 || trainable%: 1.369798866581922 Finally, we can use any training framework we like, or write our own fit loop, to train the peft_model. For a complete example, check out this notebook. timm model The timm library contains a large number of pretrained computer vision models. Those can also be fine-tuned with PEFT. Let’s check out how this works in p
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_10
a large number of pretrained computer vision models. Those can also be fine-tuned with PEFT. Let’s check out how this works in practice. To start, ensure that timm is installed in the Python environment: Copied python -m pip install -U timm Next we load a timm model for an image classification task: Copied import timm num_classes = ... model_id = "timm/poolformer_m36.sail_in1k" model = timm.create_model(model_id, pretrained=True, num_cla
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_11
timm num_classes = ... model_id = "timm/poolformer_m36.sail_in1k" model = timm.create_model(model_id, pretrained=True, num_classes=num_classes) Again, we need to make a decision about what layers to apply LoRA to. Since LoRA supports 2D conv layers, and since those are a major building block of this model, we should apply LoRA to the 2D conv layers. To identify the names of those layers, let’s look at all the layer names: Copied print([(n,
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_12
apply LoRA to the 2D conv layers. To identify the names of those layers, let’s look at all the layer names: Copied print([(n, type(m)) for n, m in model.named_modules()]) This will print a very long list; we’ll only show the first few: Copied [('', timm.models.metaformer.MetaFormer), ('stem', timm.models.metaformer.Stem), ('stem.conv', torch.nn.modules.conv.Conv2d), ('stem.norm', torch.nn.modules.linear.Identity), ('stages', torch.nn.
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_13
mer.Stem), ('stem.conv', torch.nn.modules.conv.Conv2d), ('stem.norm', torch.nn.modules.linear.Identity), ('stages', torch.nn.modules.container.Sequential), ('stages.0', timm.models.metaformer.MetaFormerStage), ('stages.0.downsample', torch.nn.modules.linear.Identity), ('stages.0.blocks', torch.nn.modules.container.Sequential), ('stages.0.blocks.0', timm.models.metaformer.MetaFormerBlock), ('stages.0.blocks.0.norm1', timm.layers.norm.Gro
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_14
r.Sequential), ('stages.0.blocks.0', timm.models.metaformer.MetaFormerBlock), ('stages.0.blocks.0.norm1', timm.layers.norm.GroupNorm1), ('stages.0.blocks.0.token_mixer', timm.models.metaformer.Pooling), ('stages.0.blocks.0.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d), ('stages.0.blocks.0.drop_path1', torch.nn.modules.linear.Identity), ('stages.0.blocks.0.layer_scale1', timm.models.metaformer.Scale), ('stages.0.blocks.0.res_scal
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_15
ch.nn.modules.linear.Identity), ('stages.0.blocks.0.layer_scale1', timm.models.metaformer.Scale), ('stages.0.blocks.0.res_scale1', torch.nn.modules.linear.Identity), ('stages.0.blocks.0.norm2', timm.layers.norm.GroupNorm1), ('stages.0.blocks.0.mlp', timm.layers.mlp.Mlp), ('stages.0.blocks.0.mlp.fc1', torch.nn.modules.conv.Conv2d), ('stages.0.blocks.0.mlp.act', torch.nn.modules.activation.GELU), ('stages.0.blocks.0.mlp.drop1', torch.nn.mo
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_16
les.conv.Conv2d), ('stages.0.blocks.0.mlp.act', torch.nn.modules.activation.GELU), ('stages.0.blocks.0.mlp.drop1', torch.nn.modules.dropout.Dropout), ('stages.0.blocks.0.mlp.norm', torch.nn.modules.linear.Identity), ('stages.0.blocks.0.mlp.fc2', torch.nn.modules.conv.Conv2d), ('stages.0.blocks.0.mlp.drop2', torch.nn.modules.dropout.Dropout), ('stages.0.blocks.0.drop_path2', torch.nn.modules.linear.Identity), ('stages.0.blocks.0.layer_sca
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_17
nn.modules.dropout.Dropout), ('stages.0.blocks.0.drop_path2', torch.nn.modules.linear.Identity), ('stages.0.blocks.0.layer_scale2', timm.models.metaformer.Scale), ('stages.0.blocks.0.res_scale2', torch.nn.modules.linear.Identity), ('stages.0.blocks.1', timm.models.metaformer.MetaFormerBlock), ('stages.0.blocks.1.norm1', timm.layers.norm.GroupNorm1), ('stages.0.blocks.1.token_mixer', timm.models.metaformer.Pooling), ('stages.0.blocks.1.to
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_18
orm1', timm.layers.norm.GroupNorm1), ('stages.0.blocks.1.token_mixer', timm.models.metaformer.Pooling), ('stages.0.blocks.1.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d), ... ('head.global_pool.flatten', torch.nn.modules.linear.Identity), ('head.norm', timm.layers.norm.LayerNorm2d), ('head.flatten', torch.nn.modules.flatten.Flatten), ('head.drop', torch.nn.modules.linear.Identity), ('head.fc', torch.nn.modules.linear.Linear)]
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_19
h.nn.modules.flatten.Flatten), ('head.drop', torch.nn.modules.linear.Identity), ('head.fc', torch.nn.modules.linear.Linear)] ] Upon closer inspection, we see that the 2D conv layers have names such as "stages.0.blocks.0.mlp.fc1" and "stages.0.blocks.0.mlp.fc2". How can we match those layer names specifically? You can write a regular expression to match the layer names. For our case, the regex r".*\.mlp\.fc\d" should do the job. Furthermore,
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_20
n write a regular expression to match the layer names. For our case, the regex r".*\.mlp\.fc\d" should do the job. Furthermore, as in the first example, we should ensure that the output layer, in this case the classification head, is also updated. Looking at the end of the list printed above, we can see that it’s named 'head.fc'. With that in mind, here is our LoRA config: Copied config = LoraConfig(target_modules=r".*\.mlp\.fc\d", modules_
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_21
d 'head.fc'. With that in mind, here is our LoRA config: Copied config = LoraConfig(target_modules=r".*\.mlp\.fc\d", modules_to_save=["head.fc"]) Then we only need to create the PEFT model by passing our base model and the config to get_peft_model: Copied peft_model = get_peft_model(model, config) peft_model.print_trainable_parameters() # prints trainable params: 1,064,454 || all params: 56,467,974 || trainable%: 1.88505789139876 This sho
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt_chunk_22
t_trainable_parameters() # prints trainable params: 1,064,454 || all params: 56,467,974 || trainable%: 1.88505789139876 This shows us that we only need to train less than 2% of all parameters, which is a huge efficiency gain. For a complete example, check out this notebook.
f7ab4b8c7f9868b90a0abbe1f70c3a10.txt