Quicktour

Instead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adaptation (LoRA) to reduce the number of trainable parameters. This quicktour will show you 🤗 PEFT’s main features and help you train large pretrained models that would typically be inaccessible on consumer devices. You’ll see how to train the 1.2B parameter bigscience/mt0-large model with LoRA to generate a classification label and use it for inference.

PeftConfig

Each 🤗 PEFT method is defined by a PeftConfig class that stores all the important parameters for building a PeftModel. Because you’re going to use LoRA, you’ll need to load and create a LoraConfig class. Within LoraConfig, specify the following parameters:

- task_type, or sequence-to-sequence language modeling in this case
- inference_mode, whether you’re using the model for inference or not
- r, the dimension of the low-rank matrices
- lora_alpha, the scaling factor for the low-rank matrices
- lora_dropout, the dropout probability of the LoRA layers
from peft import LoraConfig, TaskType

peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)

💡 See the LoraConfig reference for more details about other parameters you can adjust.
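To get a feel for why r=8 keeps things small, here is a back-of-the-envelope count for a single hypothetical 1024x1024 projection matrix; the real totals depend on which modules LoRA targets in your model.

d_in = d_out = 1024                  # illustrative layer size, not mt0-large's actual config
r = 8
full_update = d_in * d_out           # 1,048,576 values if you fine-tuned this matrix directly
lora_update = r * (d_in + d_out)     # 16,384 values for the low-rank A and B factors
print(lora_update / full_update)     # ~0.016, i.e. under 2% of the original matrix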
PeftModel

A PeftModel is created by the get_peft_model() function. It takes a base model - which you can load from the 🤗 Transformers library - and the PeftConfig containing the instructions for how to configure a model for a specific 🤗 PEFT method.

Start by loading the base model you want to finetune.

from transformers import AutoModelForSeq2SeqLM

model_name_or_path = "bigscience/mt0-large"
tokenizer_name_or_path = "bigscience/mt0-large"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)

Wrap your base model and peft_config with the get_peft_model function to create a PeftModel. To get a sense of the number of trainable parameters in your model, use the print_trainable_parameters method. In this case, you’re only training 0.19% of the model’s parameters! 🤏
from peft import get_peft_model

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"output: trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282"

That is it 🎉! Now you can train the model using the 🤗 Transformers Trainer, 🤗 Accelerate, or any custom PyTorch training loop.
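The quicktour doesn’t walk through the training loop itself, so here is a minimal sketch of the Trainer route; it assumes you have already loaded the tokenizer and prepared a tokenized dataset train_ds with input_ids, attention_mask, and labels columns, and the output directory and hyperparameters are only illustrative.

from transformers import TrainingArguments, Trainer, DataCollatorForSeq2Seq

training_args = TrainingArguments(
    output_dir="mt0-large-lora",      # illustrative path
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,                      # the PeftModel created above
    args=training_args,
    train_dataset=train_ds,           # assumed: your tokenized dataset
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

Because only the LoRA parameters require gradients, the optimizer state stays small even though the full model participates in the forward pass.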
Save and load a model

After your model is finished training, you can save your model to a directory using the save_pretrained function. You can also save your model to the Hub (make sure you log in to your Hugging Face account first) with the push_to_hub function.

model.save_pretrained("output_dir")

# if pushing to Hub
from huggingface_hub import notebook_login

notebook_login()
model.push_to_hub("my_awesome_peft_model")

This only saves the incremental 🤗 PEFT weights that were trained, meaning it is super efficient to store, transfer, and load. For example, this bigscience/T0_3B model trained with LoRA on the twitter_complaints subset of the RAFT dataset only contains two files: adapter_config.json and adapter_model.bin. The latter file is just 19MB! Easily load your model for inference using the from_pretrained function:
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ from peft import PeftModel, PeftConfig

+ peft_model_id = "smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM"
+ config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
+ model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

device = "cuda"  # assumes a CUDA device is available
model = model.to(device)
model.eval()
inputs = tokenizer("Tweet text : @HondaCustSvc Your customer service has been horrible during the recall process. I will never purchase a Honda again. Label :", return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=10)
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
'complaint'
Easy loading with Auto classes

If you have saved your adapter locally or on the Hub, you can leverage the AutoPeftModelForxxx classes and load any PEFT model with a single line of code:

- from peft import PeftConfig, PeftModel
- from transformers import AutoModelForCausalLM
+ from peft import AutoPeftModelForCausalLM

- peft_config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
- base_model_path = peft_config.base_model_name_or_path
- transformers_model = AutoModelForCausalLM.from_pretrained(base_model_path)
- peft_model = PeftModel.from_pretrained(transformers_model, peft_config)
+ peft_model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")

Currently, supported auto classes are: AutoPeftModelForCausalLM, AutoPeftModelForSequenceClassification, AutoPeftModelForSeq2SeqLM, AutoPeftModelForTokenClassification, AutoPeftModelForQuestionAnswering and AutoPeftModelForFeatureExtraction. For other tasks (e.g. Whisper, StableDiffusion), you can load the model with:
- from peft import PeftModel, PeftConfig, AutoPeftModel
+ from peft import AutoPeftModel
- from transformers import WhisperForConditionalGeneration

- model_id = "smangrul/openai-whisper-large-v2-LORA-colab"
peft_model_id = "smangrul/openai-whisper-large-v2-LORA-colab"
- peft_config = PeftConfig.from_pretrained(peft_model_id)
- model = WhisperForConditionalGeneration.from_pretrained(
-     peft_config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
- )
- model = PeftModel.from_pretrained(model, peft_model_id)
+ model = AutoPeftModel.from_pretrained(peft_model_id)
Next steps

Now that you’ve seen how to train a model with one of the 🤗 PEFT methods, we encourage you to try out some of the other methods like prompt tuning. The steps are very similar to the ones shown in this quickstart: prepare a PeftConfig for a 🤗 PEFT method, and use get_peft_model() to create a PeftModel from the configuration and base model. Then you can train it however you like! Feel free to also take a look at the task guides if you’re interested in training a model with a 🤗 PEFT method for a specific task such as semantic segmentation, multilingual automatic speech recognition, DreamBooth, and token classification.
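As a small illustration of how little changes between methods, here is a hedged sketch of swapping the LoraConfig used above for prompt tuning; the model choice simply mirrors this quicktour, and num_virtual_tokens=20 is an arbitrary example value.

from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
# only the configuration class changes; the wrapping step is identical
peft_config = PromptTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()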
IA3

This conceptual guide gives a brief overview of IA3, a parameter-efficient fine-tuning technique that is intended to improve over LoRA. To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) rescales inner activations with learned vectors. These learned vectors are injected into the attention and feedforward modules of a typical transformer-based architecture, and they are the only trainable parameters during fine-tuning, so the original weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA) keeps the number of trainable parameters much smaller.
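To make the rescaling idea concrete, here is a minimal PyTorch sketch (not PEFT’s implementation) of a learned vector scaling the output of a frozen linear layer; the layer size is illustrative.

import torch
import torch.nn as nn

class IA3ScaledLinear(nn.Module):
    def __init__(self, base_linear: nn.Linear):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False                # the original weights stay frozen
        # one learned scale per output feature, initialized to 1 so training starts at identity
        self.ia3_vector = nn.Parameter(torch.ones(base_linear.out_features))

    def forward(self, x):
        return self.base(x) * self.ia3_vector      # element-wise rescaling of the activation

layer = IA3ScaledLinear(nn.Linear(768, 768))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 768 trainable values

A 768x768 weight matrix has 589,824 entries, so the learned vector is a tiny fraction of the layer it modulates.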
Being similar to LoRA, IA3 carries many of the same advantages:

- IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%.)
- The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them.
- Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models.
- IA3 does not add any inference latency because adapter weights can be merged with the base model.
In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Following the authors’ implementation, IA3 weights are added to the key, value and feedforward layers of a Transformer model. Given the target layers for injecting IA3 parameters, the number of trainable parameters can be determined based on the size of the weight matrices.
Common IA3 parameters in PEFT

As with other methods supported by PEFT, to fine-tune a model using IA3, you need to:

1. Instantiate a base model.
2. Create a configuration (IA3Config) where you define IA3-specific parameters.
3. Wrap the base model with get_peft_model() to get a trainable PeftModel.
4. Train the PeftModel as you normally would train the base model.

IA3Config allows you to control how IA3 is applied to the base model through the following parameters (a short configuration sketch follows the list):

- target_modules: The modules (for example, attention blocks) to apply the IA3 vectors to.
- feedforward_modules: The list of modules to be treated as feedforward layers in target_modules. While the learned vectors are multiplied with the output activation for attention blocks, the vectors are multiplied with the input for classic feedforward layers.
- modules_to_save: List of modules apart from IA3 layers to be set as trainable and saved in the final checkpoint. These typically include the model’s custom head that is randomly initialized for the fine-tuning task.
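Here is a minimal, hedged sketch of such a configuration. The module names are an assumption for a T5-style model (key, value and feedforward-output projections); the modules you target depend on the architecture, and PEFT ships per-architecture defaults.

from peft import IA3Config, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")   # small model, purely for illustration
peft_config = IA3Config(
    task_type=TaskType.SEQ_2_SEQ_LM,
    target_modules=["k", "v", "wo"],    # assumed T5 module names
    feedforward_modules=["wo"],         # must be a subset of target_modules
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()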
Installation

Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 PEFT. 🤗 PEFT is tested on Python 3.8+.

🤗 PEFT is available on PyPI, as well as GitHub:

PyPI

To install 🤗 PEFT from PyPI:

pip install peft

Source

New features that haven’t been released yet are added every day, which also means there may be some bugs. To try them out, install from the GitHub repository:

pip install git+https://github.com/huggingface/peft

If you’re working on contributing to the library or wish to play with the source code and see live results as you run the code, an editable version can be installed from a locally-cloned version of the repository:

git clone https://github.com/huggingface/peft
cd peft
pip install -e .
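To confirm which version ended up in your environment, you can print it:

python -c "import peft; print(peft.__version__)"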
P-tuning for sequence classification

It is challenging to finetune large language models for downstream tasks because they have so many parameters. To work around this, you can use prompts to steer the model toward a particular downstream task without fully finetuning a model. Typically, these prompts are handcrafted, which may be impractical because you need very large validation sets to find the best prompts. P-tuning is a method for automatically searching and optimizing for better prompts in a continuous space.
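To make “a continuous space” concrete, here is a rough, framework-level sketch of the idea (not PEFT’s implementation): a small trainable encoder produces embeddings for a handful of virtual tokens, and those embeddings are prepended to the frozen model’s input embeddings; all sizes are illustrative.

import torch
import torch.nn as nn

hidden_size, num_virtual_tokens = 1024, 20          # illustrative sizes

# learnable inputs plus a tiny encoder that maps them to virtual-token embeddings
prompt_inputs = nn.Parameter(torch.randn(num_virtual_tokens, hidden_size))
prompt_encoder = nn.Sequential(
    nn.Linear(hidden_size, 128), nn.ReLU(), nn.Linear(128, hidden_size)
)

def prepend_virtual_tokens(input_embeds):           # input_embeds: (batch, seq_len, hidden)
    batch_size = input_embeds.size(0)
    virtual = prompt_encoder(prompt_inputs)                     # (num_virtual_tokens, hidden)
    virtual = virtual.unsqueeze(0).expand(batch_size, -1, -1)   # broadcast over the batch
    return torch.cat([virtual, input_embeds], dim=1)            # only these new params get gradients

x = torch.randn(2, 16, hidden_size)
print(prepend_virtual_tokens(x).shape)              # torch.Size([2, 36, 1024])

Gradient descent then searches over these continuous embeddings instead of over discrete handcrafted prompt wordings.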
💡 Read GPT Understands, Too to learn more about p-tuning.

This guide will show you how to train a roberta-large model (but you can also use any of the GPT, OPT, or BLOOM models) with p-tuning on the mrpc configuration of the GLUE benchmark. Before you begin, make sure you have all the necessary libraries installed:

!pip install -q peft transformers datasets evaluate
Setup

To get started, import 🤗 Transformers to create the base model, 🤗 Datasets to load a dataset, 🤗 Evaluate to load an evaluation metric, and 🤗 PEFT to create a PeftModel and set up the configuration for p-tuning.

Define the model, dataset, and some basic training hyperparameters:

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    TrainingArguments,
    Trainer,
)
from peft import (
    get_peft_config,
    get_peft_model,
    get_peft_model_state_dict,
    set_peft_model_state_dict,
    PeftType,
    PromptEncoderConfig,
)
from datasets import load_dataset
import evaluate
import torch

model_name_or_path = "roberta-large"
task = "mrpc"
num_epochs = 20
lr = 1e-3
batch_size = 32
Load dataset and metric

Next, load the mrpc configuration - a corpus of sentence pairs labeled according to whether they’re semantically equivalent or not - from the GLUE benchmark:

dataset = load_dataset("glue", task)
dataset["train"][0]
{
    "sentence1": 'Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .',
    "sentence2": 'Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .',
    "label": 1,
    "idx": 0,
}

From 🤗 Evaluate, load a metric for evaluating the model’s performance. The evaluation module returns the accuracy and F1 scores associated with this specific task.

metric = evaluate.load("glue", task)

Now you can use the metric to write a function that computes the accuracy and F1 scores. The compute_metrics function calculates the scores from the model predictions and labels:
import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return metric.compute(predictions=predictions, references=labels)

Preprocess dataset

Initialize the tokenizer and configure the padding token to use. If you’re using a GPT, OPT, or BLOOM model, you should set the padding_side to the left; otherwise it’ll be set to the right. Tokenize the sentence pairs and truncate them to the maximum length.
if any(k in model_name_or_path for k in ("gpt", "opt", "bloom")):
    padding_side = "left"
else:
    padding_side = "right"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side=padding_side)
if getattr(tokenizer, "pad_token_id") is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id

def tokenize_function(examples):
    # max_length=None => use the model max length (it's actually the default)
    outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
    return outputs

Use map to apply the tokenize_function to the dataset, and remove the unprocessed columns because the model won’t need those. You should also rename the label column to labels because that is the expected name for the labels by models in the 🤗 Transformers library.

tokenized_datasets = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["idx", "sentence1", "sentence2"],
)
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")

Create a collator function with DataCollatorWithPadding to pad the examples in the batches to the longest sequence in the batch:
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding="longest")

Train

P-tuning uses a prompt encoder to optimize the prompt parameters, so you’ll need to initialize the PromptEncoderConfig with several arguments:

- task_type: the type of task you’re training on, in this case it is sequence classification or SEQ_CLS
- num_virtual_tokens: the number of virtual tokens to use, or in other words, the prompt
- encoder_hidden_size: the hidden size of the encoder used to optimize the prompt parameters

peft_config = PromptEncoderConfig(task_type="SEQ_CLS", num_virtual_tokens=20, encoder_hidden_size=128)
Create the base roberta-large model from AutoModelForSequenceClassification, and then wrap the base model and peft_config with get_peft_model() to create a PeftModel. If you’re curious to see how many parameters you’re actually training compared to training on all the model parameters, you can print it out with print_trainable_parameters():

model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path, return_dict=True)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 1351938 || all params: 355662082 || trainable%: 0.38011867680626127"
From the 🤗 Transformers library, set up the TrainingArguments class with where you want to save the model to, the training hyperparameters, how to evaluate the model, and when to save the checkpoints:

training_args = TrainingArguments(
    output_dir="your-name/roberta-large-peft-p-tuning",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

Then pass the model, TrainingArguments, datasets, tokenizer, data collator, and evaluation function to the Trainer class, which’ll handle the entire training loop for you. Once you’re ready, call train to start training!

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()
Share model

You can store and share your model on the Hub if you’d like. Log in to your Hugging Face account and enter your token when prompted:

from huggingface_hub import notebook_login

notebook_login()

Upload the model to a specific model repository on the Hub with the push_to_hub function:

model.push_to_hub("your-name/roberta-large-peft-p-tuning", use_auth_token=True)
Inference

Once the model has been uploaded to the Hub, anyone can easily use it for inference. Load the configuration and model:

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer

peft_model_id = "smangrul/roberta-large-peft-p-tuning"
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(inference_model, peft_model_id)

Get some text and tokenize it:

classes = ["not equivalent", "equivalent"]

sentence1 = "Coast redwood trees are the tallest trees on the planet and can grow over 300 feet tall."
sentence2 = "The coast redwood trees, which can attain a height of over 300 feet, are the tallest trees on earth."

inputs = tokenizer(sentence1, sentence2, truncation=True, padding="longest", return_tensors="pt")

Pass the inputs to the model to classify the sentences:

with torch.no_grad():
    outputs = model(**inputs).logits
    print(outputs)

paraphrased_text = torch.softmax(outputs, dim=1).tolist()[0]
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(paraphrased_text[i] * 100))}%")
"not equivalent: 4%"
"equivalent: 96%"
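If you only want the predicted label rather than the percentages, you can also take the argmax over the same logits and index into the classes list defined above:

predicted = classes[int(torch.argmax(outputs, dim=1))]
print(predicted)  # 'equivalent'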
Prefix tuning for conditional generation

Prefix tuning is an additive method where only a sequence of continuous task-specific vectors is attached to the beginning of the input, or prefix. Only the prefix parameters are optimized and added to the hidden states in every layer of the model. The tokens of the input sequence can still attend to the prefix as virtual tokens. As a result, prefix tuning stores 1000x fewer parameters than a fully finetuned model, which means you can use one large language model for many tasks.

💡 Read Prefix-Tuning: Optimizing Continuous Prompts for Generation to learn more about prefix tuning.

This guide will show you how to apply prefix tuning to train a t5-large model on the sentences_allagree subset of the financial_phrasebank dataset.
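To picture what “added to the hidden states in every layer” means, here is a highly simplified PyTorch sketch (not PEFT’s implementation) of a trainable per-layer key/value prefix that an attention block could attend to alongside the real tokens; all sizes are illustrative.

import torch
import torch.nn as nn

num_layers, num_virtual_tokens, hidden_size = 4, 20, 256   # illustrative sizes

# one trainable key prefix and value prefix per layer; the base model stays frozen
prefix_keys = nn.Parameter(torch.randn(num_layers, num_virtual_tokens, hidden_size))
prefix_values = nn.Parameter(torch.randn(num_layers, num_virtual_tokens, hidden_size))

def extend_kv(layer_idx, keys, values):
    # prepend the layer's prefix so queries can attend to it as extra (virtual) positions
    batch_size = keys.size(0)
    pk = prefix_keys[layer_idx].unsqueeze(0).expand(batch_size, -1, -1)
    pv = prefix_values[layer_idx].unsqueeze(0).expand(batch_size, -1, -1)
    return torch.cat([pk, keys], dim=1), torch.cat([pv, values], dim=1)

keys = values = torch.randn(2, 16, hidden_size)    # a batch with 16 real token positions
k, v = extend_kv(0, keys, values)
print(k.shape)                                     # torch.Size([2, 36, 256]): 20 virtual + 16 real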
Before you begin, make sure you have all the necessary libraries installed:

!pip install -q peft transformers datasets

Setup

Start by defining the model and tokenizer, text and label columns, and some hyperparameters so it’ll be easier to start training faster later. Set the environment variable TOKENIZERS_PARALLELISM to false to disable the fast Rust-based tokenizer, which processes data in parallel by default, so you can use multiprocessing in Python.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, default_data_collator, get_linear_schedule_with_warmup
from peft import get_peft_config, get_peft_model, get_peft_model_state_dict, PrefixTuningConfig, TaskType
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm
import torch
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
device = "cuda"
model_name_or_path = "t5-large"
tokenizer_name_or_path = "t5-large"

text_column = "sentence"
label_column = "text_label"
max_length = 128
lr = 1e-2
num_epochs = 5
batch_size = 8

Load dataset

For this guide, you’ll train on the sentences_allagree subset of the financial_phrasebank dataset. This dataset contains financial news categorized by sentiment.
Use 🤗 Datasets train_test_split function to create a training and validation split and convert the label value to the more readable text_label. All of the changes can be applied with the map function:

from datasets import load_dataset

dataset = load_dataset("financial_phrasebank", "sentences_allagree")
dataset = dataset["train"].train_test_split(test_size=0.1)
dataset["validation"] = dataset["test"]
del dataset["test"]

classes = dataset["train"].features["label"].names
dataset = dataset.map(
    lambda x: {"text_label": [classes[label] for label in x["label"]]},
    batched=True,
    num_proc=1,
)

dataset["train"][0]
{"sentence": "Profit before taxes was EUR 4.0 mn , down from EUR 4.9 mn .", "label": 0, "text_label": "negative"}

Preprocess dataset

Initialize a tokenizer, and create a function to pad and truncate the model_inputs and labels:
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

def preprocess_function(examples):
    inputs = examples[text_column]
    targets = examples[label_column]
    model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
    labels = tokenizer(targets, max_length=2, padding="max_length", truncation=True, return_tensors="pt")
    labels = labels["input_ids"]
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss
    model_inputs["labels"] = labels
    return model_inputs

Use the map function to apply the preprocess_function to the dataset. You can remove the unprocessed columns since the model doesn’t need them anymore:

processed_datasets = dataset.map(
    preprocess_function,
    batched=True,
    num_proc=1,
    remove_columns=dataset["train"].column_names,
    load_from_cache_file=False,
    desc="Running tokenizer on dataset",
)

Create a DataLoader from the train and eval datasets. Set pin_memory=True to speed up the data transfer to the GPU during training if the samples in your dataset are on a CPU.
train_dataset = processed_datasets["train"]
eval_dataset = processed_datasets["validation"]

train_dataloader = DataLoader(
    train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
)
eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)

Train model

Now you can set up your model and make sure it is ready for training. Specify the task in PrefixTuningConfig, create the base t5-large model from AutoModelForSeq2SeqLM, and then wrap the model and configuration in a PeftModel. Feel free to print the PeftModel’s parameters and compare it to fully training all the model parameters to see how much more efficient it is!
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, num_virtual_tokens=20)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 983040 || all params: 738651136 || trainable%: 0.13308583065659835"
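Those 983,040 trainable parameters have a simple rough accounting, assuming the prefix stores one key vector and one value vector per layer per virtual token; t5-large's config reports num_layers=24 and a model dimension of 1024.

num_virtual_tokens = 20
num_layers = 24          # from t5-large's config
hidden_size = 1024       # t5-large's model dimension
print(num_virtual_tokens * num_layers * 2 * hidden_size)  # 983040, matching the printed count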
Set up the optimizer and learning rate scheduler:

optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
lr_scheduler = get_linear_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=(len(train_dataloader) * num_epochs),
)

Move the model to the GPU, and then write a training loop to begin!

model = model.to(device)

for epoch in range(num_epochs):
    model.train()
    total_loss = 0
    for step, batch in enumerate(tqdm(train_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        total_loss += loss.detach().float()
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

    model.eval()
    eval_loss = 0
    eval_preds = []
    for step, batch in enumerate(tqdm(eval_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(**batch)
        loss = outputs.loss
        eval_loss += loss.detach().float()
        eval_preds.extend(
            tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
        )

    eval_epoch_loss = eval_loss / len(eval_dataloader)
    eval_ppl = torch.exp(eval_epoch_loss)
    train_epoch_loss = total_loss / len(train_dataloader)
    train_ppl = torch.exp(train_epoch_loss)
    print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")

Let’s see how well the model performs on the validation set:

correct = 0
total = 0
for pred, true in zip(eval_preds, dataset["validation"]["text_label"]):
    if pred.strip() == true.strip():
        correct += 1
    total += 1
accuracy = correct / total * 100
print(f"{accuracy=} % on the evaluation dataset")
print(f"{eval_preds[:10]=}")
print(f"{dataset['validation']['text_label'][:10]=}")
"accuracy=97.3568281938326 % on the evaluation dataset"
"eval_preds[:10]=['neutral', 'positive', 'neutral', 'positive', 'neutral', 'negative', 'negative', 'neutral', 'neutral', 'neutral']"
"dataset['validation']['text_label'][:10]=['neutral', 'positive', 'neutral', 'positive', 'neutral', 'negative', 'negative', 'neutral', 'neutral', 'neutral']"

97% accuracy in just a few minutes; pretty good!
Share model

You can store and share your model on the Hub if you’d like. Log in to your Hugging Face account and enter your token when prompted:

from huggingface_hub import notebook_login

notebook_login()

Upload the model to a specific model repository on the Hub with the push_to_hub function:

peft_model_id = "your-name/t5-large_PREFIX_TUNING_SEQ2SEQ"
model.push_to_hub("your-name/t5-large_PREFIX_TUNING_SEQ2SEQ", use_auth_token=True)
If you check the model file size in the repository, you’ll see that it is only 3.93MB! 🤏

Inference

Once the model has been uploaded to the Hub, anyone can easily use it for inference. Load the configuration and model:

from peft import PeftModel, PeftConfig

peft_model_id = "stevhliu/t5-large_PREFIX_TUNING_SEQ2SEQ"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)

Get and tokenize some text about financial news:

inputs = tokenizer(
    "The Lithuanian beer market made up 14.41 million liters in January , a rise of 0.8 percent from the year-earlier figure , the Lithuanian Brewers ' Association reporting citing the results from its members .",
    return_tensors="pt",
)
Put the model on a GPU and generate the predicted text sentiment:

model.to(device)

with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
["positive"]
DeepSpeed

DeepSpeed is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO-Offload reduces GPU compute and memory by leveraging CPU resources during optimization.

Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT. This guide will help you learn how to use our DeepSpeed training script. You’ll configure the script to train a large model for conditional generation with ZeRO-3 and ZeRO-Offload.

💡 To help you get started, check out our example training scripts for causal language modeling and conditional generation. You can adapt these scripts for your own applications or even use them out of the box if your task is similar to the one in the scripts.
Configuration

Start by running the following command to create a DeepSpeed configuration file with 🤗 Accelerate. The --config_file flag allows you to save the configuration file to a specific location, otherwise it is saved as a default_config.yaml file in the 🤗 Accelerate cache. The configuration file is used to set the default options when you launch the training script.

accelerate config --config_file ds_zero3_cpu.yaml

You’ll be asked a few questions about your setup and to configure the following arguments. In this example, you’ll use ZeRO-3 and ZeRO-Offload, so make sure you pick those options.
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
`gradient_clipping`: Enable gradient clipping with value.
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
An example configuration file might look like the following. The most important thing to notice is that zero_stage is set to 3, and offload_optimizer_device and offload_param_device are set to cpu.

compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
machine_rank: 0
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
use_cpu: false

The important parts

Let’s dive a little deeper into the script so you can see what’s going on, and understand how it works.
Within the main function, the script creates an Accelerator class to initialize all the necessary requirements for distributed training.

💡 Feel free to change the model and dataset inside the main function. If your dataset format is different from the one in the script, you may also need to write your own preprocessing function.

The script also creates a configuration for the 🤗 PEFT method you’re using.
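As a minimal, hedged sketch of the pattern described above (not the actual example script, which also handles dataloaders, the optimizer, and checkpointing), the skeleton of such a main function might look like this, reusing the LoRA settings from the quicktour:

from accelerate import Accelerator
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

def main():
    # picks up the DeepSpeed/ZeRO settings from the accelerate config file created above
    accelerator = Accelerator()

    peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=32, lora_dropout=0.1)
    model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
    model = get_peft_model(model, peft_config)

    # dataloaders and the optimizer are omitted here; in the real script they are passed to
    # accelerator.prepare(...) together with the model before the training loop starts
    model = accelerator.prepare(model)

if __name__ == "__main__":
    main()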