You may also need to write your own preprocessing function.
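For a sequence-to-sequence task like the one in this guide, such a function might look roughly like the sketch below. The tokenizer, column names, and sequence lengths are assumptions you would adapt to your own dataset:
Copied
from transformers import AutoTokenizer

# Assumed tokenizer for the base model used in this guide.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")

def preprocess_function(examples, text_column="Tweet text", label_column="text_label"):
    # Tokenize the inputs and the targets; the column names above are placeholders.
    model_inputs = tokenizer(examples[text_column], max_length=64, padding="max_length", truncation=True)
    labels = tokenizer(examples[label_column], max_length=8, padding="max_length", truncation=True)["input_ids"]
    # Replace padding token ids in the labels with -100 so the loss ignores them.
    labels = [[(tok if tok != tokenizer.pad_token_id else -100) for tok in seq] for seq in labels]
    model_inputs["labels"] = labels
    return model_inputs
You would typically apply such a function with dataset.map(preprocess_function, batched=True).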
The script also creates a configuration for the 🤗 PEFT method you're using, which in this case is LoRA. The LoraConfig specifies the task type and important parameters such as the dimension of the low-rank matrices, the scaling factor for the matrices, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, make sure you replace LoraConfig with the appropriate class.
Copied
def main():
+   accelerator = Accelerator()
    model_name_or_path = "facebook/bart-large"
    dataset_name = "twitter_complaints"
+   peft_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
    )
Throughout the script, you'll see the main_process_first and wait_for_everyone functions which help control and synchronize when processes are executed.
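As a rough sketch of how these are typically used (not the exact code from the script), preprocessing is often wrapped in main_process_first so it runs once and the other processes reuse the cached result, while wait_for_everyone acts as a barrier before the next step; dataset and preprocess_function are assumed from earlier:
Copied
# Run the (assumed) preprocessing once on the main process; other processes reuse the cached result.
with accelerator.main_process_first():
    processed_datasets = dataset.map(
        preprocess_function,
        batched=True,
        remove_columns=dataset["train"].column_names,
    )
# Wait until every process has reached this point before continuing.
accelerator.wait_for_everyone()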
The get_peft_model() function takes a base model and the peft_config you prepared earlier to create a PeftModel:
Copied
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
+ model = get_peft_model(model, peft_config)
Pass all the relevant training objects to 🤗 Accelerate's prepare which makes sure everything is ready for training:
Copied
model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler = accelerator.prepare(
    model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler
)
The next bit of code checks whether the DeepSpeed plugin is used in the Accelerator, and if the plugin exists, then the Accelerator uses ZeRO-3 as specified in the configuration file:
Copied
is_ds_zero_3 = False
if getattr(accelerator.state, "deepspeed_plugin", None):
    is_ds_zero_3 = accelerator.state.deepspeed_plugin.zero_stage == 3
Inside the training loop, the usual loss.backward() is replaced by 🤗 Accelerate's backward which uses the correct backward() method based on your configuration:
Copied
for epoch in range(num_epochs):
    with TorchTracemalloc() as tracemalloc:
        model.train()
        total_loss = 0
        for step, batch in enumerate(tqdm(train_dataloader)):
            outputs = model(**batch)
            loss = outputs.loss
            total_loss += loss.detach().float()
+           accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
That is all! The rest of the script handles the training loop, evaluation, and even pushes it to the Hub for you.
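For reference, the final Hub push at the end of such a script can be sketched as follows; the repository id is a placeholder and the actual script may organize this step differently:
Copied
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
if accelerator.is_main_process:
    # Pushing a PeftModel uploads only the small adapter weights and the adapter config.
    unwrapped_model.push_to_hub("your-username/bart-large-lora-twitter-complaints")  # placeholder repo id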
Train
Run the following command to launch the training script. Earlier, you saved the configuration file to ds_zero3_cpu.yaml, so you'll need to pass the path to the launcher with the --config_file argument like this:
Copied
accelerate launch --config_file ds_zero3_cpu.yaml examples/peft_lora_seq2seq_accelerate_ds_zero3_offload.py
You'll see some output logs that track memory usage during training, and once it's completed, the script returns the accuracy and compares the predictions to the labels:
Copied
GPU Memory before entering the train : 1916
GPU Memory consumed at the end of the train (end-begin): 66
GPU Peak Memory consumed during the train (max-begin): 7488
GPU Total Peak Memory consumed during the train (max): 9404
CPU Memory before entering the train : 19411
CPU Memory consumed at the end of the train (end-begin): 0
CPU Peak Memory consumed during the train (max-begin): 0
CPU Total Peak Memory consumed during the train (max): 19411
epoch=4: train_ppl=tensor(1.0705, device='cuda:0') train_epoch_loss=tensor(0.0681, device='cuda:0')
100%|██████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:27<00:00, 3.92s/it]
GPU Memory before entering the eval : 1982
GPU Memory consumed at the end of the eval (end-begin): -66
GPU Peak Memory consumed during the eval (max-begin): 672
GPU Total Peak Memory consumed during the eval (max): 2654
CPU Memory before entering the eval : 19411
CPU Memory consumed at the end of the eval (end-begin): 0
CPU Peak Memory consumed during the eval (max-begin): 0
CPU Total Peak Memory consumed during the eval (max): 19411
accuracy=100.0
eval_preds[:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
dataset['train'][label_column][:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
DreamBooth fine-tuning with LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, to fine-tune DreamBooth with the CompVis/stable-diffusion-v1-4 model.
Although LoRA was initially designed as a technique for reducing the number of trainable parameters in large language models, the technique can also be applied to diffusion models. Performing a complete fine-tuning of a diffusion model is a time-consuming task, which is why lightweight techniques like DreamBooth or Textual Inversion gained popularity. With the introduction of LoRA, customizing and fine-tuning a model on a specific dataset has become even faster.
In this guide we'll be using a DreamBooth fine-tuning script that is available in PEFT's GitHub repo. Feel free to explore it and learn how things work.
Set up your environment
Start by cloning the PEFT repository:
Copied
git clone https://github.com/huggingface/peft
Navigate to the directory containing the training scripts for fine-tuning Dreambooth with LoRA:
Copied
cd peft/examples/lora_dreambooth
Set up your environment: install PEFT and all the required libraries. At the time of writing this guide we recommend installing PEFT from source.
Copied
pip install -r requirements.txt
pip install git+https://github.com/huggingface/peft
Fine-tuning DreamBooth
Prepare the images that you will use for fine-tuning the model. Set up a few environment variables:
Copied
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"
Here:
INSTANCE_DIR: The directory containing the images that you intend to use for training your model.
CLASS_DIR: The directory containing class-specific images. In this example, we use prior preservation to avoid overfitting and language drift. For prior preservation, you need other images of the same class as part of the training process. However, these images can be generated and the training script will save them to a local path you specify here.
OUTPUT_DIR: The destination folder for storing the trained model's weights.
To learn more about DreamBooth fine-tuning with prior-preserving loss, check out the Diffusers documentation.
Launch the training script with accelerate and pass hyperparameters, as well as LoRA-specific arguments to it such as:
use_lora: Enables LoRA in the training script.
lora_r: The dimension used by the LoRA update matrices.
lora_alpha: Scaling factor.
lora_text_encoder_r: LoRA rank for the text encoder.
lora_text_encoder_alpha: LoRA alpha (scaling factor) for the text encoder.
Here's what the full set of script arguments may look like:
Copied
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --train_text_encoder \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --use_lora \
  --lora_r 16 \
  --lora_alpha 27 \
  --lora_text_encoder_r 16 \
  --lora_text_encoder_alpha 17 \
  --learning_rate=1e-4 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --max_train_steps=800
Inference with a single adapter
To run inference with the fine-tuned model, first specify the base model with which the fine-tuned LoRA weights will be combined:
Copied
import os
from pathlib import Path
import torch
from diffusers import StableDiffusionPipeline
from peft import PeftModel, LoraConfig
MODEL_NAME = "CompVis/stable-diffusion-v1-4"
Next, add a function that will create a Stable Diffusion pipeline for image generation. It will combine the weights of
the base model with the fine-tuned LoRA weights using LoraConfig.
Copied
def get_lora_sd_pipeline(
    ckpt_dir, base_model_name_or_path=None, dtype=torch.float16, device="cuda", adapter_name="default"
):
    unet_sub_dir = os.path.join(ckpt_dir, "unet")
    text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")
    if os.path.exists(text_encoder_sub_dir) and base_model_name_or_path is None:
        config = LoraConfig.from_pretrained(text_encoder_sub_dir)
        base_model_name_or_path = config.base_model_name_or_path

    if base_model_name_or_path is None:
        raise ValueError("Please specify the base model name or path")

    pipe = StableDiffusionPipeline.from_pretrained(base_model_name_or_path, torch_dtype=dtype).to(device)
    pipe.unet = PeftModel.from_pretrained(pipe.unet, unet_sub_dir, adapter_name=adapter_name)

    if os.path.exists(text_encoder_sub_dir):
        pipe.text_encoder = PeftModel.from_pretrained(
            pipe.text_encoder, text_encoder_sub_dir, adapter_name=adapter_name
        )

    if dtype in (torch.float16, torch.bfloat16):
        pipe.unet.half()
        pipe.text_encoder.half()

    pipe.to(device)
    return pipe
Now you can use the function above to create a Stable Diffusion pipeline using the LoRA weights that you have created during the fine-tuning step.
Note, if you're running inference on the same machine, the path you specify here will be the same as OUTPUT_DIR.
Copied
pipe = get_lora_sd_pipeline(Path("path-to-saved-model"), adapter_name="dog")
Once you have the pipeline with your fine-tuned model, you can use it to generate images:
Copied
prompt = "sks dog playing fetch in the park"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image.save("DESTINATION_PATH_FOR_THE_IMAGE")
Multi-adapter inference
With PEFT you can combine multiple adapters for inference. In the previous example you fine-tuned Stable Diffusion on some dog images. The pipeline created based on these weights was given a name: adapter_name="dog". Now, suppose you also fine-tuned this base model on images of a crochet toy. Let's see how we can use both adapters.
First, you'll need to perform all the steps as in the single adapter inference example:
Specify the base model.
Add a function that creates a Stable Diffusion pipeline for image generation using LoRA weights.
Create a pipe with adapter_name="dog" based on the model fine-tuned on dog images.
Next, you're going to need a few more helper functions.
To load another adapter, create a load_adapter() function that leverages the load_adapter() method of PeftModel (e.g. pipe.unet.load_adapter(peft_model_path, adapter_name)):
Copied
def load_adapter(pipe, ckpt_dir, adapter_name):
    unet_sub_dir = os.path.join(ckpt_dir, "unet")
    text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")
    pipe.unet.load_adapter(unet_sub_dir, adapter_name=adapter_name)
    if os.path.exists(text_encoder_sub_dir):
        pipe.text_encoder.load_adapter(text_encoder_sub_dir, adapter_name=adapter_name)
To switch between adapters, write a function that uses the set_adapter() method of PeftModel (see pipe.unet.set_adapter(adapter_name)):
Copied
def set_adapter(pipe, adapter_name):
    pipe.unet.set_adapter(adapter_name)
    if isinstance(pipe.text_encoder, PeftModel):
        pipe.text_encoder.set_adapter(adapter_name)
Finally, add a function to create a weighted LoRA adapter.
Copied
def create_weighted_lora_adapter(pipe, adapters, weights, adapter_name="default"):
    pipe.unet.add_weighted_adapter(adapters, weights, adapter_name)
    if isinstance(pipe.text_encoder, PeftModel):
        pipe.text_encoder.add_weighted_adapter(adapters, weights, adapter_name)
    return pipe
Let's load the second adapter from the model fine-tuned on images of a crochet toy, and give it a unique name:
Copied
load_adapter(pipe, Path("path-to-the-second-saved-model"), adapter_name="crochet")
Create a pipeline using weighted adapters:
Copied
pipe = create_weighted_lora_adapter(pipe, ["crochet", "dog"], [1.0, 1.05], adapter_name="crochet_dog")
Now you can switch between adapters. If you'd like to generate more dog images, set the adapter to "dog":
Copied
set_adapter(pipe, adapter_name="dog")
prompt = "sks dog in a supermarket isle"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image
In the same way, you can switch to the second adapter:
Copied
set_adapter(pipe, adapter_name="crochet")
prompt = "a fish rendered in the style of <1>"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image
Finally, you can use combined weighted adapters:
Copied
set_adapter(pipe, adapter_name="crochet_dog")
prompt = "sks dog rendered in the style of <1>, close up portrait, 4K HD"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image
Configuration
The configuration classes store the configuration of a PeftModel, PEFT adapter models, and the configurations of PrefixTuning, PromptTuning, and PromptEncoder. They contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, the type of task to perform, and model configurations like the number of layers and number of attention heads.
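For example, here is a minimal sketch of saving and reloading a LoRA configuration; the directory name is a placeholder:
Copied
from peft import LoraConfig

# Create a LoRA configuration and write it to disk as adapter_config.json.
config = LoraConfig(task_type="SEQ_2_SEQ_LM", r=8, lora_alpha=32, lora_dropout=0.1)
config.save_pretrained("my-lora-config")  # placeholder directory

# Load it back; the same call also accepts a Hub repository id.
loaded = LoraConfig.from_pretrained("my-lora-config")
print(loaded.peft_type, loaded.r)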
PeftConfigMixin
class peft.utils.config.PeftConfigMixin
( peft_type: typing.Optional[peft.utils.config.PeftType] = None, auto_mapping: typing.Optional[dict] = None )
Parameters
peft_type (Union[PeftType, str]): The type of Peft method to use.
This is the base configuration class for PEFT adapter models. It contains all the methods that are common to all PEFT adapter models. This class inherits from PushToHubMixin which contains the methods to push your model to the Hub. The method save_pretrained will save the configuration of your adapter model in a directory. The method from_pretrained will load the configuration of your adapter model from a directory.
from_json_file
( path_json_file, **kwargs )
Parameters
path_json_file (str): The path to the json file.
Loads a configuration file from a json file.
from_pretrained
( pretrained_model_name_or_path, subfolder = None, **kwargs )
Parameters
pretrained_model_name_or_path (str): The directory or the Hub repository id where the configuration is saved.
kwargs (additional keyword arguments, optional): Additional keyword arguments passed along to the child class initialization.
This method loads the configuration of your adapter model from a directory.
save_pretrained
( save_directory, **kwargs )
Parameters
save_directory (str): The directory where the configuration will be saved.
kwargs (additional keyword arguments, optional): Additional keyword arguments passed along to the push_to_hub method.
This method saves the configuration of your adapter model in a directory.
PeftConfig
class peft.PeftConfig
( peft_type: typing.Union[str, peft.utils.config.PeftType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: str = None, revision: str = None, task_type: typing.Union[str, peft.utils.config.TaskType] = None, inference_mode: bool = False )
Parameters
peft_type (Union[PeftType, str]): The type of Peft method to use.
task_type (Union[TaskType, str]): The type of task to perform.
inference_mode (bool, defaults to False): Whether to use the Peft model in inference mode.
This is the base configuration class to store the configuration of a PeftModel.
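A common use of PeftConfig is to look up which base model and PEFT method an existing adapter was trained with; the repository id below is a placeholder:
Copied
from peft import PeftConfig

config = PeftConfig.from_pretrained("your-username/your-adapter-repo")  # local dir or Hub repo id (placeholder)
print(config.peft_type)                # the PEFT method, e.g. LORA
print(config.task_type)                # the task the adapter was trained for
print(config.base_model_name_or_path)  # the base model to load before attaching the adapter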
PromptLearningConfig
class peft.PromptLearningConfig
( peft_type: typing.Union[str, peft.utils.config.PeftType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: str = None, revision: str = None, task_type: typing.Union[str, peft.utils.config.TaskType] = None, inference_mode: bool = False, num_virtual_tokens: int = None, token_dim: int = None, num_transformer_submodules: typing.Optional[int] = None, num_attention_heads: typing.Optional[int] = None, num_layers: typing.Optional[int] = None )
Parameters
num_virtual_tokens (int): The number of virtual tokens to use.
token_dim (int): The hidden embedding dimension of the base transformer model.
num_transformer_submodules (int): The number of transformer submodules in the base transformer model.
num_attention_heads (int): The number of attention heads in the base transformer model.
num_layers (int): The number of layers in the base transformer model.
This is the base configuration class to store the configuration of PrefixTuning, PromptEncoder, or PromptTuning.
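You usually work with one of its subclasses rather than with PromptLearningConfig directly. As a small sketch, a prompt tuning configuration could be created like this (the values are illustrative):
Copied
from peft import PromptTuningConfig, TaskType

# PromptTuningConfig subclasses PromptLearningConfig, so prompt-learning fields
# such as num_virtual_tokens are available on it.
config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
print(config.num_virtual_tokens)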