CPO Trainer

Overview

Contrastive Preference Optimization (CPO) was introduced in the paper Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation by Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. At a high level, CPO trains models to avoid generating adequate but imperfect translations in Machine Translation (MT) tasks. However, CPO is a general approximation of the DPO loss and can be applied to other domains, such as chat.

CPO aims to mitigate two fundamental shortcomings of SFT. First, SFT’s methodology of minimizing the discrepancy between predicted outputs and gold-standard references inherently caps model performance at the quality level of the training data. Secondly, SFT lacks a mechanism to prevent the model from rejecting mistakes in translations. The CPO objective is derived from the DPO objective.
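
For reference, the objective from the paper can be sketched as follows, where π_θ is the policy, y_w and y_l are the chosen and rejected completions, and β is the scaling parameter (in the TRL implementation the NLL term is weighted by cpo_alpha):

\min_{\theta} \; \mathcal{L}(\pi_\theta, U) + \mathcal{L}_{\mathrm{NLL}}
\quad \text{with} \quad
\mathcal{L}(\pi_\theta, U) = -\mathbb{E}_{(x, y_w, y_l) \sim D}\left[\log \sigma\left(\beta \log \pi_\theta(y_w \mid x) - \beta \log \pi_\theta(y_l \mid x)\right)\right],
\qquad
\mathcal{L}_{\mathrm{NLL}} = -\mathbb{E}_{(x, y_w) \sim D}\left[\log \pi_\theta(y_w \mid x)\right]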

Quick start

This example demonstrates how to train a model using the CPO method. We use the Qwen 0.5B model (Qwen/Qwen2-0.5B-Instruct) as the base model and the preference data from the UltraFeedback dataset (trl-lib/ultrafeedback_binarized).

Below is the script to train the model:

# train_cpo.py
from datasets import load_dataset
from trl import CPOConfig, CPOTrainer
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = CPOConfig(output_dir="Qwen2-0.5B-CPO", logging_steps=10)
trainer = CPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()

Execute the script using the following command:

accelerate launch train_cpo.py

Expected dataset type

CPO requires a preference dataset. The CPOTrainer supports both conversational and standard dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
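
For illustration, a preference example in the standard format and its conversational counterpart might look like this (the field values are made up):

# Standard format (explicit prompt)
{"prompt": "The sky is",
 "chosen": " blue.",
 "rejected": " green."}

# Conversational format; the chat template is applied automatically
{"prompt": [{"role": "user", "content": "What color is the sky?"}],
 "chosen": [{"role": "assistant", "content": "It is blue."}],
 "rejected": [{"role": "assistant", "content": "It is green."}]}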

Example script

We provide an example script to train a model using the CPO method. The script is available in examples/scripts/cpo.py.

To test the CPO script with the Qwen2 0.5B model on the UltraFeedback dataset, run the following command:

accelerate launch examples/scripts/cpo.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --num_train_epochs 1 \
    --logging_steps 25 \
    --output_dir Qwen2-0.5B-CPO

Logged metrics

During training and evaluation, we record the following reward metrics (a simplified sketch of how they are derived from the policy log probabilities follows the list):

  • rewards/chosen: the mean log probabilities of the policy model for the chosen responses scaled by beta
  • rewards/rejected: the mean log probabilities of the policy model for the rejected responses scaled by beta
  • rewards/accuracies: mean of how often the chosen rewards are greater than the corresponding rejected rewards
  • rewards/margins: the mean difference between the chosen and corresponding rejected rewards
  • nll_loss: the mean negative log likelihood loss of the policy model for the chosen responses
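
A simplified sketch of how these values relate to the summed log probabilities computed by the trainer (this mirrors the bookkeeping, not the exact implementation; nll_loss is simply the negative log likelihood of the chosen responses):

import torch

def reward_metrics(policy_chosen_logps, policy_rejected_logps, beta=0.1):
    # Log probabilities are per-example sums over completion tokens, shape (batch_size,).
    chosen_rewards = beta * policy_chosen_logps.detach()
    rejected_rewards = beta * policy_rejected_logps.detach()
    return {
        "rewards/chosen": chosen_rewards.mean(),
        "rewards/rejected": rejected_rewards.mean(),
        "rewards/accuracies": (chosen_rewards > rejected_rewards).float().mean(),
        "rewards/margins": (chosen_rewards - rejected_rewards).mean(),
    }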

CPO variants

Simple Preference Optimization (SimPO)

The SimPO method is also implemented in the CPOTrainer. SimPO is an alternative loss that adds a reward margin, allows for length normalization, and does not use BC regularization. To use this loss, set loss_type="simpo" and cpo_alpha=0.0 in the CPOConfig, for example:
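
from trl import CPOConfig

# The output_dir is only illustrative.
training_args = CPOConfig(
    output_dir="Qwen2-0.5B-SimPO",
    loss_type="simpo",  # use the SimPO loss
    cpo_alpha=0.0,      # disable the BC (NLL) regularizer, giving pure SimPO
    simpo_gamma=0.5,    # target reward margin (default value)
)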

CPO-SimPO

We also offer the combined use of CPO and SimPO, which enables more stable training and improved performance. Learn more details in the CPO-SimPO GitHub repository. To use this method, enable SimPO by setting loss_type="simpo" and keep a non-zero cpo_alpha in the CPOConfig, for example:
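
from trl import CPOConfig

# The output_dir and the exact cpo_alpha value are only illustrative;
# any non-zero cpo_alpha keeps the CPO NLL regularizer alongside SimPO.
training_args = CPOConfig(
    output_dir="Qwen2-0.5B-CPO-SimPO",
    loss_type="simpo",
    cpo_alpha=0.5,
)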

Loss functions

The CPO algorithm supports several loss functions, selected with the loss_type parameter in the CPOConfig. The following loss functions are supported (a simplified sketch of how each is computed follows the list):

  • "sigmoid" (default): Given the preference data, we can fit a binary classifier according to the Bradley-Terry model; in fact, the DPO authors propose the sigmoid loss on the normalized likelihood via the logsigmoid to fit a logistic regression.
  • "hinge": The RSO authors propose to use a hinge loss on the normalized likelihood from the SLiC paper. In this case, the beta is the reciprocal of the margin.
  • "ipo": The IPO authors provide a deeper theoretical understanding of the DPO algorithms, identify an issue with overfitting, and propose an alternative loss. In this case, the beta is the reciprocal of the gap between the log-likelihood ratios of the chosen and rejected completion pair, so the smaller the beta, the larger this gap is. As per the paper, the loss is averaged over the log-likelihoods of the completion (unlike DPO, which only sums).

For Mixture of Experts Models: Enabling the auxiliary loss

MOEs are the most efficient if the load is about equally distributed between experts.
To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.

This option is enabled by setting output_router_logits=True in the model config (e.g. MixtralConfig).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter router_aux_loss_coef=... (default: 0.001) in the model config.
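
For example, a minimal sketch of enabling the auxiliary loss when loading an MoE model (the checkpoint name is only illustrative; any Mixtral-style model is configured the same way):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",  # illustrative MoE checkpoint
    output_router_logits=True,               # add the load-balancing auxiliary loss to the total loss
    router_aux_loss_coef=0.001,              # weight of the auxiliary loss, shown at its default
)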

CPOTrainer

class trl.CPOTrainer

( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str, NoneType] = None, args: typing.Optional[trl.trainer.cpo_config.CPOConfig] = None, data_collator: typing.Optional[transformers.data.data_collator.DataCollator] = None, train_dataset: typing.Optional[datasets.arrow_dataset.Dataset] = None, eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None, processing_class: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None, model_init: typing.Optional[typing.Callable[[], transformers.modeling_utils.PreTrainedModel]] = None, callbacks: typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None, optimizers: tuple = (None, None), preprocess_logits_for_metrics: typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None, peft_config: typing.Optional[dict] = None, compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], dict]] = None )

Parameters

  • model (transformers.PreTrainedModel) — The model to train, preferably an AutoModelForCausalLM.
  • args (CPOConfig) — The CPO config arguments to use for training.
  • data_collator (transformers.DataCollator) — The data collator to use for training. If None is specified, the default data collator (DPODataCollatorWithPadding) will be used, which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences.
  • train_dataset (datasets.Dataset) — The dataset to use for training.
  • eval_dataset (datasets.Dataset) — The dataset to use for evaluation.
  • processing_class (PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin, optional) — Processing class used to process the data. If provided, it will be used to automatically process the inputs for the model, and it will be saved along with the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
  • model_init (Callable[[], transformers.PreTrainedModel]) — The model initializer to use for training. If None is specified, the default model initializer will be used.
  • callbacks (list[transformers.TrainerCallback]) — The callbacks to use for training.
  • optimizers (tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]) — The optimizer and scheduler to use for training.
  • preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]) — The function to use to preprocess the logits before computing the metrics.
  • peft_config (dict, defaults to None) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model.
  • compute_metrics (Callable[[EvalPrediction], dict], optional) — The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values.

Initialize CPOTrainer.

build_tokenized_answer

( prompt, answer )

The Llama tokenizer does not satisfy enc(a + b) = enc(a) + enc(b). It does ensure that enc(a + b) = enc(a) + enc(a + b)[len(enc(a)):]. Reference: https://github.com/EleutherAI/lm-evaluation-harness/pull/531#issuecomment-1595586257

concatenated_forward

( model: Module, batch: dict )

Run the given model on the given batch of inputs, concatenating the chosen and rejected inputs together.

We do this to avoid doing two forward passes, because it’s faster for FSDP.
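
Schematically, the idea looks like the following simplified sketch (not the trainer's exact code; it assumes the chosen and rejected tensors are already padded to the same length):

import torch

def concatenated_forward_sketch(model, batch):
    # Stack chosen and rejected sequences along the batch dimension.
    input_ids = torch.cat([batch["chosen_input_ids"], batch["rejected_input_ids"]], dim=0)
    attention_mask = torch.cat([batch["chosen_attention_mask"], batch["rejected_attention_mask"]], dim=0)

    # A single forward pass over the concatenated batch...
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits

    # ...then split the outputs back into the chosen and rejected halves.
    num_chosen = batch["chosen_input_ids"].shape[0]
    return logits[:num_chosen], logits[num_chosen:]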

concatenated_inputs

( batch: dict, is_encoder_decoder: bool = False, label_pad_token_id: int = -100, padding_value: int = 0, device: typing.Optional[torch.device] = None )

Parameters

  • batch — A batch of data. Must contain the keys 'chosen_input_ids' and 'rejected_input_ids', which are tensors of shape (batch_size, sequence_length).
  • is_encoder_decoder — Whether the model is an encoder-decoder model.
  • label_pad_token_id — The label pad token id.
  • padding_value — The padding value to use for the concatenated input_ids.
  • device — The device for the concatenated inputs.

Concatenate the chosen and rejected inputs into a single tensor.

cpo_loss

( policy_chosen_logps: FloatTensor, policy_rejected_logps: FloatTensor ) → A tuple of three tensors

Parameters

  • policy_chosen_logps — Log probabilities of the policy model for the chosen responses. Shape: (batch_size,)
  • policy_rejected_logps — Log probabilities of the policy model for the rejected responses. Shape: (batch_size,)

Returns

A tuple of three tensors

(losses, chosen_rewards, rejected_rewards). The losses tensor contains the CPO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively.

Compute the CPO loss for a batch of policy model log probabilities.

create_model_card

( model_name: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, tags: typing.Union[str, list[str], NoneType] = None )

Parameters

  • model_name (str or None, optional, defaults to None) — Name of the model.
  • dataset_name (str or None, optional, defaults to None) — Name of the dataset used for training.
  • tags (str, list[str] or None, optional, defaults to None) — Tags to be associated with the model card.

Creates a draft of a model card using the information available to the Trainer.

evaluation_loop

( dataloader: DataLoader, description: str, prediction_loss_only: typing.Optional[bool] = None, ignore_keys: typing.Optional[list[str]] = None, metric_key_prefix: str = 'eval' )

Overrides the built-in evaluation loop to store metrics for each batch. This is the prediction/evaluation loop shared by Trainer.evaluate() and Trainer.predict().

Works both with or without labels.

generate_from_model

( model, batch: dict )

Generate samples from the model for the given batch of inputs.

get_batch_logps

( logits: FloatTensor, labels: LongTensor, average_log_prob: bool = False, label_pad_token_id: int = -100, is_encoder_decoder: bool = False )

Parameters

  • logits — Logits of the model (unnormalized). Shape: (batch_size, sequence_length, vocab_size)
  • labels — Labels for which to compute the log probabilities. Label tokens with a value of label_pad_token_id are ignored. Shape: (batch_size, sequence_length)
  • average_log_prob — If True, return the average log probability per (non-masked) token. Otherwise, return the sum of the log probabilities of the (non-masked) tokens.
  • label_pad_token_id — The label pad token id.
  • is_encoder_decoder — Whether the model is an encoder-decoder model.

Compute the log probabilities of the given labels under the given logits.
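
A simplified sketch of this computation for a decoder-only model (the trainer also handles the encoder-decoder case, which is omitted here):

import torch

def get_batch_logps_sketch(logits, labels, average_log_prob=False, label_pad_token_id=-100):
    # Decoder-only models predict token t from positions < t, so shift labels by one.
    labels = labels[:, 1:].clone()
    logits = logits[:, :-1, :]

    loss_mask = labels != label_pad_token_id
    labels[labels == label_pad_token_id] = 0  # dummy index so gather stays in range on padded positions

    # Log probability of each label token under the model.
    per_token_logps = torch.gather(logits.log_softmax(-1), dim=2, index=labels.unsqueeze(2)).squeeze(2)

    if average_log_prob:
        return (per_token_logps * loss_mask).sum(-1) / loss_mask.sum(-1)
    return (per_token_logps * loss_mask).sum(-1)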

get_batch_loss_metrics

( model, batch: dict, train_eval: typing.Literal['train', 'eval'] = 'train' )

Compute the CPO loss and other metrics for the given batch of inputs for train or test.

log

( logs: dict, start_time: typing.Optional[float] = None )

Parameters

  • logs (dict[str, float]) — The values to log.
  • start_time (float or None, optional, defaults to None) — Start time of the training.

Log logs on the various objects watching training, including stored metrics.

tokenize_row

( feature, model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None )

Tokenize a single row from a CPO specific dataset.

At this stage, we don't convert to PyTorch tensors yet; we just handle truncation in case the prompt + chosen or prompt + rejected response is too long. First we truncate the prompt; if we're still too long, we truncate the chosen/rejected.

We also create the labels for the chosen/rejected responses, which are of length equal to the sum of the length of the prompt and the chosen/rejected response, with label_pad_token_id for the prompt tokens.
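
As a rough illustration of the resulting labels (the token ids below are made up):

# Prompt tokens are masked out with label_pad_token_id (-100); completion tokens keep their ids.
prompt_ids = [101, 2023, 2003]   # made-up prompt token ids
chosen_ids = [1037, 3231, 102]   # made-up chosen-completion token ids

chosen_input_ids = prompt_ids + chosen_ids
chosen_labels = [-100] * len(prompt_ids) + chosen_ids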

CPOConfig

class trl.CPOConfig

( output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = False, do_predict: bool = False, eval_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: typing.Optional[int] = None, per_gpu_eval_batch_size: typing.Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: typing.Optional[int] = None, eval_delay: typing.Optional[float] = 0, torch_empty_cache_steps: typing.Optional[int] = None, learning_rate: float = 1e-06, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear', lr_scheduler_kwargs: typing.Union[dict, str, NoneType] = <factory>, warmup_ratio: float = 0.0, warmup_steps: int = 0, log_level: typing.Optional[str] = 'passive', log_level_replica: typing.Optional[str] = 'warning', log_on_each_node: bool = True, logging_dir: typing.Optional[str] = None, logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps', logging_first_step: bool = False, logging_steps: float = 500, logging_nan_inf_filter: bool = True, save_strategy: typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps', save_steps: float = 500, save_total_limit: typing.Optional[int] = None, save_safetensors: typing.Optional[bool] = True, save_on_each_node: bool = False, save_only_model: bool = False, restore_callback_states_from_checkpoint: bool = False, no_cuda: bool = False, use_cpu: bool = False, use_mps_device: bool = False, seed: int = 42, data_seed: typing.Optional[int] = None, jit_mode_eval: bool = False, use_ipex: bool = False, bf16: bool = False, fp16: bool = False, fp16_opt_level: str = 'O1', half_precision_backend: str = 'auto', bf16_full_eval: bool = False, fp16_full_eval: bool = False, tf32: typing.Optional[bool] = None, local_rank: int = -1, ddp_backend: typing.Optional[str] = None, tpu_num_cores: typing.Optional[int] = None, tpu_metrics_debug: bool = False, debug: typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '', dataloader_drop_last: bool = False, eval_steps: typing.Optional[float] = None, dataloader_num_workers: int = 0, dataloader_prefetch_factor: typing.Optional[int] = None, past_index: int = -1, run_name: typing.Optional[str] = None, disable_tqdm: typing.Optional[bool] = None, remove_unused_columns: typing.Optional[bool] = True, label_names: typing.Optional[typing.List[str]] = None, load_best_model_at_end: typing.Optional[bool] = False, metric_for_best_model: typing.Optional[str] = None, greater_is_better: typing.Optional[bool] = None, ignore_data_skip: bool = False, fsdp: typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '', fsdp_min_num_params: int = 0, fsdp_config: typing.Union[dict, str, NoneType] = None, fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None, accelerator_config: typing.Union[dict, str, NoneType] = None, deepspeed: typing.Union[dict, str, NoneType] = None, label_smoothing_factor: float = 0.0, optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch', optim_args: typing.Optional[str] = None, adafactor: bool = False, group_by_length: bool = False, length_column_name: typing.Optional[str] = 'length', report_to: typing.Union[NoneType, str, typing.List[str]] = None, ddp_find_unused_parameters: typing.Optional[bool] = None, ddp_bucket_cap_mb: typing.Optional[int] = None,
ddp_broadcast_buffers: typing.Optional[bool] = None, dataloader_pin_memory: bool = True, dataloader_persistent_workers: bool = False, skip_memory_metrics: bool = True, use_legacy_prediction_loop: bool = False, push_to_hub: bool = False, resume_from_checkpoint: typing.Optional[str] = None, hub_model_id: typing.Optional[str] = None, hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save', hub_token: typing.Optional[str] = None, hub_private_repo: typing.Optional[bool] = None, hub_always_push: bool = False, gradient_checkpointing: bool = False, gradient_checkpointing_kwargs: typing.Union[dict, str, NoneType] = None, include_inputs_for_metrics: bool = False, include_for_metrics: typing.List[str] = <factory>, eval_do_concat_batches: bool = True, fp16_backend: str = 'auto', evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None, push_to_hub_model_id: typing.Optional[str] = None, push_to_hub_organization: typing.Optional[str] = None, push_to_hub_token: typing.Optional[str] = None, mp_parameters: str = '', auto_find_batch_size: bool = False, full_determinism: bool = False, torchdynamo: typing.Optional[str] = None, ray_scope: typing.Optional[str] = 'last', ddp_timeout: typing.Optional[int] = 1800, torch_compile: bool = False, torch_compile_backend: typing.Optional[str] = None, torch_compile_mode: typing.Optional[str] = None, dispatch_batches: typing.Optional[bool] = None, split_batches: typing.Optional[bool] = None, include_tokens_per_second: typing.Optional[bool] = False, include_num_input_tokens_seen: typing.Optional[bool] = False, neftune_noise_alpha: typing.Optional[float] = None, optim_target_modules: typing.Union[NoneType, str, typing.List[str]] = None, batch_eval_metrics: bool = False, eval_on_start: bool = False, use_liger_kernel: typing.Optional[bool] = False, eval_use_gather_object: typing.Optional[bool] = False, average_tokens_across_devices: typing.Optional[bool] = False, max_length: typing.Optional[int] = 1024, max_prompt_length: typing.Optional[int] = 512, max_completion_length: typing.Optional[int] = None, beta: float = 0.1, label_smoothing: float = 0.0, loss_type: str = 'sigmoid', disable_dropout: bool = True, cpo_alpha: float = 1.0, simpo_gamma: float = 0.5, label_pad_token_id: int = -100, padding_value: typing.Optional[int] = None, truncation_mode: str = 'keep_end', generate_during_eval: bool = False, is_encoder_decoder: typing.Optional[bool] = None, model_init_kwargs: typing.Optional[dict[str, typing.Any]] = None, dataset_num_proc: typing.Optional[int] = None )

Parameters

  • learning_rate (float, optional, defaults to 1e-6) — Initial learning rate for the AdamW optimizer. The default value replaces that of TrainingArguments.
  • max_length (int or None, optional, defaults to 1024) — Maximum length of the sequences (prompt + completion) in the batch. This argument is required if you want to use the default data collator.
  • max_prompt_length (int or None, optional, defaults to 512) — Maximum length of the prompt. This argument is required if you want to use the default data collator.
  • max_completion_length (int or None, optional, defaults to None) — Maximum length of the completion. This argument is required if you want to use the default data collator and your model is an encoder-decoder.
  • beta (float, optional, defaults to 0.1) — Parameter controlling the deviation from the reference model. Higher β means less deviation from the reference model. For the IPO loss (loss_type="ipo"), β is the regularization parameter denoted by τ in the paper.
  • label_smoothing (float, optional, defaults to 0.0) — Label smoothing factor. This argument is required if you want to use the default data collator.
  • loss_type (str, optional, defaults to "sigmoid") — Type of loss to use. Possible values are:

    • "sigmoid": sigmoid loss from the original DPO paper.
    • "hinge": hinge loss on the normalized likelihood from the SLiC paper.
    • "ipo": IPO loss from the IPO paper.
    • "simpo": SimPO loss from the SimPO paper.
  • disable_dropout (bool, optional, defaults to True) — Whether to disable dropout in the model.
  • cpo_alpha (float, optional, defaults to 1.0) — Weight of the BC regularizer in CPO training.
  • simpo_gamma (float, optional, defaults to 0.5) — Target reward margin for the SimPO loss, used only when loss_type="simpo".
  • label_pad_token_id (int, optional, defaults to -100) — Label pad token id. This argument is required if you want to use the default data collator.
  • padding_value (int or None, optional, defaults to None) — Padding value to use. If None, the padding value of the tokenizer is used.
  • truncation_mode (str, optional, defaults to "keep_end") — Truncation mode to use when the prompt is too long. Possible values are "keep_end" or "keep_start". This argument is required if you want to use the default data collator.
  • generate_during_eval (bool, optional, defaults to False) — If True, generates and logs completions from the model to W&B or Comet during evaluation.
  • is_encoder_decoder (bool or None, optional, defaults to None) — When using the model_init argument (callable) to instantiate the model instead of the model argument, you need to specify whether the model returned by the callable is an encoder-decoder model.
  • model_init_kwargs (dict[str, Any] or None, optional, defaults to None) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the model from a string.
  • dataset_num_proc (int or None, optional, defaults to None) — Number of processes to use for processing the dataset.

Configuration class for the CPOTrainer.

Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
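
For example, a minimal sketch of exposing the config on the command line:

from transformers import HfArgumentParser
from trl import CPOConfig

# Turn CPOConfig fields into CLI flags (e.g. --output_dir, --loss_type, --beta).
parser = HfArgumentParser(CPOConfig)
training_args = parser.parse_args_into_dataclasses()[0]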
