Columns: user (string, length 3-28) · created_at (timestamp[us]) · body (string, length 1-173k) · issue_number (int64, 1-2.57k) · __index_level_0__ (int64, 0-8.05k)
HuggingFaceDocBuilderDev
2024-11-11T21:19:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2348). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,348
500
muellerzr
2024-11-11T22:33:16
Beautiful! 🔥
2,348
501
HuggingFaceDocBuilderDev
2024-11-11T13:32:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2347). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,347
502
qgallouedec
2024-11-11T12:30:12
Can you point to the "previous version" you are referring to?
2,346
503
qgallouedec
2024-11-11T16:17:38
I think it has been like this from the initial implementation (see #2020)
2,346
504
Galaxy-Husky
2024-11-11T16:56:32
> I think it has been like this from the initial implementation (see #2020)

Sorry, I didn't say that right. I mean that before v0.11.0 there was no `maybe_apply_chat_template`. For example, the DPO dataset was preprocessed like this: https://github.com/huggingface/trl/blob/55cc4b1076144b74a6ce5d07557b7f664b1de8d9/examples/scripts/dpo.py#L156-L160 Since the code has been refactored, I'm not sure whether there was a generation prompt or not. If so, could you please point out where it was implemented?
2,346
505
qgallouedec
2024-11-11T17:23:09
Yes, the example code was wrong; you need to add a generation prompt at the end of the prompt.
2,346
506
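For context, the generation prompt discussed above is typically appended via the tokenizer's chat template; a minimal sketch (the checkpoint name is only an example, not part of the original thread):

```python
from transformers import AutoTokenizer

# Any chat model works here; this checkpoint is just an illustration.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
prompt_messages = [{"role": "user", "content": "What color is the sky?"}]

# Without add_generation_prompt the rendered text ends after the user turn;
# with it, the assistant header is appended so the model continues with the
# assistant's reply instead of the next user turn.
prompt = tokenizer.apply_chat_template(
    prompt_messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```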
Galaxy-Husky
2024-11-11T17:24:31
> Yes, the example code was wrong; you need to add a generation prompt at the end of the prompt.

I see. Thanks a lot!
2,346
507
HuggingFaceDocBuilderDev
2024-11-11T12:02:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2345). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,345
508
qgallouedec
2024-11-11T12:21:29
Why do you need the model to be in eval mode? Can we use inference mode in the forward pass instead?
2,345
509
kashif
2024-11-14T10:29:56
@ qgallouedec using inference mode, so there should be no unexpected behaviour
2,345
510
qgallouedec
2024-11-11T19:51:09
very nice @ccs96307! looking into details
2,344
511
HuggingFaceDocBuilderDev
2024-11-11T19:57:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2344). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,344
512
qgallouedec
2024-11-18T10:54:04
Thanks a lot @ccs96307 for your contribution!
2,344
513
HuggingFaceDocBuilderDev
2024-11-11T12:52:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2343). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,343
514
qgallouedec
2024-11-11T13:04:58
It should be fixed by #2325. Could you confirm?
2,342
515
asparius
2024-11-11T22:58:45
The saving issue is solved, but training time has increased significantly: 1 million episodes take 300+ hours on an A100. Is this expected? Is there any reference number to compare with?
2,342
516
qgallouedec
2024-11-14T11:09:48
I can't reproduce:

```
# v0.12.1 (includes the fix); transformers 4.47 dev (blue)
/fsx/qgallouedec/trl/examples/scripts/rloo/rloo_tldr.py \
    --output_dir models/minimal/rloo_tldr \
    --dataset_name trl-internal-testing/tldr-preference-sft-trl-style \
    --dataset_test_split validation \
    --num_ppo_epochs 2 \
    --num_mini_batches 2 \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 16 \
    --total_episodes 1000 \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \
    --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \
    --local_rollout_forward_batch_size 16 \
    --missing_eos_penalty 1.0 \
    --stop_token eos \
    --kl_coef 0.03 \
    --save_strategy steps \
    --save_steps 10000 \
    --eval_strategy steps \
    --eval_steps 1000 \
    --report_to wandb
```

```
# TRL v0.11 (doesn't include the fix); transformers v4.45 (red)
/fsx/qgallouedec/trl/examples/scripts/rloo/rloo_tldr.py \
    --output_dir models/minimal/rloo_tldr \
    --num_ppo_epochs 2 \
    --num_mini_batches 2 \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 16 \
    --total_episodes 1000 \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \
    --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \
    --local_rollout_forward_batch_size 16 \
    --missing_eos_penalty 1.0 \
    --stop_token eos \
    --kl_coef 0.03 \
    --save_strategy steps \
    --save_steps 10000 \
    --eval_strategy steps \
    --eval_steps 1000 \
    --report_to wandb
```

![W B Chart 14_11_2024, 12_08_20](https://github.com/user-attachments/assets/eed3ec12-9b00-4860-b356-f50c68a9e6ee)
2,342
517
sahandrez
2024-11-27T13:59:23
This issue still persists on `trl==0.12.1`. The network usage starts to spike after 100k, and checkpoints are saved every 2 steps regardless of the value set for the save steps. ![image](https://github.com/user-attachments/assets/aa1cd489-53c5-4778-a589-b54489c5138e)
2,342
518
Shreyas-Bhat
2024-11-14T15:52:49
Hi @shashankg7, I have the exact same question. Do you have the answer to this? Thanks
2,341
519
shashankg7
2024-11-14T16:01:28
Kind of. To train in mini-batch, multi-epoch mode on samples collected from the current policy, plain REINFORCE/policy gradient will not work, since the model drifts away from the policy that collected the data. The importance-sampling trick is required to account for the change in action distribution. But that's just my guess; there might be some other reason as well.
2,341
520
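A minimal sketch of the importance-sampling correction described in the comment above (illustrative tensor values, not TRL's implementation):

```python
import torch

# Log-probs of sampled actions under the behaviour policy (frozen, from rollout time)
# and under the current policy (recomputed every mini-batch/epoch). Values are made up.
old_logprobs = torch.tensor([-1.2, -0.8, -2.0])
new_logprobs = torch.tensor([-1.0, -0.9, -1.5])
advantages = torch.tensor([0.5, -0.3, 1.2])

# Plain REINFORCE, -(new_logprobs * advantages), is only unbiased when the data was
# sampled from the current policy. After a few updates the policies differ, so the
# gradient is reweighted by the probability ratio (the same quantity PPO clips).
ratio = torch.exp(new_logprobs - old_logprobs)
loss = -(ratio * advantages).mean()
print(loss.item())
```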
Shreyas-Bhat
2024-11-14T16:11:27
Thanks a lot for your prompt response, @shashankg7! That makes more sense now. I had another question and was wondering if you see the same thing: during training, do your model logits tend toward large negative values (often -inf)?
2,341
521
shashankg7
2024-11-27T19:34:02
Hey @Shreyas-Bhat, missed your post. I am trying out RLOO in a different context, so I didn't try with the current setup, sorry. Did you manage to control/resolve the high negative error?
2,341
522
qgallouedec
2024-11-10T03:01:22
We know that a lot of notebooks/docs are outdated. Sorry for the inconvenience. It was a deliberate choice that has allowed us to move faster on the lib's evolution. For more information, see https://github.com/huggingface/trl/pull/2174#issuecomment-2399843454. But you can be sure that it will soon be completely up to date. Most docs and notebooks should work with `trl==0.11`. I agree with you that the notebooks should mention it. Feel free to open a PR in that sense if you want to contribute.
2,340
523
Debolena7
2024-11-10T11:12:12
Thank you so much for your prompt reply. Changing the trl package version resolved the errors. I have been trying several RLHF code examples from Hugging Face and also from YouTube for a week now, and all had multiple issues. I was stuck for so many days. Thanks again..
2,340
524
Mrinh212375
2024-11-14T07:31:27
@Debolena7 @qgallouedec ...

```python
config = PPOConfig(
    #model_name="google/gemma-2-2b-it",
    learning_rate=1.41e-5,
    mini_batch_size=5,
    batch_size=20,
    output_dir='/kaggle/working/'
)

ppo_trainer = PPOTrainer(
    config=config,
    processing_class='PreTrainedTokenizerBase',
    policy=model,
    ref_policy=ref_model,
    reward_model=rm_model,
    #tokenizer=tokenizer,
    train_dataset=ppo_training_dataset,
    data_collator=collator,
)
```

When I try to run the above code snippet, I get the following error:

![image](https://github.com/user-attachments/assets/9d3c0a08-2276-4a58-9c81-e2bf5e52c955)

How do I pass the module from the HF PreTrainedWrapper class?
2,340
525
ioana-ghiban-arm
2024-11-19T09:55:52
Hi! I'm facing quite a few errors when attempting to run the 'toxicity' example as well. I'm currently stuck on this error: `TypeError: PPOTrainer.__init__() got multiple values for argument 'processing_class'`. I would immensely appreciate an updated end-to-end working demo of this. Thank you in advance.
2,340
526
Debolena7
2024-11-19T20:28:25
> policy = model,
> ref_policy = ref_model,
> reward_model = rm_model,

@Mrinh212375 I faced the same issue. This error is basically caused by the value model not being passed in the `PPOTrainer` arguments: by default, `value_model` is None, which leads to the error. To solve it, you can either initialize a value model like `value_model = AutoModelForSequenceClassification.from_pretrained("model_name")` and pass it into the `PPOTrainer`, OR simply use the old `trl==0.11.0`.
2,340
527
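A minimal sketch of the fix described in the comment above, following the argument names that appear earlier in this thread (model IDs and the dataset are placeholders, not values from the original discussion):

```python
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)
from trl import PPOConfig, PPOTrainer

model_id = "your-base-model"           # placeholder
reward_model_id = "your-reward-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)
rm_model = AutoModelForSequenceClassification.from_pretrained(reward_model_id, num_labels=1)

# The missing piece: without this, value_model defaults to None and PPOTrainer fails.
value_model = AutoModelForSequenceClassification.from_pretrained(reward_model_id, num_labels=1)

config = PPOConfig(learning_rate=1.41e-5, mini_batch_size=5, batch_size=20, output_dir="outputs")

ppo_trainer = PPOTrainer(
    config=config,
    processing_class=tokenizer,
    policy=model,
    ref_policy=ref_model,
    reward_model=rm_model,
    value_model=value_model,
    train_dataset=ppo_training_dataset,  # your tokenized prompt dataset (placeholder)
)
```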
Debolena7
2024-11-19T20:38:20
> Hi! I'm facing quite a few errors when attempting to run the 'toxicity' example as well. I'm currently stuck on this error: `TypeError: PPOTrainer.__init__() got multiple values for argument 'processing_class'`. I would immensely appreciate an updated end-to-end working demo of this. Thank you in advance.

@ioana-ghiban-arm You can pass your model's tokenizer into the `processing_class` argument of `PPOTrainer`:

```python
tokenizer = AutoTokenizer.from_pretrained(model_id)

ppo_trainer = PPOTrainer(config=config, processing_class=tokenizer, .................)
```
2,340
528
ioana-ghiban-arm
2024-11-20T08:59:29
@Debolena7 thank you for your help! You're right; I tried your suggestion and I think the execution got further. Now I'm getting the error I'd see when running a simplified version of the script. Do you perhaps have some troubleshooting steps for this error: `AttributeError: 'AutoModelForCausalLMWithValueHead' object has no attribute 'generation_config'`? TIA
2,340
529
Debolena7
2024-11-20T10:39:23
It seems you have used something like `model = AutoModelForCausalLMWithValueHead.from_pretrained(model_id)`, which led to the error. You can use `from transformers import GenerationConfig` and then `model.generation_config = GenerationConfig()` after initialization. But I would suggest using the old `trl==0.11.0`; otherwise, you will encounter more errors.
2,340
530
ioana-ghiban-arm
2024-11-22T14:12:02
Thank you for your help. Indeed, changing to `trl==0.11` does get the training going. However, I'm seeing this warning: `UserWarning: The average ratio of batch (...) exceeds threshold 10.00. Skipping batch.`, which, as mentioned [here](https://github.com/huggingface/trl/issues/1031), _suggests that the updates to the policy are too large, which could lead to instability in the training_. The maintainer suggested using [ppo.py](https://github.com/huggingface/trl/blob/main/examples/scripts/ppo/ppo.py) instead, so I tried adapting that script to use the toxicity model and dataset. However, as that is an updated script, I'm assuming it should be run with the latest version of trl provided by the repo. That leads me back to the error this thread started with. Any suggestion to help me stop going in circles and run a first round of fine-tuning on this model would be greatly appreciated, thank you.
2,340
531
Charley-xiao
2024-12-17T17:16:12
I'm also getting the same kind of warning, `UserWarning: The average ratio of batch (13.00) exceeds threshold 10.00. Skipping batch.`, using the exact toxicity example provided in the repo. I'm not sure if it affects the results. I'd really appreciate it if someone could explain this 🤔
2,340
532
imrankh46
2024-11-08T06:46:07
@kashif any suggestions?
2,338
533
Sunrepe
2024-11-11T14:58:38
### I encountered the same problem.

My system info is:

```
- Python version: 3.10.14
- PyTorch version: 2.4.1
- CUDA device(s): NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.46.2
- Accelerate version: 0.34.2
- Accelerate config: not found
- Datasets version: 3.0.1
- HF Hub version: 0.25.1
- TRL version: 0.12.0
- bitsandbytes version: not installed
- DeepSpeed version: 0.15.1
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 0.28.0
- PEFT version: 0.13.0
```

I am using the code in `example/script/sft.py`. I have downloaded the dataset and model locally, so I run the following terminal command:

```bash
python sft.py \
    --model_name_or_path /data1/llm_models/qwen-05B \
    --dataset_name /data1/datasets/trl-lib/Capybara \
    --learning_rate 2.0e-4 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 100 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16 \
    --output_dir Qwen2-0.5B-SFT
```

## However, I am encountering the following issue:

```python
Traceback (most recent call last):
  File "/data1/tmpzxf/research/SwiftSage/df_models/sft.py", line 106, in <module>
    trainer.train()
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
    return inner_training_loop(
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 3579, in training_step
    loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 3633, in compute_loss
    outputs = model(**inputs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 176, in forward
    inputs, module_kwargs = self.scatter(inputs, kwargs, self.device_ids)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 198, in scatter
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 78, in scatter_kwargs
    scattered_kwargs = scatter(kwargs, target_gpus, dim) if kwargs else []
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 64, in scatter
    res = scatter_map(inputs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in scatter_map
    return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))]
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 51, in scatter_map
    return list(zip(*map(scatter_map, obj)))
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 47, in scatter_map
    return Scatter.apply(target_gpus, None, dim, obj)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/autograd/function.py", line 574, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/_functions.py", line 96, in forward
    outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/comm.py", line 188, in scatter
    return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: chunk expects at least a 1-dimensional tensor
```
2,338
534
qGentry
2024-11-11T17:53:28
Looks like "num_items_in_batch" is getting added to the batch dict at some point by trl/tokenizer/collator and it is a 0-dim constant that is getting scattered across data parallel replicas but it can't.
2,338
535
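The failure described in the comment above can be reproduced in isolation: `nn.DataParallel`'s scatter ultimately chunks every tensor in the inputs along dim 0, which is impossible for a 0-dim scalar (a standalone illustration, not TRL code):

```python
import torch

# 0-dim scalar, like the value that ends up in the batch dict
num_items_in_batch = torch.tensor(42)

try:
    # Splitting across 2 devices means chunking along dim 0, which a scalar has no notion of.
    torch.chunk(num_items_in_batch, chunks=2, dim=0)
except RuntimeError as err:
    print(err)  # chunk expects at least a 1-dimensional tensor
```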
hua-777
2024-11-12T22:06:44
Isolating my training to 1 GPU fixed this problem for me.

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```
2,338
536
Leo-T-Zang
2024-11-14T00:49:59
try transformers 4.45.1?
2,338
537
oscar50513
2024-11-14T10:07:41
I successfully tested Transformers 4.46.0!!!!
2,338
538
imrankh46
2024-11-14T13:38:12
I had some NaN entries in the dataset, and I also changed the code a little bit so it works for me.
2,338
539
yxdr
2024-11-15T04:22:45
I encountered the same problem when I used the following command to run my training script.

```
CUDA_VISIBLE_DEVICES=0,1 python train.py \
    --seed=1 \
    --model_path=$MODEL_PATH \
    --processed_data_dir=$PROCESSED_DATA_DIR \
    --output_dir=$OUTPUT_DIR \
    --learning_rate=5e-6 \
    --epochs=1 \
    --save_freq=10 \
    --eval_freq=10 \
    --num_warmup_steps=30
```

But when I switched to using Hugging Face Accelerate to run it, the problem disappeared.

```
CUDA_VISIBLE_DEVICES=0,1 accelerate launch --num_processes 2 train.py \
    --seed=1 \
    --model_path=$MODEL_PATH \
    --processed_data_dir=$PROCESSED_DATA_DIR \
    --output_dir=$OUTPUT_DIR \
    --learning_rate=5e-6 \
    --epochs=1 \
    --save_freq=10 \
    --eval_freq=10 \
    --num_warmup_steps=30
```

Additionally, if you use only one GPU, there should be no problem either.
2,338
540
Suman-punshi
2024-11-15T08:31:11
I tried all the solutions above, reverting to a single GPU and using accelerate, but they still don't solve the problem for me.
2,338
541
kashif
2024-11-15T08:39:18
@Suman-punshi what is your TRL Env and versions?
2,338
542
Suman-punshi
2024-11-15T08:41:44
@kashif my TRL version is 0.12.0
2,338
543
hojin-koh
2024-11-25T14:17:47
> Looks like "num_items_in_batch" is getting added to the batch dict at some point by trl/tokenizer/collator and it is a 0-dim constant that is getting scattered across data parallel replicas but it can't. Got the same problem in our training environment with 2 GPUs, with trl 0.12.1 and transformer 4.46.3. I was using SFTTrainer with DataCollatorForCompletionOnlyLM on llama3.1-8b base model. After some tracing it is indeed that `num_items_in_batch` (it's just a plain number) causing problems. Trying to split a scalar between two GPUs can't be good lol Stopping `Trainer.compute_loss()` in `trainer.py` from adding `num_items_in_batch` to `loss_kwargs` solved the issue, although I don't know if there are any bad side-effects in doing this...
2,338
544
yxdr
2024-11-25T14:54:48
> > Looks like "num_items_in_batch" is getting added to the batch dict at some point by trl/tokenizer/collator and it is a 0-dim constant that is getting scattered across data parallel replicas but it can't. > > Got the same problem in our training environment with 2 GPUs, with trl 0.12.1 and transformer 4.46.3. I was using SFTTrainer with DataCollatorForCompletionOnlyLM on llama3.1-8b base model. After some tracing it is indeed that `num_items_in_batch` (it's just a plain number) causing problems. Trying to split a scalar between two GPUs can't be good lol > > Stopping `Trainer.compute_loss()` in `trainer.py` from adding `num_items_in_batch` to `loss_kwargs` solved the issue, although I don't know if there are any bad side-effects in doing this... It does have side-effects when you set gradient_accumulation>1, because `num_items_in_batch` is used for averaging the losses when gradient_accumulation>1.
2,338
545
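To illustrate the side effect mentioned in the reply above: with gradient accumulation, each micro-batch loss should be normalized by the token count of the whole effective batch, which is what `num_items_in_batch` provides (a toy numeric sketch, not the Trainer's actual code):

```python
import torch

# Token-level losses of two micro-batches that together form one optimizer step.
micro_batch_losses = [torch.tensor([0.5, 0.7, 0.9]), torch.tensor([0.2])]
num_items_in_batch = sum(len(l) for l in micro_batch_losses)  # 4 target tokens in total

# Correct: normalize every micro-batch by the total token count, then sum.
correct = sum(l.sum() / num_items_in_batch for l in micro_batch_losses)

# Naive: average each micro-batch on its own, then average the averages.
naive = sum(l.mean() for l in micro_batch_losses) / len(micro_batch_losses)

print(correct.item(), naive.item())  # 0.575 vs 0.45 -- the naive scheme over-weights short micro-batches
```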
ag8
2024-11-27T03:47:44
Using `accelerate launch` and adding the `--ddp_find_unused_parameters False` flag fixed the issue for me!
2,338
546
arivero
2024-11-29T16:50:43
Interestingly, `num_items_in_batch` is also the cause of another problem: `loss = loss / num_items_in_batch` fails, saying that the tensors should be on the same device. The problems seem to disappear if ds_accelerator is installed too.
2,338
547
arslion
2024-12-24T01:39:00
Nothing from the above fixed my problem in a Kaggle notebook. I want to use both GPUs; a single GPU does fix the problem, but I need multi-GPU support. transformers version 4.46.3
2,338
548
ahuizxc
2025-01-08T12:39:08
> Nothing from the above fixed my problem in a Kaggle notebook. I want to use both GPUs; a single GPU does fix the problem, but I need multi-GPU support. transformers version 4.46.3

Try downgrading transformers to 4.45.1; it works for me :)
2,338
549
qgallouedec
2024-11-10T03:07:29
I agree. I'm not sure what the best way to do that is, though, because it still has to work with the precomputation of ref logprobs (that's why we initially set `"shuffle": False`). Any idea?
2,337
550
sagie-dekel
2024-11-26T18:31:55
Hi, does anyone know how to solve it? How do you set `"shuffle": True` in the trainer DataLoader?
2,337
551
HuggingFaceDocBuilderDev
2024-11-07T13:26:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2336). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,336
552
littleshutong
2024-11-08T11:59:05
trl/trainer/ppo_trainer.py ![image](https://github.com/user-attachments/assets/4f4ba132-f48a-48e2-8225-2f3c35b4df57) However, it is necessary to consider passing the parameters over.
2,335
553
ccs96307
2024-11-10T17:23:12
I encountered this issue previously and temporarily worked around it by adjusting the accelerate version to 0.34.2. Here are the versions I used:
- accelerate==0.34.2
- torch==2.5.1
- transformers==4.46.2
- deepspeed==0.15.4
2,335
554
Galaxy-Husky
2024-11-20T07:00:40
@qgallouedec hi, do you have any suggestions?
2,334
555
qgallouedec
2024-11-07T21:02:47
As far as I understand, the grad accum thing is only an issue with SFT, right?
2,333
556
kashif
2024-11-07T21:04:15
Right, I think it's more about the updated kernels.
2,333
557
HuggingFaceDocBuilderDev
2024-11-07T21:25:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2333). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,333
558
ByronHsu
2024-11-07T22:01:15
Yes, grad accum is only used for SFT. Besides grad accum, we also have other improvements.
2,333
559
qgallouedec
2024-11-08T00:33:46
I approve, as this is an important issue affecting the most widely used trainer. (Thanks for solving it!) For the record, generally speaking, I won’t raise the minimum version requirement unless a new feature from the dependency is needed in our codebase.
2,333
560
kashif
2024-11-06T12:54:07
Thanks @yanghh2000, would it be possible to add a test?
2,332
561
yanghh2000
2024-11-06T13:03:11
Hi, I am glad to help, but I am not sure how to add a test for this. Is there any guideline to test a PR?
2,332
562
yanghh2000
2024-11-06T13:15:42
Oh, I have read the guideline in trl/CONTRIBUTING.md. So what I need to do is add a test.py and commit it under the test/ dir?
2,332
563
kashif
2024-11-06T13:19:41
Yes, in the `dpo_trainer` tests file.
2,332
564
qgallouedec
2024-11-06T13:42:57
Tbh I'm not sure it is possible to test it, considering it's in the middle of the method.
2,332
565
qgallouedec
2024-11-06T09:18:31
Good catch! Thanks! Do you mind opening a PR to fix that?
2,330
566
naskimed
2024-11-07T16:25:59
Hey, I have the same issue using PPOTrainer: "ValueError: Please make sure to properly initialize your accelerator via `accelerator = Accelerator()` before using any functionality from the `accelerate` library".

- trl: 0.13.0.dev0
- transformers: 4.46.2
- accelerate: 1.1.0.dev0

![Screenshot from 2024-11-07 17-24-25](https://github.com/user-attachments/assets/6f6144a3-21a7-4231-adce-1753127a602a)
2,329
567
KAKSIS
2024-11-08T09:19:43
> Hey, I have the same issue using PPOTrainer: "ValueError: Please make sure to properly initialize your accelerator via `accelerator = Accelerator()` before using any functionality from the `accelerate` library".
>
> trl: 0.13.0.dev0 transformers: 4.46.2 accelerate: 1.1.0.dev0
> ![Screenshot from 2024-11-07 17-24-25](https://github.com/user-attachments/assets/6f6144a3-21a7-4231-adce-1753127a602a)

I have the same problem.
2,329
568
kongjiellx
2024-11-08T12:03:09
+1 with PPOTrainer
2,329
569
macheng6
2024-11-11T08:23:25
After using the version configuration below, the code can be run: trl==0.11.4, accelerate==0.33.0
2,329
570
leobianco
2024-11-27T09:20:38
Having the same problem with `RLOOTrainer`.
2,329
571
zwhe99
2024-12-19T15:18:04
+1
2,329
572
yananchen1989
2024-12-24T21:22:01
Same error with `/home/yanan/trl/examples/scripts/ppo/ppo.py`. accelerate: 1.2.0.dev0, trl: 0.13.0, transformers: 4.46.1
2,329
573
yaswanthchittepu
2024-12-25T08:40:18
Same error with the PPO example script provided in the Hugging Face TRL repo, trl/examples/scripts/ppo/ppo_tldr.py, when using DeepSpeed ZeRO-2.
- accelerate: 1.1.1
- trl: 0.13.0.dev0
- transformers: 4.47.0
- deepspeed: 0.15.4
2,329
574
HuggingFaceDocBuilderDev
2024-11-05T17:41:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2328). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,328
575
HuggingFaceDocBuilderDev
2024-11-05T11:14:43
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2327). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,327
576
qgallouedec
2024-11-05T17:58:27
We have an example script to train a VLM with DPO [here](https://github.com/huggingface/trl/blob/main/examples/scripts/dpo_vlm.py). Have you tried running it with MiniCPM-V? At present, we're not claiming that you can use it with any VLM, as the level of standardization of VLMs is lower than that of LLMs. But it's definitely worth giving this one a try.
2,326
577
DarioPTWR
2024-11-11T06:56:47
Alright cool! Will try it out and provide an update, thanks for your response!
2,326
578
DarioPTWR
2024-11-27T07:58:43
Hi, I've tried to run the script with MiniCPM-V, but came across this error:

```
(base) PS C:\Users\userAdmin\RLHF_V_MiniCPMV> accelerate launch dpo_vlm_2.py
The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `0`
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
MiniCPMForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 6.69it/s]
Traceback (most recent call last):
  File "C:\Users\userAdmin\RLHF_V_MiniCPMV\dpo_vlm_2.py", line 78, in <module>
    main()
  File "C:\Users\userAdmin\RLHF_V_MiniCPMV\dpo_vlm_2.py", line 66, in main
    trainer = DPOTrainer(
              ^^^^^^^^^^^
  File "c:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "c:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Lib\site-packages\transformers\utils\deprecation.py", line 165, in wrapped_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Lib\site-packages\trl\trainer\dpo_trainer.py", line 367, in __init__
    model.enable_input_require_grads()
  File "c:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Lib\site-packages\transformers\modeling_utils.py", line 1873, in get_input_embeddings
    raise NotImplementedError
NotImplementedError
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "c:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Lib\site-packages\accelerate\commands\accelerate_cli.py", line 48, in main
    args.func(args)
  File "c:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Lib\site-packages\accelerate\commands\launch.py", line 1168, in launch_command
    simple_launcher(args)
  File "c:\Users\userAdmin\RLHF_V_MiniCPMV\.venv\Lib\site-packages\accelerate\commands\launch.py", line 763, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['c:\\Users\\userAdmin\\RLHF_V_MiniCPMV\\.venv\\Scripts\\python.exe', 'dpo_vlm_2.py']' returned non-zero exit status 1.
```

Seems like it has something to do with GenerationMixin, is there any way to solve this? Thanks.
2,326
579
HuggingFaceDocBuilderDev
2024-11-04T19:11:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2325). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,325
580
HuggingFaceDocBuilderDev
2024-11-04T18:46:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2324). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,324
581
qgallouedec
2024-11-04T18:58:39
Thanks @fanconic! Do you have reference results to share?
2,323
582
qgallouedec
2024-11-18T10:58:45
Thanks for contributing @fanconic 👊
2,323
583
HuggingFaceDocBuilderDev
2024-11-18T11:02:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2323). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,323
584
HuggingFaceDocBuilderDev
2024-11-04T18:28:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2322). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,322
585
qgallouedec
2024-11-04T20:00:53
> Thanks, do you mind giving a little more detail in the description about why this is needed?

Done, sorry about that
2,322
586
HuggingFaceDocBuilderDev
2024-11-04T15:10:45
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2321). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,321
587
HuggingFaceDocBuilderDev
2024-11-04T13:47:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2320). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,320
588
HuggingFaceDocBuilderDev
2024-11-04T11:47:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2319). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,319
589
HuggingFaceDocBuilderDev
2024-11-04T11:06:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2318). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,318
590
qgallouedec
2024-11-04T18:59:44
Closed by #2318, thanks for reporting @LuisVasquezBSC
2,317
591
chenyang399
2024-11-04T14:45:47
I found the problem: it's because someone updated the PPOTrainer and PPOConfig but didn't update the notebook. We need to `pip install trl==0.11.3` to restore an older version.
2,314
592
chenyang399
2024-11-04T14:46:07
I hope the community will update the notebook.
2,314
593
qgallouedec
2024-11-05T09:37:18
Yes, indeed, this has been discussed here, at https://github.com/huggingface/trl/pull/2174#issuecomment-2399843454. Sorry for the inconvenience. We're doing our best to update all the documentation, but it's a lot of work and help from the community would be greatly appreciated.
2,314
594
ZNP8b
2024-11-05T09:42:38
I've had the same problem with SFTTrainer. And I found the trl docs: https://huggingface.co/docs/trl/index

Here are the PPO docs: https://huggingface.co/docs/trl/ppo_trainer#trl.PPOTrainer

There are 2 classes, PPOTrainer and PPOConfig, and I think the updated version is expecting `model_name` inside PPOTrainer and not PPOConfig:

![image](https://github.com/user-attachments/assets/331e9c57-672c-4eba-a918-40d703f03bc4)

The same thing happened with SFTTrainer. Old SFTTrainer:

```python
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",                                    # keyword error
    max_seq_length = max_seq_length,                                # keyword error
    data_collator = DataCollatorForSeq2Seq(tokenizer = tokenizer),  # keyword error
    dataset_num_proc = 2,                                           # keyword error
    packing = False,                                                # keyword error
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        # num_train_epochs = 1,
        max_steps = 60,
        learning_rate = 2e-4,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
        report_to = "none",
    ),
)
```

Everything with a keyword error goes inside TrainingArguments (now SFTConfig):

```python
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    # dataset_text_field="text",
    # max_seq_length=max_seq_length,
    # dataset_num_proc=2,
    # packing=False,
    args=SFTConfig(                      # SFTConfig instead of TrainingArguments
        dataset_text_field="text",       # here now
        max_seq_length=max_seq_length,   # here now
        dataset_num_proc=2,              # here now
        packing=False,                   # here now
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=50,
        num_train_epochs=20,
        # max_steps=5,
        learning_rate=1e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
        report_to="none",
    ),
)
```

Try moving `model_name` out of PPOConfig and into PPOTrainer.
2,314
595
qgallouedec
2024-11-05T09:59:47
Not sure I get your point. `model_name` is an argument of `PPOTrainer.create_model_card` (see your screenshot), not of `PPOConfig` nor `PPOTrainer`.
2,314
596
chenyang399
2024-11-08T04:37:51
thanks guys
2,314
597
Debolena7
2024-11-10T00:16:57
All the notebooks are giving a lot of errors. The PPOTrainer class also has a lot of other arguments that are required to be passed, for example 'processing_class' instead of 'tokenizer', 'policy', 'ref_policy', 'reward_model', 'value_model', 'train_dataset', and NO 'optimizer'. I resolved all of these by mentioning the correct arguments. But now I am stuck at a new error:

```
Traceback (most recent call last):
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 782, in convert_to_tensors
    tensor = as_tensor(value)
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 738, in as_tensor
    return torch.tensor(value)
ValueError: too many dimensions 'str'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/u/student/2020/ai20resch11003/RLHF_new/hf_example.py", line 235, in <module>
    for epoch, batch in tqdm(enumerate(ppo_trainer.dataloader)):
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/tqdm/std.py", line 1181, in __iter__
    for obj in iterable:
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/accelerate/data_loader.py", line 552, in __iter__
    current_batch = next(dataloader_iter)
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 673, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
    return self.collate_fn(data)
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/transformers/data/data_collator.py", line 271, in __call__
    batch = pad_without_fast_tokenizer_warning(
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/transformers/data/data_collator.py", line 66, in pad_without_fast_tokenizer_warning
    padded = tokenizer.pad(*pad_args, **pad_kwargs)
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 3548, in pad
    return BatchEncoding(batch_outputs, tensor_type=return_tensors)
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 240, in __init__
    self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
  File "/u/student/2020/ai20resch11003/miniconda3/envs/rlhf_new/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 798, in convert_to_tensors
    raise ValueError(
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`filename` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```

I am trying this notebook: https://github.com/huggingface/trl/blob/main/examples/research_projects/toxicity/scripts/gpt-j-6b-toxicity.py

If these notebooks are outdated, please mention the correct 'trl' and 'transformers' package versions that are supposed to be installed before using these. That would help a lot. Please help :(
2,314
598
qgallouedec
2024-11-05T10:08:52
Thanks for sharing this. We rely on `transformers.Trainer` to save checkpoints and push on the hub. I think this issue would be more relevant to [huggingface/transformers](https://github.com/huggingface/transformers).
2,313
599