About ORPO
Collection
Contains information and experiments on fine-tuning LLMs using 🤗 `trl.ORPOTrainer`
This is an ORPO fine-tune of mistralai/Mistral-7B-v0.1 on the alvarobartt/dpo-mix-7k-simplified dataset.
⚠️ Note that the code is still experimental, as the ORPOTrainer PR has not been merged yet; follow its progress at 🤗 trl - ORPOTrainer PR.
ORPO: Monolithic Preference Optimization without Reference Model
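As a rough illustration of the intended workflow, the sketch below shows how a fine-tune like this one might be launched with the `ORPOTrainer` from the PR. Since the trainer is not merged yet, the exact names and arguments (`ORPOConfig`, `beta`, `max_length`, `max_prompt_length`) are assumptions based on the existing DPO-style trl API and may change; the hyperparameter values are illustrative, not the ones used for this model.

```python
# Minimal sketch (assumed API, the ORPOTrainer PR is not merged yet):
# an ORPO fine-tune of Mistral-7B-v0.1 on alvarobartt/dpo-mix-7k-simplified.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer  # assumed names from the PR

model_id = "mistralai/Mistral-7B-v0.1"

model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

# Preference dataset with "prompt", "chosen" and "rejected" columns
dataset = load_dataset("alvarobartt/dpo-mix-7k-simplified", split="train")

# Illustrative hyperparameters only
config = ORPOConfig(
    output_dir="mistral-7b-orpo",
    beta=0.1,                # weight of the odds-ratio term (lambda in the paper)
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Note that, unlike DPO, no reference model is passed to the trainer: ORPO folds the preference signal into a single odds-ratio penalty on top of the standard language-modeling loss, which is the point of the paper linked above.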