Pilot-3B is designed as a draft model for efficient preference alignment of LLMs, owing to its small size and strong performance in general domains. It is trained from Llama-3.2-3B-Instruct on GenerAlign.
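As a minimal sketch (not an official usage example from the authors), Pilot-3B can be loaded like any other Llama-3.2-based chat model with the Hugging Face transformers library; the prompt and generation settings below are illustrative assumptions.

```python
# Minimal loading/generation sketch; prompt and sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "songff/Pilot-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain preference alignment in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```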
Related links:
- Paper: Well Begun is Half Done: Low-resource Preference Alignment by Weak-to-Strong Decoding
- GitHub: Weak-to-Strong-Decoding
- Dataset: GenerAlign
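For intuition only, the sketch below shows a generic draft-then-continue setup in the spirit of weak-to-strong decoding: the small aligned model writes the beginning of the response and a larger base model continues it. The choice of strong model, prefix length, and sampling settings are assumptions for illustration, not the exact procedure from the paper; refer to the paper and GitHub repository above for details.

```python
# Rough draft-then-continue illustration (NOT the exact method from the paper):
# the small aligned model drafts a response prefix, then a larger base model
# continues from that prefix. Model names, prefix length, and sampling settings
# are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

draft_id = "songff/Pilot-3B"
strong_id = "meta-llama/Llama-3.1-70B"  # placeholder choice of a larger base model

draft_tok = AutoTokenizer.from_pretrained(draft_id)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt_ids = draft_tok.apply_chat_template(
    [{"role": "user", "content": "How should I decline a risky request politely?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(draft.device)

# 1) The draft model writes the first few tokens of the response.
prefix_ids = draft.generate(prompt_ids, max_new_tokens=64, do_sample=True, temperature=0.7)
prefix_text = draft_tok.decode(prefix_ids[0], skip_special_tokens=True)

# 2) A larger base model continues from the drafted prefix.
strong_tok = AutoTokenizer.from_pretrained(strong_id)
strong = AutoModelForCausalLM.from_pretrained(
    strong_id, torch_dtype=torch.bfloat16, device_map="auto"
)
cont_inputs = strong_tok(prefix_text, return_tensors="pt").to(strong.device)
full_ids = strong.generate(**cont_inputs, max_new_tokens=256)
print(strong_tok.decode(full_ids[0], skip_special_tokens=True))
```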
⚠️ Caution
Pilot-3B is not guaranteed always to provide safe and correct responses. Please use it at your own risk.
Citation
If you find this work useful, please consider citing:
@misc{song2025well,
      title={Well Begun is Half Done: Low-resource Preference Alignment by Weak-to-Strong Decoding},
      author={Song, Feifan and Wei, Shaohang and Luo, Wen and Fan, Yuxuan and Liu, Tianyu and Wang, Guoyin and Wang, Houfeng},
      year={2025},
      eprint={2506.07434},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}