HelpSteer3-Preference: Open Human-Annotated Preference Data across Diverse Tasks and Languages. arXiv:2505.11475, published May 16, 2025.
Dedicated Feedback and Edit Models Empower Inference-Time Scaling for Open-Ended General-Domain Tasks. arXiv:2503.04378, published Mar 6, 2025.
HelpSteer2-Preference: Complementing Ratings with Preferences. arXiv:2410.01257, published Oct 2, 2024.
HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM. arXiv:2311.09528, published Nov 16, 2023.
SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF. arXiv:2310.05344, published Oct 9, 2023.
HelpSteer2: Open-source dataset for training top-performing reward models. arXiv:2406.08673, published Jun 12, 2024.
NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment. arXiv:2405.01481, published May 2, 2024.