---
library_name: transformers
license: apache-2.0
datasets:
- UCSC-VLAA/STAR-1
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
---

# 🌟 STAR-1: Safer Alignment of Reasoning LLMs with 1K Data

<p align="center">
📃 <a href="https://arxiv.org/abs/2504.01903" target="_blank">Paper</a> |🤗 <a href="https://huggingface.co/datasets/UCSC-VLAA/STAR-1" target="_blank">STAR-1 Data</a> | 🤗 <a href="https://huggingface.co/collections/UCSC-VLAA/star-1-67edda2a042e8ba3e955e522" target="_blank">STAR-1 Model</a> |  📚 <a href="https://ucsc-vlaa.github.io/STAR-1/" target="_blank">Project Page</a>
</p>

## Introduction
[**STAR-1**](https://huggingface.co/datasets/UCSC-VLAA/STAR-1) is a high-quality safety dataset designed to enhance safety alignment in large reasoning models (LRMs) like DeepSeek-R1.

- Built on the principles of diversity, deliberative reasoning, and rigorous filtering, STAR-1 integrates and refines data from multiple sources to provide policy-grounded reasoning samples.
- The dataset contains **1,000** carefully selected examples, each aligned with best safety practices through GPT-4o-based evaluation.
- Fine-tuning with STAR-1 leads to significant safety improvements across multiple benchmarks, with minimal impact on reasoning capabilities.

We open-source our [STAR1-R1-Distill-8B](https://huggingface.co/UCSC-VLAA/STAR1-R1-Distill-8B) model here, which is fine-tuned on the [STAR-1](https://huggingface.co/datasets/UCSC-VLAA/STAR-1) dataset.
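
Below is a minimal inference sketch using 🤗 `transformers`; the prompt and sampling settings are illustrative assumptions, not the authors' recommended configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UCSC-VLAA/STAR1-R1-Distill-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example safety-relevant prompt (illustrative, not from the paper).
messages = [
    {"role": "user", "content": "How do I safely dispose of old household batteries?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings are assumptions; tune them for your use case.
output_ids = model.generate(
    input_ids, max_new_tokens=1024, do_sample=True, temperature=0.6
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```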

## Artifacts
### Data

| Dataset          | Num. of Samples | URL                                                                     |
|------------------|-----------------|-------------------------------------------------------------------------|
| STAR-1           | 1K              | 🤗 [UCSC-VLAA/STAR-1](https://huggingface.co/datasets/UCSC-VLAA/STAR-1) |
| STAR-41K         | 41K             | 🤗 [UCSC-VLAA/STAR-41K](https://huggingface.co/datasets/UCSC-VLAA/STAR-41K) |
| STAR-benign-915  | 915             | 🤗 [UCSC-VLAA/STAR-benign-915](https://huggingface.co/datasets/UCSC-VLAA/STAR-benign-915) |
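
The datasets can be loaded directly from the Hub with the 🤗 `datasets` library. A minimal sketch (the `"train"` split is an assumption; inspect the loaded object for the actual splits and schema):

```python
from datasets import load_dataset

# Load STAR-1 (1K policy-grounded safety reasoning samples) from the Hub.
ds = load_dataset("UCSC-VLAA/STAR-1")
print(ds)              # available splits and column names
print(ds["train"][0])  # first sample; assumes a "train" split exists
```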



### Model
| Model                          | Type                                      | URL                                                                                   |
|--------------------------------|-------------------------------------------|----------------------------------------------------------------------------------------|
| `STAR1`-R1-Distill-1.5B        | R1-Distill-Qwen-1.5B trained on STAR-1    | 🤗 [UCSC-VLAA/STAR1-R1-Distill-1.5B](https://huggingface.co/UCSC-VLAA/STAR1-R1-Distill-1.5B) |
| `STAR1`-R1-Distill-7B          | R1-Distill-Qwen-7B trained on STAR-1      | 🤗 [UCSC-VLAA/STAR1-R1-Distill-7B](https://huggingface.co/UCSC-VLAA/STAR1-R1-Distill-7B)     |
| `STAR1`-R1-Distill-8B          | R1-Distill-Llama-8B trained on STAR-1     | 🤗 [UCSC-VLAA/STAR1-R1-Distill-8B](https://huggingface.co/UCSC-VLAA/STAR1-R1-Distill-8B)     |
| `STAR1`-R1-Distill-14B         | R1-Distill-Qwen-14B trained on STAR-1     | 🤗 [UCSC-VLAA/STAR1-R1-Distill-14B](https://huggingface.co/UCSC-VLAA/STAR1-R1-Distill-14B)   |
| `STAR1`-R1-Distill-32B         | R1-Distill-Qwen-32B trained on STAR-1     | 🤗 [UCSC-VLAA/STAR1-R1-Distill-32B](https://huggingface.co/UCSC-VLAA/STAR1-R1-Distill-32B)   |

## Evaluation
See our GitHub [repo](https://github.com/UCSC-VLAA/STAR-1?tab=readme-ov-file#evaluation-sec-31).

## Acknowledgement
This work is partially supported by a gift from Open Philanthropy. We thank the NAIRR Pilot Program and the Microsoft Accelerate Foundation Models Research Program for supporting our computing needs.


## Citation
```
@article{wang2025star1saferalignmentreasoning,
    title={STAR-1: Safer Alignment of Reasoning LLMs with 1K Data}, 
    author={Zijun Wang and Haoqin Tu and Yuhan Wang and Juncheng Wu and Jieru Mei and Brian R. Bartoldson and Bhavya Kailkhura and Cihang Xie},
    year={2025},
    journal = {arXiv preprint arXiv:2504.01903}
}
```