|
--- |
|
base_model: agentica-org/DeepCoder-1.5B-Preview |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- qwen2 |
|
- trl |
|
license: mit |
|
language: |
|
- en |
|
datasets: |
|
- UCSC-VLAA/STAR-1 |
|
--- |
|
|
|
## Introduction
|
SA stands for Safety and Alignment. We fine-tuned DeepCoder-1.5B-Preview on the STAR-1 dataset for 250 steps, following the Unsloth SFT cookbook, to strengthen its safety alignment. A minimal sketch of that training setup is shown below.
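
The sketch follows the general pattern of the Unsloth SFT notebooks; it is illustrative rather than the exact recipe. The LoRA settings, sequence length, batch size, learning rate, and the assumed STAR-1 column names (`question`, `response`) are placeholders.

```python
# Illustrative SFT sketch in the Unsloth notebook style -- hyperparameters,
# LoRA settings, and STAR-1 column names are assumptions, not the exact recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="agentica-org/DeepCoder-1.5B-Preview",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha chosen for illustration).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# STAR-1 safety-alignment data; the column names used here are assumed.
dataset = load_dataset("UCSC-VLAA/STAR-1", split="train")

def to_text(example):
    return {"text": f"### Question:\n{example['question']}\n\n"
                    f"### Response:\n{example['response']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=250,  # the 250 SFT steps described above
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```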
|
|
|
This model is fine-tuned with policy-grounded data to be safe and aligned with human values while coding. Specifically, it utilizes the STAR-1 dataset, which integrates diverse, deliberative reasoning examples evaluated rigorously by GPT-4o. This ensures the model maintains robust safety standards and minimizes biases, promoting responsible, secure, and effective coding practices without compromising its core reasoning capabilities. |
|
|
|
|
|
## Citation |
|
|
|
```bibtex |
|
@misc{deepcoder2025, |
|
title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level}, |
|
author={Michael Luo and Sijun Tan and Roy Huang and Ameen Patel and Alpay Ariyak and Qingyang Wu and Xiaoxiang Shi and Rachel Xin and Colin Cai and Maurice Weber and Ce Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica},
|
howpublished={\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}}, |
|
note={Notion Blog}, |
|
year={2025} |
|
} |
|
``` |
|
|
|
|
|
# Uploaded model |
|
|
|
- **Developed by:** EpistemeAI |
|
- **License:** MIT |
|
- **Finetuned from model:** agentica-org/DeepCoder-1.5B-Preview
|
|
|
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
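
The fine-tuned model can be loaded with the standard `transformers` API. The repo id below is a placeholder (substitute this model's actual Hugging Face id), and the prompt and generation settings are illustrative:

```python
# Minimal inference sketch; the repo id is a placeholder for this model's
# actual Hugging Face id, and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/<this-model>"  # placeholder -- replace with the real id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user",
     "content": "Write a Python function that safely reads a user-supplied file path."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```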
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |