Michael Luo committed on
Commit e0a6c09 · 1 Parent(s): d84e819
.gitattributes CHANGED
@@ -17,6 +17,7 @@
  *.ot filter=lfs diff=lfs merge=lfs -text
  *.parquet filter=lfs diff=lfs merge=lfs -text
  *.pb filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
  *.pickle filter=lfs diff=lfs merge=lfs -text
  *.pkl filter=lfs diff=lfs merge=lfs -text
  *.pt filter=lfs diff=lfs merge=lfs -text
@@ -33,4 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
- tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Agentica
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md ADDED
@@ -0,0 +1,170 @@
+ ---
+ license: mit
+ library_name: transformers
+ datasets:
+ - R2E-Gym/R2E-Gym-Subset
+ language:
+ - en
+ base_model:
+ - Qwen/Qwen3-32B
+ pipeline_tag: text-generation
+ ---
+
+ <div align="center">
+ <span style="font-family: default; font-size: 1.5em;">DeepSWE-Preview</span>
+ <div>
+ 🚀 Democratizing Reinforcement Learning for LLM Agents (RLLM) 🌟
+ </div>
+ </div>
+ <br>
+ <div align="center" style="line-height: 1;">
+ <a href="https://github.com/agentica-project/rllm" style="margin: 2px;">
+ <img alt="Code" src="https://img.shields.io/badge/rLLM-000000?style=for-the-badge&logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="www.google.com" target="_blank" style="margin: 2px;">
+ <img alt="Blog" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://x.com/Agentica_" style="margin: 2px;">
+ <img alt="X" src="https://img.shields.io/badge/Agentica-white?style=for-the-badge&logo=X&logoColor=000&color=000&labelColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://huggingface.co/agentica-org" style="margin: 2px;">
+ <img alt="Hugging Face" src="https://img.shields.io/badge/Agentica-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+ ## DeepSWE Overview
+ DeepSWE-Preview is a fully open-sourced, state-of-the-art coding agent trained with only reinforcement learning (RL) to excel at software engineering (SWE) tasks. DeepSWE-Preview demonstrates strong reasoning capabilities in navigating complex codebases and viewing/editing multiple files, and it serves as a foundational model for future coding agents. The model achieves an impressive **59.0%** on SWE-Bench-Verified, which is currently #1 in the open-weights category.
+
+ DeepSWE-Preview is trained on top of Qwen3-32B with thinking mode enabled. With just 200 steps of RL training, its SWE-Bench-Verified score increases from 23% to 42% (+20%).
+
+ Discover more about DeepSWE-Preview's development and capabilities in our [technical blog post](www.google.com).
+
+ <div style="margin: 0 auto;">
+ <img src="./figures/swebench.png" style="width: 100%;" />
+ <p align="center" style="margin-top: 8px; font-style: italic; color: #666;">
+ Figure 1: SWE-Bench-Verified Performance vs. Model Size for LLM Agents. Trained with only reinforcement learning (RL, no SFT), DeepSWE-Preview with test-time scaling (TTS) solves 59% of problems, beating all open-source agents by a large margin. We note that DeepSWE-Preview's Pass@1 performance (42.1%, averaged over 16 runs) is one of the best for open-weights coding agents.
+ </p>
+ </div>
+
+ ## Usage Recommendations
+
+ To get the best performance out of DeepSWE-Preview, we suggest the following settings (a minimal example follows this list):
+ - Temperature = 1
+ - Max tokens set to at least 32-64K.
+ - Use R2EGym's system/instance prompt and tools (`file_editor.py`, `execution_bash.py`, `search.py`, `finish.py`). See [here](https://github.com/agentica-project/R2E-Gym/blob/master/src/r2egym/agenthub/config/r2egym/edit_non_fn_calling.yaml) for more details.
+
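+ The snippet below is a minimal sketch of these settings using Hugging Face Transformers for a quick, scaffold-free smoke test; the prompt and the 32K token budget are illustrative placeholders, and real SWE tasks should go through the R2E-Gym scaffold above.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "agentica-org/DeepSWE-Preview"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ # Illustrative prompt; real SWE tasks are issue descriptions plus repository context.
+ messages = [{"role": "user", "content": "Explain how you would debug a failing unit test in a large Python repo."}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+
+ # Recommended sampling settings: temperature = 1 and a generous token budget (32-64K).
+ outputs = model.generate(inputs, do_sample=True, temperature=1.0, max_new_tokens=32768)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+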
+ ## Training Recipe
+
+ <div style="margin: 0 auto;">
+ <img src="./figures/swe_val_scores.png" style="width: 100%;" />
+ <p align="center" style="margin-top: 8px; font-style: italic; color: #666;">
+ Figure 2: Validation score for SWE-Bench-Hard, where an agent receives positive reward only if it submits a final answer that passes all tests. With just 200 steps of RL training, the SWE-Bench-Verified score increases from 23% to 42% (+20%).
+ </p>
+ </div>
+
+
+ ### Data 🗄️
+
+ Our dataset contains 4.5K problems from a subset of `R2E-Gym`. To avoid data contamination during training, we filtered out problems derived from the same repositories as `SWE-Bench-Verified`, such as `sympy`. All problems map to individual Docker images.
+
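+ As a rough illustration of this decontamination step (not the exact pipeline), the sketch below loads the public `R2E-Gym/R2E-Gym-Subset` dataset and drops problems whose repository overlaps with `SWE-Bench-Verified`; the split name, column name, and blocked-repository list are assumptions for the example.
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical decontamination sketch; the real filtering script and schema may differ.
+ BLOCKED_REPOS = {"sympy/sympy"}  # repositories shared with SWE-Bench-Verified (illustrative)
+
+ ds = load_dataset("R2E-Gym/R2E-Gym-Subset", split="train")
+ print("problems before filtering:", len(ds))
+
+ # Assumes each row carries a repository identifier; adjust the key to the actual schema.
+ clean = ds.filter(lambda row: row.get("repo", "") not in BLOCKED_REPOS)
+ print("problems after filtering:", len(clean))
+ ```
+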
+ ### Environment 🌐
+
+ Our environment wraps around `R2E-Gym`, an existing Gym environment for scalable curation of high-quality, executable SWE environments.
+
+ **State & Action.** `R2E-Gym` defines a set of four tools as part of the action space. Each tool is a Python program, and its output (stdout/stderr) represents the returned state. A schematic interaction loop is sketched after the list. More specifically:
+
+ - **Execute Bash** - Outputs both stdout and stderr of an LLM-generated bash command.
+ - **Search** - Searches and returns all occurrences of an LLM-defined query in either a directory or a single file.
+ - **File Editor** - Allows for viewing, creating, replacing strings in, inserting into, and undoing edits to a specific file.
+ - **Finish/Submit** - The LLM signals that it has resolved the pull request, which terminates trajectory generation.
+
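+ The loop below is only a schematic of how such a tool-driven episode proceeds; `env`, `step`, and the helper names are hypothetical stand-ins rather than the actual R2E-Gym interface.
+
+ ```python
+ # Schematic agent-environment loop; names are illustrative, not the R2E-Gym API.
+ MAX_STEPS = 100
+
+ def run_episode(llm, env):
+     observation = env.reset()  # issue description plus repository context
+     for _ in range(MAX_STEPS):
+         # The agent picks one tool call: execute_bash, search, file_editor, or finish.
+         action = llm.generate(observation)
+         observation, done = env.step(action)  # state = stdout/stderr of the invoked tool
+         if done:  # "finish" submits the patch and terminates trajectory generation
+             break
+     return env.submitted_patch()
+ ```
+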
+ **Reward.** To keep things simple, our reward function employs a sparse Outcome Reward Model (ORM); a short sketch follows the list:
+
+ - `1` - The LLM's generated patch passes a selected sample of tests (Pass2Pass and Fail2Pass) within a time limit. To accelerate training, our max time limit is 5 minutes, while the official SWE-Bench evaluation allows 30 minutes.
+ - `0` - We assign no reward if the LLM's code fails on at least one test case or times out.
+
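+ A minimal sketch of this sparse outcome reward, assuming a hypothetical `test_cmd` that runs the selected Pass2Pass/Fail2Pass tests inside the problem's Docker image:
+
+ ```python
+ import subprocess
+
+ TIME_LIMIT_SECONDS = 5 * 60  # training-time limit (official SWE-Bench evaluation allows 30 minutes)
+
+ def outcome_reward(test_cmd: list[str]) -> int:
+     """Sparse ORM: 1 if every selected test passes within the time limit, else 0."""
+     try:
+         result = subprocess.run(test_cmd, timeout=TIME_LIMIT_SECONDS, capture_output=True)
+     except subprocess.TimeoutExpired:
+         return 0  # timed out -> no reward
+     return 1 if result.returncode == 0 else 0  # any failing test -> no reward
+ ```
+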
+ ### RL Algorithm
+
+ We enhance the original GRPO algorithm, integrating insights from DAPO, Dr. GRPO, LOOP/RLOO, and our own innovations to enable stable training and improved performance. Our final, amalgamated algorithm consists of the following components; a loss-level sketch follows the list:
+
+ - **Clip High (DAPO):** Increasing the upper bound of GRPO/PPO's surrogate loss encourages exploration and stabilizes entropy.
+ - **No KL Loss (DAPO):** Eliminating the KL loss prevents the LLM from being constrained to the trust region of the original SFT model.
+ - **No Reward Standard Deviation (Dr.GRPO):** Removing the reward standard deviation eliminates the difficulty bias in GRPO's loss, ensuring hard and easy problems are better differentiated.
+ - **Length Normalization (Dr.GRPO):** Dividing the surrogate loss by the max context length removes the length bias present in GRPO, which otherwise inflates the length of incorrect responses.
+ - **Leave One Out (LOOP/RLOO):** Leaving one sample out of the advantage estimate reduces the variance of the policy gradient without introducing bias.
+ - **Compact Filtering (Us):** Inspired by DAPO, we mask the loss for trajectories that reach max context length, time out during generation (20 minutes), or reach the maximum number of steps.
+ - **No Entropy Loss (Us):** An entropy loss introduces instability and eventually leads to exponentially increasing entropy, which collapses training. Provided that the base model's token-level entropy is within 0.3-1, an entropy loss is not needed.
+
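+ As a loss-level illustration of these modifications (not the rLLM implementation), the sketch below combines the clip-high ratio, leave-one-out advantages without reward-std normalization, constant length normalization, and the compact-filtering mask; all shapes and hyperparameters are assumptions.
+
+ ```python
+ import torch
+
+ EPS_LOW, EPS_HIGH = 0.2, 0.28   # asymmetric "clip high" bounds (DAPO-style, illustrative values)
+ MAX_CONTEXT_LEN = 65536         # constant used for length normalization (Dr.GRPO-style)
+
+ def policy_loss(logp_new, logp_old, rewards, token_mask, keep_trajectory):
+     """
+     logp_new, logp_old: (batch, seq) token log-probs under the new/old policy.
+     rewards:            (batch,) sparse trajectory rewards (0 or 1).
+     token_mask:         (batch, seq) 1 for generated tokens, 0 elsewhere.
+     keep_trajectory:    (batch,) 0 for trajectories dropped by compact filtering
+                         (max context, generation timeout, or max steps), else 1.
+     """
+     n = rewards.numel()
+     # Leave-one-out baseline (LOOP/RLOO); no division by the reward std (Dr.GRPO).
+     baseline = (rewards.sum() - rewards) / (n - 1)
+     adv = (rewards - baseline).unsqueeze(-1)             # broadcast over tokens
+
+     ratio = torch.exp(logp_new - logp_old)
+     clipped = torch.clamp(ratio, 1 - EPS_LOW, 1 + EPS_HIGH)
+     per_token = -torch.min(ratio * adv, clipped * adv)
+
+     mask = token_mask * keep_trajectory.unsqueeze(-1)    # compact filtering
+     # Length normalization: divide by a constant max length rather than per-sample lengths.
+     # No KL penalty and no entropy bonus are added to this objective.
+     return (per_token * mask).sum() / (n * MAX_CONTEXT_LEN)
+ ```
+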
+ A more detailed description of the training recipe can be found in our [blog post](www.google.com).
+
+
+ ## Evaluation
+
+ DeepSWE-Preview is evaluated via the official `R2E-Gym` codebase at 64K max context length and 100 max environment steps. DeepSWE's generated patches are then ported over to the official SWE-Bench repo to calculate the final score. Below, we report Pass@1 accuracy averaged over 16 runs.
+
+ | Model | Scaffold | Type | SWE-Bench Verified (%) |
+ |-------|----------|------|------------------------|
+ | DeepSWE-Preview (32B) | R2E-Gym | Agent + Hybrid Best@16 | 59.0% |
+ | DeepSWE-Preview (32B) | R2E-Gym | Agent + Hybrid Best@8 | 57.9% |
+ | DeepSWE-Preview (32B) | R2E-Gym | Agent | 42.2% |
+ | Devstral-Small (24B) | OpenHands | Agent | 46.6% |
+ | Openhands-LM (32B) | OpenHands | Agent (Iterative) | 37.2% |
+ | SWE-Agent-LM (32B) | SWE-Agent | Agent | 40.2% |
+ | R2EGym-Agent (32B) | R2E-Gym | Agent | 34.4% |
+ | Skywork-SWE (32B) | OpenHands | Agent | 38.0% |
+ | Skywork-SWE (32B) | OpenHands | Agent + Execution-Free Best@8 | 47.0% |
+ | SkyRL-Agent (14B) | OpenHands | Agent | 21.6% |
+
+ ### Test-time Scaling
+
+ <div style="margin: 0 auto;">
+ <img src="./figures/bestk_plot_agent.png" style="width: 100%;" />
+ <p align="center" style="margin-top: 8px; font-style: italic; color: #666;">
+ Figure 3: SWE-Bench Verified performance w.r.t. different TTS strategies. With hybrid TTS, DeepSWE-Preview achieves 59%, beating the current SOTA open-weights model (SkyWork + TTS, 47%) by 12%. We note that using only execution-based and execution-free verifiers is still effective and can bring a 10+% performance gain.
+ </p>
+ </div>
+
+
+ ## Serving DeepSWE-Preview
+
+ Our model can be served using popular high-performance inference systems:
+ - vLLM
+ - Hugging Face Text Generation Inference (TGI)
+ - SGLang
+ - TensorRT-LLM
+
+ All these systems support the OpenAI Chat Completions API format.
+
+ ### vLLM (Recommended)
+
+ We suggest using `vllm>=0.8.5` and enabling long context in vLLM to serve DeepSWE-Preview.
+
+ ```bash
+ export MAX_CONTEXT_LEN=65536
+ export TENSOR_PARALLEL_SIZE=8
+ VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve agentica-org/DeepSWE-Preview --tensor-parallel-size $TENSOR_PARALLEL_SIZE --max-model-len $MAX_CONTEXT_LEN --hf-overrides "{\"max_position_embeddings\": $MAX_CONTEXT_LEN}" --enable-prefix-caching
+ ```
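+
+ Once the server is running, it can be queried with any OpenAI-compatible client. The snippet below is a minimal sketch that assumes the default local endpoint from the command above and the sampling settings recommended earlier; the prompt is illustrative only.
+
+ ```python
+ from openai import OpenAI
+
+ # Assumes the vLLM server launched above is listening on the default local port.
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="agentica-org/DeepSWE-Preview",
+     messages=[{"role": "user", "content": "Outline a plan to fix a flaky integration test."}],
+     temperature=1.0,   # recommended sampling temperature
+     max_tokens=32768,  # generous budget for long agentic reasoning
+ )
+ print(response.choices[0].message.content)
+ ```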
+
+ ## License
+ This project is released under the MIT License, reflecting our commitment to open and accessible AI development.
+ We believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.
+ This permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.
+
+ ## Acknowledgement
+ - Our training experiments are powered by [rLLM](https://github.com/agentica-project/rllm), which builds on top of [Verl](https://github.com/agentica-project/verl), an open-source RLHF library.
+ - Our model is trained on top of [`Qwen/Qwen3-32B`](https://huggingface.co/Qwen/Qwen3-32B).
+ - Our work is done as part of the [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).
+
+ ## Citation
+ ```bibtex
+ @misc{deepswe2025,
+   title={DeepSWE: Training a State-of-the-Art Coding Agent from Scratch by Scaling RL},
+   author={Michael Luo and Naman Jain and Jaskirat Singh and Sijun Tan and Ameen Patel and Qingyang Wu and Alpay Ariyak and Colin Cai and Tarun Venkat and Shang Zhu and Ben Athiwaratkun and Manan Roongta and Ce Zhang and Li Erran Li and Raluca Ada Popa and Koushik Sen and Ion Stoica},
+   howpublished={\url{N/A}},
+   note={Notion Blog},
+   year={2025}
+ }
+ ```
figures/bestk_plot_agent.png ADDED

Git LFS Details

  • SHA256: f6503dea8049dd709774c1f6cd1837867f5756ec82c24306a91834e30f66e767
  • Pointer size: 131 Bytes
  • Size of remote file: 927 kB
figures/swe_val_scores.png ADDED

Git LFS Details

  • SHA256: 69093b68f3ae9a84535e988826810285f211e78bd10738d8f561f086de72d1e1
  • Pointer size: 131 Bytes
  • Size of remote file: 248 kB
figures/swebench.png ADDED

Git LFS Details

  • SHA256: 5b2c1c3a6cdd6ae13454f514ca2df4ec960b972468aa8757cac0a41414aac706
  • Pointer size: 131 Bytes
  • Size of remote file: 417 kB