michaelzhiluo committed
Commit 1e1f473 · verified · 1 Parent(s): 41c1d5a

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -40,7 +40,7 @@ pipeline_tag: text-generation
   </div>
   </a>
 
- <a href="www.google.com" target="_blank" style="margin: 2px;">
+ <a href="https://pretty-radio-b75.notion.site/DeepSWE-Training-a-Fully-Open-sourced-State-of-the-Art[%E2%80%A6]-by-Scaling-RL-22281902c1468193aabbe9a8c59bbe33" target="_blank" style="margin: 2px;">
   <img alt="Blog" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
   </a>
   <a href="https://x.com/Agentica_" style="margin: 2px;">
@@ -62,7 +62,7 @@ DeepSWE-Preview is a fully open-sourced, state-of-the-art coding agent trained w
 
 DeepSWE-Preview is trained on top of Qwen3-32B with thinking mode enabled. With just 200 steps of RL training, SWE-Bench-Verified score increases by ~20%.
 
- Discover more about DeepSWE-Preview's development and capabilities in our [technical blog post](www.google.com).
+ Discover more about DeepSWE-Preview's development and capabilities in our [technical blog post](https://pretty-radio-b75.notion.site/DeepSWE-Training-a-Fully-Open-sourced-State-of-the-Art[%E2%80%A6]-by-Scaling-RL-22281902c1468193aabbe9a8c59bbe33).
 
 <div style="margin: 0 auto;">
 <img src="./figures/swebench.png" style="width: 100%;" />
@@ -121,7 +121,7 @@ We enhance the original GRPO algorithm, integrating insights from DAPO, Dr. GRPO
 - **Compact Filtering (Us):** Inspired by DAPO, we mask the loss for trajectories that reach max context length, time out during generation (20 minutes), or reach the maximum number of steps.
 - **No Entropy Loss (Us):** Entropy loss introduces higher instability and eventually leads to exponentially increasing entropy, which collapses training. Provided that the base model's token-level entropy is within 0.3-1, entropy loss is not needed.
 
- A more detailed description of the training recipe can be found in our [blog post](www.google.com).
+ A more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepSWE-Training-a-Fully-Open-sourced-State-of-the-Art[%E2%80%A6]-by-Scaling-RL-22281902c1468193aabbe9a8c59bbe33).
 
 
 ## Evaluation
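The Compact Filtering bullet in the last hunk masks truncated rollouts out of the policy-gradient loss. Below is a minimal sketch of that idea; the per-trajectory fields (`hit_max_context`, `gen_seconds`, `num_steps`) and the helper name are hypothetical illustrations, not names from the DeepSWE codebase.

```python
import torch

def compact_filter_mask(trajs, max_steps, timeout_s=20 * 60):
    """Return 1.0 to keep a trajectory in the loss, 0.0 to mask it out.

    A trajectory is masked if it hit the context limit, timed out during
    generation (20 minutes), or exhausted its step budget.
    """
    keep = []
    for t in trajs:  # field names below are illustrative assumptions
        truncated = (
            t["hit_max_context"]             # response ran into max context length
            or t["gen_seconds"] > timeout_s  # generation exceeded the timeout
            or t["num_steps"] >= max_steps   # agent used up its step budget
        )
        keep.append(0.0 if truncated else 1.0)
    return torch.tensor(keep)

# Example: zero out the loss contribution of truncated rollouts before averaging.
trajs = [
    {"hit_max_context": False, "gen_seconds": 310.0, "num_steps": 12},
    {"hit_max_context": True,  "gen_seconds": 95.0,  "num_steps": 50},
]
per_traj_loss = torch.tensor([0.8, 1.4])
mask = compact_filter_mask(trajs, max_steps=50)
loss = (per_traj_loss * mask).sum() / mask.sum().clamp(min=1)  # only traj 0 counts
```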
 
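The No Entropy Loss bullet conditions dropping the entropy bonus on the base model's token-level entropy already sitting in roughly 0.3-1 nats. A sketch of how one might measure that quantity from logits; the helper name and the random inputs are illustrative only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mean_token_entropy(logits, mask):
    """Mean per-token entropy in nats over non-padding positions.

    logits: [batch, seq, vocab]; mask: [batch, seq] with 1.0 at real tokens.
    """
    logp = F.log_softmax(logits, dim=-1)
    token_ent = -(logp.exp() * logp).sum(dim=-1)  # [batch, seq] entropies
    return (token_ent * mask).sum() / mask.sum().clamp(min=1)

# Random logits here just exercise the function; a real check would run the
# base policy over sampled rollouts and confirm the value lands in ~0.3-1
# nats before removing the entropy term from the training objective.
logits = torch.randn(2, 8, 32_000)
mask = torch.ones(2, 8)
print(f"mean token entropy: {mean_token_entropy(logits, mask):.2f} nats")
```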