GeeeekExplorer committed on
Commit 600c271 · 1 Parent(s): 3f13c8c

add readme

Files changed (2):
  1. README.md +118 -0
  2. inference/README.md +3 -2
README.md ADDED
@@ -0,0 +1,118 @@
+ ---
+ license: mit
+ library_name: transformers
+ base_model:
+ - deepseek-ai/DeepSeek-V3.1-Base
+ ---
+ # DeepSeek-V3.2-Exp
+
+ <!-- markdownlint-disable first-line-h1 -->
+ <!-- markdownlint-disable html -->
+ <!-- markdownlint-disable no-duplicate-header -->
+
+ <div align="center">
+ <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
+ </div>
+ <hr>
+ <div align="center" style="line-height: 1;">
+ <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
+ <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
+ <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
+ <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+ <div align="center" style="line-height: 1;">
+ <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
+ <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
+ <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
+ <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+ <div align="center" style="line-height: 1;">
+ <a href="LICENSE" style="margin: 2px;">
+ <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+ ## Introduction
+
+ We are excited to announce the official release of DeepSeek-V3.2-Exp, an experimental version of our model. As an intermediate step toward our next-generation architecture, V3.2-Exp builds upon V3.1-Terminus by introducing DeepSeek Sparse Attention, a sparse attention mechanism designed to explore and validate optimizations for training and inference efficiency in long-context scenarios.
+
+ This experimental release represents our ongoing research into more efficient transformer architectures, with a particular focus on improving computational efficiency when processing extended text sequences.
+
+ <div align="center">
+ <img src="cost.jpg" alt="Inference cost comparison">
+ </div>
+
+ - DeepSeek Sparse Attention (DSA) achieves fine-grained sparse attention for the first time, delivering substantial improvements in long-context training and inference efficiency while maintaining virtually identical model output quality.
+
+ - To rigorously evaluate the impact of introducing sparse attention, we deliberately aligned the training configurations of DeepSeek-V3.2-Exp with those of V3.1-Terminus. Across public benchmarks in various domains, DeepSeek-V3.2-Exp demonstrates performance on par with V3.1-Terminus.
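To make the idea behind fine-grained sparse attention concrete, here is a toy NumPy sketch: each query attends only to a small top-k subset of keys selected by a cheap scoring pass, so per-query cost scales with k rather than the full context length L. The function names and the scoring heuristic are illustrative only, not DeepSeek's actual DSA implementation, which uses a learned indexer trained end-to-end.

```python
import numpy as np

def sparse_attention(q, K, V, index_scores, k):
    # Keep only the k keys the lightweight indexer scored highest;
    # dense attention would use all L keys.
    top = np.argsort(index_scores)[-k:]
    logits = K[top] @ q / np.sqrt(q.shape[-1])   # scaled dot-product on the subset
    w = np.exp(logits - logits.max())
    w /= w.sum()                                  # softmax over the selected keys only
    return w @ V[top]

# Toy usage: 1024 tokens of context, but this query touches just 64 keys.
rng = np.random.default_rng(0)
L, d, k = 1024, 16, 64
q = rng.normal(size=d)
K = rng.normal(size=(L, d))
V = rng.normal(size=(L, d))
index_scores = K @ q            # stand-in for a learned indexer score
out = sparse_attention(q, K, V, index_scores, k)
print(out.shape)                # (16,)
```

The output has the usual attention shape; only the set of keys and values entering the softmax has shrunk from L to k.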
+
+ | Benchmark | DeepSeek-V3.2-Exp | DeepSeek-V3.1-Terminus |
+ | :--- | :---: | :---: |
+ | **Reasoning Mode w/o Tool Use** | | |
+ | MMLU-Pro | 85.0 | 85.0 |
+ | GPQA-Diamond | 79.9 | 80.7 |
+ | Humanity's Last Exam | 19.8 | 21.7 |
+ | LiveCodeBench | 74.1 | 74.9 |
+ | AIME 2025 | 89.3 | 88.4 |
+ | HMMT 2025 | 83.6 | 86.1 |
+ | Codeforces | 2121 | 2046 |
+ | Aider-Polyglot | 74.5 | 76.1 |
+ | **Agentic Tool Use** | | |
+ | BrowseComp | 40.1 | 38.5 |
+ | BrowseComp-zh | 47.9 | 45.0 |
+ | SimpleQA | 97.1 | 96.8 |
+ | SWE-bench Verified | 67.8 | 68.4 |
+ | SWE-bench Multilingual | 57.9 | 57.8 |
+ | Terminal-bench | 37.7 | 36.7 |
+
+ ## How to Run Locally
+
+ We provide updated inference demo code in the [inference](https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp/tree/main/inference) folder to help the community quickly get started with our model and understand its architectural details.
+
+ First convert the Hugging Face model weights to the format required by our inference demo. Set `MP` to match your available GPU count:
+ ```bash
+ cd inference
+ export EXPERTS=256
+ python convert.py --hf-ckpt-path ${HF_CKPT_PATH} --save-path ${SAVE_PATH} --n-experts ${EXPERTS} --model-parallel ${MP}
+ ```
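The `--model-parallel` degree controls how the 256 routed experts are sharded across GPUs. A quick sanity check, written under the assumption that the converter splits experts evenly across ranks, is to confirm that `MP` divides `EXPERTS`:

```python
# Assumption (illustrative): the converter shards EXPERTS evenly across the
# MP model-parallel ranks, so MP must divide EXPERTS. Values mirror the
# commands above.
EXPERTS = 256

def experts_per_rank(experts, mp):
    if experts % mp != 0:
        raise ValueError(f"MP={mp} does not divide {experts} experts evenly")
    return experts // mp

for mp in (1, 2, 4, 8, 16, 32):
    print(mp, experts_per_rank(EXPERTS, mp))
# e.g. MP=8 (one rank per GPU on an 8-GPU node) gives 32 experts per rank
```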
+
+ Launch the interactive chat interface and start exploring DeepSeek's capabilities:
+ ```bash
+ export CONFIG=config_671B_v3.2.json
+ torchrun --nproc-per-node ${MP} generate.py --ckpt-path ${SAVE_PATH} --config ${CONFIG} --interactive
+ ```
+
+ ## License
+
+ This repository and the model weights are licensed under the [MIT License](LICENSE).
+
+ ## Citation
+
+ ```bibtex
+ @misc{deepseekai2025deepseekv32,
+       title={DeepSeek-V3.2-Exp: Boosting Long-Context Efficiency with DeepSeek Sparse Attention},
+       author={DeepSeek-AI},
+       year={2025},
+ }
+ ```
+
+ ## Contact
+
+ If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
inference/README.md CHANGED
@@ -1,12 +1,13 @@
  # DeepSeek V3.2
 
- First convert huggingface model weight files to the format of this project.
+ First convert the Hugging Face model weights to the format required by our inference demo. Set `MP` to match your available GPU count:
  ```bash
+ cd inference
  export EXPERTS=256
  python convert.py --hf-ckpt-path ${HF_CKPT_PATH} --save-path ${SAVE_PATH} --n-experts ${EXPERTS} --model-parallel ${MP}
  ```
 
- Then chat with DeepSeek model at will!
+ Launch the interactive chat interface and start exploring DeepSeek's capabilities:
  ```bash
  export CONFIG=config_671B_v3.2.json
  torchrun --nproc-per-node ${MP} generate.py --ckpt-path ${SAVE_PATH} --config ${CONFIG} --interactive