yuanshuai committed
Commit 2e21bc3 · verified · 1 Parent(s): 8eebd31

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +68 -71
README.md CHANGED
@@ -9,36 +9,30 @@ language:
  base_model:
  - Qwen/Qwen2.5-32B
  ---
- <div align="center">
-
- # Baichuan-M2-32B

  [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
  [![Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-Model-yellow)](https://huggingface.co/baichuan-inc/Baichuan-M2-32B)

- </div>
-
- ## 🌟 Model Overview
-
- Baichuan-M2-32B is a medical-enhanced reasoning model from Baichuan AI and the second open-source medical-enhanced model the company has released, designed for real-world medical reasoning tasks. Built on the Qwen2.5-32B base model, it uses an innovative Large Verifier System that starts from real-world medical problems to perform medical-domain post-training alignment, achieving a breakthrough in medical performance while preserving the model's general capabilities.

- **Model Features:**

- Baichuan-M2 adopts three core technical innovations: first, a **Large Verifier System** that draws on the characteristics of medical scenarios to build a comprehensive medical verification framework, including a patient simulator and multi-dimensional verification mechanisms; second, **medical-domain adaptation** via Mid-Training, which achieves lightweight and efficient domain adaptation while preserving general capabilities; and finally, a **multi-stage reinforcement learning** strategy that decomposes the complex RL task into hierarchical training stages to progressively improve the model's medical knowledge, reasoning, and patient interaction abilities.

- **Core Highlights:**
- - 🏆 **World's strongest open-source medical model**: Surpasses all open-source models and many frontier closed-source models on HealthBench; the open-source LLM closest to GPT-5 in medical capability
- - 🧠 **Doctor-thinking alignment**: Trained on real case data and a patient simulator, with clinical diagnostic thinking and robust doctor-patient interaction capabilities
- - ⚡ **Efficient deployment and inference**: Supports 4-bit quantized deployment on a single RTX 4090; the MTP version improves token throughput by 58.5% in single-user scenarios

- ## 📊 Performance

- ### HealthBench Metrics
-
- | Model Name | HealthBench | HealthBench-Hard | HealthBench-Consensus |
- |------------|-------------|------------------|-----------------------|
  | Baichuan-M2 | 60.1 | 34.7 | 91.5 |
  | gpt-oss-120b | 57.6 | 30 | 90 |
  | Qwen3-235B-A22B-Thinking-2507 | 55.2 | 25.9 | 90.6 |
@@ -47,80 +41,83 @@ Baichuan-M2 adopts three core technical innovations: first, through the **Large Verifier System**
  | Kimi-K2 | 43 | 10.7 | 90.9 |
  | gpt-oss-20b | 42.5 | 10.8 | 82.6 |

- ### General Metrics

- | Benchmark | Baichuan-M2-32B | Qwen3-32B |
- |-----------|-----------------|-----------|
  | AIME24 | 83.4 | 81.4 |
  | AIME25 | 72.9 | 72.9 |
  | Arena-Hard-v2.0 | 45.8 | 44.5 |
  | CFBench | 77.6 | 75.7 |
  | WritingBench | 8.56 | 7.90 |

- *Note: max_tokens is set to 64k for AIME and 32k for the other benchmarks; temperature is 0.6 throughout.*
-
- ## 🛠️ Technical Features
-
- ### Large Verifier System
- - **Patient simulator**: A virtual patient system built from real clinical cases
- - **Multi-dimensional verification**: 8 dimensions including medical accuracy, response completeness, and follow-up awareness
- - **Dynamic scoring**: Scoring criteria generated in real time to adapt to complex clinical environments

- ### Medical Domain Adaptation
- - **Mid-Training**: Injects medical knowledge while preserving general capabilities
- - **Reinforcement learning**: Multi-stage RL strategy optimization
- - **Generalist-specialist balance**: Medical, general, and math data in a 2:2:1 ratio

- ## 🔧 Quick Start

- ### Installation and Usage

- ```bash
- # Install dependencies
- pip install transformers torch vllm sglang
- ```
-
- ```python
- # Use with Transformers
- from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-M2-32B", trust_remote_code=True)
- model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-M2-32B", trust_remote_code=True)
-
- # Use with vLLM (recommended)
- from vllm import LLM
- llm = LLM(model="baichuan-inc/Baichuan-M2-32B", trust_remote_code=True)
- ```
-
- ```bash
- # Serve with SGLang
- python -m sglang.launch_server --model-path baichuan-inc/Baichuan-M2-32B
  ```

- ## ⚠️ Usage Notices
-
- 1. **Medical disclaimer**: This model is for research and reference only and cannot replace professional medical diagnosis or treatment advice
- 2. **Intended use cases**: Medical education, health consultation, clinical decision support, and similar scenarios
- 3. **Safe use**: Recommended for use under the guidance of medical professionals
-
- ## 📄 License
-
- This project is released under the [Apache License 2.0](LICENSE); research and commercial use are welcome.
-
- ## 🤝 Acknowledgements
-
- - Base model: Qwen2.5-32B
- - Training framework: VERL
- - Inference engines: vLLM, SGLang
- - Quantization: AutoRound, GPTQ
-
- We thank the open-source community for its contributions and will keep giving back to the community to advance medical AI.

- ## 📞 Contact Us

- - More resources: [Baichuan AI website](https://www.baichuan-ai.com)
- - Technical exchange: [GitHub](https://github.com/baichuan-inc)

  ---
-
  <div align="center">

- **Let AI empower healthcare and put health within everyone's reach**

  </div>

  base_model:
  - Qwen/Qwen2.5-32B
  ---
+ # Baichuan-M2-32B-GPTQ-Int4

  [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
  [![Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-Model-yellow)](https://huggingface.co/baichuan-inc/Baichuan-M2-32B)

+ ## 🌟 Model Overview

+ Baichuan-M2-32B is Baichuan AI's medical-enhanced reasoning model and the second medical model released by Baichuan. Designed for real-world medical reasoning tasks, this model builds upon Qwen2.5-32B with an innovative Large Verifier System. Through domain-specific fine-tuning on real-world medical questions, it achieves breakthrough medical performance while maintaining strong general capabilities.

+ **Model Features:**

+ Baichuan-M2 incorporates three core technical innovations. First, the **Large Verifier System** provides a comprehensive medical verification framework tailored to medical scenarios, including patient simulators and multi-dimensional verification mechanisms. Second, **medical domain adaptation** via Mid-Training delivers lightweight, efficient domain adaptation while preserving general capabilities. Third, a **multi-stage reinforcement learning** strategy decomposes the complex RL task into hierarchical training stages that progressively enhance the model's medical knowledge, reasoning, and patient interaction capabilities.

+ **Core Highlights:**
+ - 🏆 **World's Leading Open-Source Medical Model**: Outperforms all open-source models and many proprietary frontier models on HealthBench, coming closest to GPT-5's medical capabilities among open-source models
+ - 🧠 **Doctor-Thinking Alignment**: Trained on real clinical cases and patient simulators, with clinical diagnostic thinking and robust patient interaction capabilities
+ - ⚡ **Efficient Deployment**: Supports 4-bit quantization for single-RTX 4090 deployment (see the quick estimate below), with 58.5% higher token throughput from the MTP version in single-user scenarios
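
As a quick sanity check on the single-card claim (an illustrative back-of-envelope estimate, not an official figure): at 4 bits per weight, a 32B-parameter model needs roughly 15 GiB for weights, which fits within an RTX 4090's 24 GB with room left for activations and KV cache.

```python
# Back-of-envelope weight-memory estimate for 4-bit quantization of a 32B model.
# Illustrative only: ignores quantization scales/zero-points, activations, and KV cache.
params = 32e9                 # parameter count
bytes_per_param = 0.5         # 4 bits = 0.5 bytes
weight_gib = params * bytes_per_param / 1024**3
print(f"~{weight_gib:.1f} GiB of weights vs. 24 GB on an RTX 4090")  # ~14.9 GiB
```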

+ ## 📊 Performance Metrics

+ ### HealthBench Scores

+ | Model Name | HealthBench | HealthBench-Hard | HealthBench-Consensus |
+ |------------|-------------|------------------|-----------------------|
  | Baichuan-M2 | 60.1 | 34.7 | 91.5 |
  | gpt-oss-120b | 57.6 | 30 | 90 |
  | Qwen3-235B-A22B-Thinking-2507 | 55.2 | 25.9 | 90.6 |
  | Kimi-K2 | 43 | 10.7 | 90.9 |
  | gpt-oss-20b | 42.5 | 10.8 | 82.6 |

+ ### General Performance

+ | Benchmark | Baichuan-M2-32B | Qwen3-32B |
+ |-----------|-----------------|-----------|
  | AIME24 | 83.4 | 81.4 |
  | AIME25 | 72.9 | 72.9 |
  | Arena-Hard-v2.0 | 45.8 | 44.5 |
  | CFBench | 77.6 | 75.7 |
  | WritingBench | 8.56 | 7.90 |

+ *Note: AIME uses max_tokens=64k, others use 32k; temperature=0.6 for all tests.*
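
For reference, a minimal sketch of reproducing these decoding settings with vLLM's offline API (an illustrative example, not the official evaluation harness; it assumes a GPU with enough memory to load the model):

```python
# Decoding settings from the note above: temperature 0.6 everywhere,
# 64k max tokens for AIME-style problems and 32k for the other benchmarks.
from vllm import LLM, SamplingParams

llm = LLM(model="baichuan-inc/Baichuan-M2-32B-GPTQ-Int4")
aime_params = SamplingParams(temperature=0.6, max_tokens=65536)
default_params = SamplingParams(temperature=0.6, max_tokens=32768)

outputs = llm.chat(
    [{"role": "user", "content": "What is the remainder when 2^10 is divided by 7?"}],
    sampling_params=default_params,
)
print(outputs[0].outputs[0].text)
```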

+ ## 🔧 Technical Features

+ ### Large Verifier System
+ - **Patient Simulator**: A virtual patient system built from real clinical cases
+ - **Multi-Dimensional Verification**: 8 dimensions including medical accuracy, response completeness, and follow-up awareness
+ - **Dynamic Scoring**: Evaluation criteria generated in real time to adapt to complex clinical scenarios

+ ### Medical Domain Adaptation
+ - **Mid-Training**: Medical knowledge injection while preserving general capabilities
+ - **Reinforcement Learning**: Multi-stage RL strategy optimization
+ - **General-Specialized Balance**: A carefully balanced mix of medical, general, and mathematical training data

+ ## ⚙️ Quick Start

+ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.9.0` to create an OpenAI-compatible API endpoint:
+ - SGLang:
+ ```shell
+ python -m sglang.launch_server --model-path baichuan-inc/Baichuan-M2-32B-GPTQ-Int4 --reasoning-parser qwen3
+ ```
+ - vLLM:
+ ```shell
+ vllm serve baichuan-inc/Baichuan-M2-32B-GPTQ-Int4 --reasoning-parser qwen3
+ ```
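
Once either server is running, any OpenAI-compatible client can talk to it. A minimal sketch (assumes vLLM's default port 8000; SGLang defaults to port 30000 unless `--port` is set):

```python
# Minimal OpenAI-compatible client call against the endpoint started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="baichuan-inc/Baichuan-M2-32B-GPTQ-Int4",
    messages=[{"role": "user", "content": "What are common causes of chest pain?"}],
    temperature=0.6,
)
msg = resp.choices[0].message
# With --reasoning-parser qwen3, the reasoning trace is returned separately when present.
print(getattr(msg, "reasoning_content", None))
print(msg.content)
```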

+ ## MTP Inference with SGLang

+ 1. Replace the qwen2.py file in the sglang installation directory with draft/qwen2.py.
+ 2. Launch SGLang:
+ ```shell
+ python3 -m sglang.launch_server \
+ --model Baichuan-M2-32B-GPTQ-Int4 \
+ --speculative-algorithm EAGLE3 \
+ --speculative-draft-model-path Baichuan-M2-32B-GPTQ-Int4/draft \
+ --speculative-num-steps 6 \
+ --speculative-eagle-topk 10 \
+ --speculative-num-draft-tokens 32 \
+ --mem-fraction 0.9 \
+ --cuda-graph-max-bs 2 \
+ --reasoning-parser qwen3 \
+ --dtype bfloat16
  ```
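
After the server starts, a quick smoke test confirms the speculative-decoding setup serves requests. An illustrative check (assumes SGLang's default port 30000; adjust if you pass `--port`):

```python
# Smoke-test the OpenAI-compatible endpoint exposed by the launch command above.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "Baichuan-M2-32B-GPTQ-Int4",
        "messages": [{"role": "user", "content": "List three red-flag symptoms of chest pain."}],
        "temperature": 0.6,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```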

+ ## ⚠️ Usage Notices
+ 1. **Medical Disclaimer**: For research and reference only; this model cannot replace professional medical diagnosis or treatment
+ 2. **Intended Use Cases**: Medical education, health consultation, clinical decision support
+ 3. **Safe Use**: Recommended for use under the guidance of medical professionals

+ ## 📄 License
+ Licensed under the [Apache License 2.0](LICENSE). Research and commercial use are permitted.

+ ## 🤝 Acknowledgements
+ - Base Model: Qwen2.5-32B
+ - Training Framework: verl
+ - Inference Engines: vLLM, SGLang
+ - Quantization: AutoRound, GPTQ
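
For context on the AutoRound credit above, here is a minimal sketch of how a GPTQ-format INT4 export with AutoRound typically looks. This is an illustrative flow with assumed settings (bits, group size, output path), not the actual recipe used to produce this checkpoint:

```python
# Illustrative AutoRound -> GPTQ-format INT4 export (assumed settings, not the official recipe).
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

base = "baichuan-inc/Baichuan-M2-32B"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# 4-bit weights with group size 128 are common GPTQ settings; both are assumptions here.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128)
autoround.quantize()
autoround.save_quantized("./Baichuan-M2-32B-GPTQ-Int4", format="auto_gptq")
```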

+ Thank you to the open-source community. We will keep contributing back and advancing healthcare AI.

+ ## 📞 Contact Us
+ - Resources: [Baichuan AI Website](https://www.baichuan-ai.com)
+ - Technical Support: [GitHub](https://github.com/baichuan-inc)

  ---

  <div align="center">

+ **Empowering Healthcare with AI, Making Health Accessible to All**

  </div>