---
license: apache-2.0
---

# S1-Base: Scientific Foundation Model

[中文版](./README_zh.md) | [English](./README.md)

## Model Introduction

The S1-Base Scientific Foundation Model is trained on professional scientific knowledge and data and serves as both a general-purpose and a specialized multimodal model for scientific tasks. It is designed for scientific reasoning scenarios and complex disciplinary tasks. At the current stage, the model adopts a heterogeneous Mixture-of-Experts (MoE) architecture that automatically routes user queries to deeply customized large language models or to domain-specific models (such as wave, spectrum, field, protein, and biological sequence models).

This repository provides the general scientific large language model from the S1-Base series. The model systematically learns and understands the core theories, laws, and specialized knowledge of the six major foundational disciplines: mathematics, physics, chemistry, earth science, astronomy, and biology. It is trained on 170 million scientific papers and fine-tuned on millions of high-quality scientific reasoning samples through scientific instruction tuning and multi-disciplinary composite reinforcement learning. The model's subject capabilities are further strengthened step by step using training strategies based on high school, undergraduate, and graduate-level curricula.

The model is available in three parameter sizes: S1-Base-8B, S1-Base-32B, and S1-Base-671B. S1-Base-8B and S1-Base-32B are trained from [Qwen3-8B](https://github.com/QwenLM/Qwen3) and [Qwen3-32B](https://github.com/QwenLM/Qwen3), respectively, while S1-Base-671B is trained from [DeepSeek-R1-671B](https://github.com/deepseek-ai/DeepSeek-R1). All versions support a 32k context window.

## Model Weights

S1-Base is open-sourced under the Apache 2.0 license. You can download the model weights from [Huggingface](https://huggingface.co/collections/ScienceOne-AI/s1-base-687a2373fde4791bc6c761f0) or [ModelScope](https://modelscope.cn/collections/S1-Base-66b70cf6e51c48).

| Model Name | Huggingface Link | ModelScope Link |
|--------------|------------------|-----------------|
| S1-Base-8B | [S1-Base-8B](https://huggingface.co/ScienceOne-AI/S1-Base-8B) | [S1-Base-8B](https://modelscope.cn/models/ScienceOne-AI/S1-Base-8B) |
| S1-Base-32B | [S1-Base-32B](https://huggingface.co/ScienceOne-AI/S1-Base-32B) | [S1-Base-32B](https://modelscope.cn/models/ScienceOne-AI/S1-Base-32B) |
| S1-Base-671B | [S1-Base-671B](https://huggingface.co/ScienceOne-AI/S1-Base-671B) | [S1-Base-671B](https://modelscope.cn/models/ScienceOne-AI/S1-Base-671B) |
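For example, a single checkpoint can be fetched from Huggingface with the `huggingface_hub` command-line tool. The snippet below is a minimal sketch: it assumes the CLI is installed, uses S1-Base-8B, and picks `./S1-Base-8B` as an arbitrary local target directory (downloading from ModelScope works analogously with its own tooling).

```bash
# Install the Hugging Face Hub CLI (skip if already available)
pip install -U "huggingface_hub[cli]"

# Download the S1-Base-8B weights into a local directory
huggingface-cli download ScienceOne-AI/S1-Base-8B --local-dir ./S1-Base-8B
```

The larger checkpoints follow the same pattern with their respective repository names.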
## Deployment

We recommend using [vLLM](https://github.com/vllm-project/vllm) to deploy S1-Base for efficient inference and OpenAI-compatible API services.

**Quick start command example:**

```bash
# Install vLLM
pip install vllm

# Launch an OpenAI-compatible API server for S1-Base
vllm serve <your_s1_model_path> --served-model-name S1-Base-671B
```

The API request and response formats are largely consistent with the OpenAI API. Please refer to the official vLLM documentation for details.
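Once the server is running, it can be exercised with a standard OpenAI-style chat completion request. The example below is a minimal sketch: it assumes vLLM's default listen address of `http://localhost:8000` and reuses the `--served-model-name` from the quick start command; the prompt text is arbitrary.

```bash
# Send a chat completion request to the OpenAI-compatible endpoint served by vLLM
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "S1-Base-671B",
    "messages": [
      {"role": "user", "content": "State the ideal gas law and define each variable."}
    ]
  }'
```

Note that the larger checkpoints typically require multi-GPU serving; vLLM's `--tensor-parallel-size` option covers the common case, and `--max-model-len 32768` can be passed to match the 32k context window.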