S1-Base: Scientific Foundation Model


Model Introduction

The S1-Base Scientific Foundation Model is trained on professional scientific knowledge and data and serves as both a general-purpose and a specialized multimodal model for scientific tasks. It is designed for scientific reasoning scenarios and complex disciplinary tasks. At the current stage, the model adopts a heterogeneous Mixture-of-Experts (MoE) architecture that automatically routes user queries to deeply customized large language models or to domain-specific models (such as wave, spectrum, field, protein, and biological sequence models).

This repository provides the general scientific large language model from the S1-Base series. The model systematically learns the core theories, laws, and specialized knowledge of six foundational disciplines: mathematics, physics, chemistry, earth science, astronomy, and biology. It is trained on 170 million scientific papers and fine-tuned on millions of high-quality scientific reasoning examples through scientific instruction tuning and multi-disciplinary composite reinforcement learning. The model's subject capabilities are strengthened progressively using training strategies built around high school, undergraduate, and graduate-level curricula.

The model is available in three parameter sizes: S1-Base-8B, S1-Base-32B, and S1-Base-671B. S1-Base-8B and S1-Base-32B are trained from Qwen3-8B and Qwen3-32B, respectively, while S1-Base-671B is trained from DeepSeek-R1-671B. All versions support a 32K context window.

Model Weights

S1-Base is open-sourced under the Apache 2.0 license. You can download the model weights from Hugging Face or ModelScope.

| Model Name | Hugging Face Link | ModelScope Link |
| --- | --- | --- |
| S1-Base-8B | S1-Base-8B | S1-Base-8B |
| S1-Base-32B | S1-Base-32B | S1-Base-32B |
| S1-Base-671B | S1-Base-671B | S1-Base-671B |
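
For local experimentation, the downloaded checkpoints can be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch rather than an official usage example: it assumes the weights ship in standard transformers format (the 8B and 32B models derive from Qwen3, so the stock Auto classes should apply), and `<your_s1_model_path>` is a placeholder for your local weight directory or hub repository id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "<your_s1_model_path>"  # placeholder: local directory or hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # the published weights are BF16
    device_map="auto",
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "State Kepler's three laws of planetary motion."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```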

Deployment

We recommend using vLLM to deploy S1-Base for efficient inference and OpenAI-compatible API services.

Quick-start example:

```shell
pip install vllm
vllm serve <your_s1_model_path> --served-model-name S1-Base-671B
```

The API request and response formats are consistent with the OpenAI API; refer to the official vLLM documentation for details.
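
As an illustration, the server started above can be queried with the openai Python client. This is a minimal sketch assuming vLLM's default listen address (http://localhost:8000/v1); the API key is a placeholder, since vLLM does not require one unless configured.

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="S1-Base-671B",  # must match --served-model-name above
    messages=[
        {"role": "user", "content": "Explain Rayleigh scattering and why the sky appears blue."}
    ],
    temperature=0.6,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```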
