---
license: mit
language:
- en
base_model:
- inclusionAI/Ling-mini-base-2.0-20T
pipeline_tag: text-generation
library_name: transformers
tags:
- moe
---

# Ring-mini-linear-2.0

[🤗 Hugging Face](https://huggingface.co/inclusionAI/Ring-mini-linear-2.0) | [🤖 ModelScope](https://modelscope.cn/models/inclusionAI/Ring-mini-linear-2.0)

## Introduction

We are excited to announce the official open-source release of Ring-mini-linear-2.0! Building on the success of our Ling 2.0 series, this model continues to use a hybrid architecture of linear and standard attention, balancing high performance with superior efficiency. By combining our proven MoE design with optimizations such as a 1/32 expert activation ratio and MTP layers, Ring-mini-linear-2.0 achieves the performance of an 8B dense model while activating only 1.4B parameters. The model was converted from [Ling-mini-base-2.0](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) and then trained on an additional 600B tokens. On benchmarks, Ring-mini-linear-2.0 not only holds its own against standard-attention models such as Ring-mini-2.0, but also outperforms other open-source MoE and dense models of its class on several demanding tasks. With native support for a 128K context length, it is especially fast and accurate on long inputs and outputs.

Figure 1: Hybrid Linear Model Architecture
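For intuition, here is a minimal, self-contained PyTorch sketch of the two ideas the figure illustrates: interleaving linear-attention blocks with a minority of standard softmax-attention blocks, and routing each token to a small fraction of MoE experts. All names, dimensions, the feature map, and the 1-in-8 softmax-layer ratio below are illustrative assumptions, not the released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # standard attention: O(n^2) time via the full score matrix
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v

def linear_attention(q, k, v):
    # kernelized attention: O(n) time, fixed-size (d x d) summary (non-causal form for brevity)
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0           # positive feature map (assumed)
    kv = torch.einsum("bnd,bne->bde", k, v)         # sum_n phi(k_n) v_n^T
    z = torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + 1e-6
    return torch.einsum("bnd,bde->bne", q, kv) / z.unsqueeze(-1)

class SparseMoE(nn.Module):
    """Toy MoE FFN: each token goes to its top-1 expert (~1/num_experts activation)."""
    def __init__(self, dim, num_experts=32):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):
        flat = x.reshape(-1, x.shape[-1])
        gate = self.router(flat).softmax(dim=-1)
        weight, idx = gate.max(dim=-1)               # top-1 expert per token
        out = torch.zeros_like(flat)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = weight[mask, None] * expert(flat[mask])
        return out.reshape_as(x)

class HybridBlock(nn.Module):
    def __init__(self, dim, use_softmax):
        super().__init__()
        self.use_softmax = use_softmax
        self.qkv = nn.Linear(dim, 3 * dim)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.moe = SparseMoE(dim)

    def forward(self, x):
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        attn = softmax_attention if self.use_softmax else linear_attention
        x = x + attn(q, k, v)
        return x + self.moe(self.norm2(x))

# e.g. one softmax-attention layer for every seven linear-attention layers (ratio assumed)
layers = nn.Sequential(*[HybridBlock(256, use_softmax=(i % 8 == 7)) for i in range(8)])
print(layers(torch.randn(2, 16, 256)).shape)         # torch.Size([2, 16, 256])
```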

## Evaluation

To properly evaluate the model's reasoning capabilities, we compared it against three other models (Ring-mini-2.0, Qwen3-8B-thinking, and GPT-OSS-20B-Medium) on six challenging reasoning benchmarks spanning mathematics, coding, and science. The results demonstrate that the hybrid linear architecture is by no means inferior to standard softmax attention; in fact, it outperforms the other models on three of the benchmarks.

Figure 2: Model Performance Comparison

Here is a demo of a small Snake game, with the code generated by our model.

Figure 3: Snake Game

## Linear Attention, Highly Sparse, High-Speed Generation

Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-mini-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency. To demonstrate this advantage, we conducted a head-to-head comparison between our model and top-tier competitors of similar size or performance. In the prefill stage, Ring-mini-linear-2.0's throughput is over 12 times that of Qwen3-8B once the context length exceeds 256K. In the high-concurrency decode stage the advantage is even more pronounced: for generation lengths exceeding 32K, its throughput again surpasses 12 times that of Qwen3-8B.
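The constant-space claim comes from the linear-attention layers: in their recurrent form, decoding only updates a fixed-size state per token instead of appending to a KV cache that grows with the sequence. A minimal sketch of that recurrence (the class name, feature map, and shapes are illustrative assumptions, not the model's actual kernels):

```python
import torch
import torch.nn.functional as F

def phi(x):
    # positive feature map used by kernelized attention (choice is an assumption)
    return F.elu(x) + 1.0

class LinearAttentionDecodeState:
    """Per-layer decode state: one (d x d) matrix and one (d,) normalizer,
    regardless of how many tokens have been generated."""
    def __init__(self, dim):
        self.kv = torch.zeros(dim, dim)   # running sum of phi(k_t) v_t^T
        self.z = torch.zeros(dim)         # running sum of phi(k_t)

    def step(self, q_t, k_t, v_t):
        k_t = phi(k_t)
        self.kv += torch.outer(k_t, v_t)  # O(d^2) update, O(1) w.r.t. sequence length
        self.z += k_t
        q_t = phi(q_t)
        return (q_t @ self.kv) / (q_t @ self.z + 1e-6)

dim = 64
state = LinearAttentionDecodeState(dim)
for _ in range(4096):                     # memory stays fixed no matter how long we decode
    out = state.step(torch.randn(dim), torch.randn(dim), torch.randn(dim))
print(out.shape)                          # torch.Size([64])
```

The standard-attention layers in the hybrid stack still keep a conventional KV cache, but because they are the minority, overall inference cost remains dominated by the linear-attention layers.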

Figure 4: Ring-mini-linear-2.0 prefill throughput

Figure 5: Ring-mini-linear-2.0 decode throughput

## Model Downloads
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-mini-linear-2.0 | 16.8B | 1.4B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-mini-linear-2.0)<br>[🤖 ModelScope](https://modelscope.cn/models/inclusionAI/Ring-mini-linear-2.0) |
## Quickstart

### Requirements

```bash
pip install flash-linear-attention==0.3.2
pip install transformers==4.56.1
```

### 🤗 Hugging Face Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-mini-linear-2.0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompts = [
    "Give me a short introduction to large language models."
]

# Build chat-formatted inputs for each prompt.
input_texts = []
for prompt in prompts:
    messages = [
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=True,
    )
    input_texts.append(text)
print(input_texts)

model_inputs = tokenizer(
    input_texts,
    return_tensors="pt",
    return_token_type_ids=False,
    padding=True,
    padding_side="left",
).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    do_sample=False,
)
# Strip the prompt tokens so only the generated continuation is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print("*" * 30)
print(responses)
print("*" * 30)
```

### SGLang

```bash
python -m sglang.launch_server \
    --model-path inclusionAI/Ring-mini-linear-2.0 \
    --trust-remote-code \
    --tp-size 1 \
    --disable-radix-cache \
    --json-model-override-args "{\"linear_backend\": \"seg_la\"}"
```

### vLLM

## Citation