---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-72B-Instruct
pipeline_tag: text-generation
---
# MindLink
[English](README.md) | [中文](README_CN.md)
## Model Description
We introduce **MindLink**, a new family of large language models developed by **Kunlun Inc**. Built on **Qwen**, these models incorporate our latest advances in post-training techniques. MindLink demonstrates strong performance across various common benchmarks and is widely applicable in diverse AI scenarios. We welcome feedback to help us continuously optimize and improve our models.
### 🚀 Model Downloads
| **🤗 Model** | **📏 Context Length** | **⬇️ Download** |
| :---: | :---: | :---: |
| **MindLink 32B** | `128K` | [🤗 **HuggingFace**](https://huggingface.co/Skywork/MindLink-32B-0801) |
| **MindLink 72B** | `128K` | [🤗 **HuggingFace**](https://huggingface.co/Skywork/MindLink-72B-0801) |
### 📄 Technical Report
Our training methodology and evaluation are described in the technical report: [MindLink](https://github.com/SkyworkAI/MindLink/blob/main/mindlink.pdf)
---
## Highlights
* **Plan-based Reasoning**: Without a "think" tag, MindLink achieves competitive performance with leading proprietary models across a wide range of reasoning and general tasks, significantly reducing inference cost and improving multi-turn capabilities.
* **Mathematical Framework**: A mathematical framework that analyzes the effectiveness of both **Chain-of-Thought (CoT)** and **Plan-based Reasoning**.
* **Adaptive Reasoning**: It automatically adapts its reasoning strategy to task complexity: complex tasks produce detailed reasoning traces, while simpler tasks yield concise outputs.
---
## Quickstart
The following code snippet shows how to load the tokenizer and model and generate content using `apply_chat_template`.
> ⚠️ Please make sure you have installed `transformers>=4.51.0`; lower versions are not supported.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Skywork/MindLink-72B-0801"

# Load the model and tokenizer; device_map="auto" shards across available GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is the capital of China?"
messages = [
    {"role": "user", "content": prompt}
]

# Render the chat template and append the generation prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens, keeping only the newly generated ones
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
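Since MindLink is designed for multi-turn use, the same snippet extends naturally to a conversation loop: keep a running `messages` list and append each assistant reply before the next user turn. A minimal sketch, where `chat_turn` and `generate_fn` are hypothetical helpers (in practice, `generate_fn` would wrap the `apply_chat_template` + `generate` + decode steps shown above):

```python
def chat_turn(messages, user_input, generate_fn):
    """Append a user turn, call the model, record and return its reply.

    messages:    running list of {"role", "content"} dicts
    user_input:  the new user message
    generate_fn: callable taking the message list and returning the
                 assistant's text (a stand-in for the real model call)
    """
    messages.append({"role": "user", "content": user_input})
    reply = generate_fn(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    # Demo with a stub in place of the real model call
    history = []
    echo = lambda msgs: f"You said: {msgs[-1]['content']}"
    chat_turn(history, "Hello", echo)
    chat_turn(history, "Tell me more", echo)
    print(len(history))  # 4 entries: two user turns, two assistant turns
```

Because the full history is re-templated on every turn, earlier turns stay within the model's 128K context window for long conversations.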
For deployment, you can use `sglang>=0.4.9.post1` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Skywork/MindLink-72B-0801 --tp 2
```
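Once the server is up, it can be queried like any OpenAI-compatible endpoint. A minimal client sketch, assuming SGLang's default port 30000 (`build_chat_payload` is an illustrative helper, not part of any library):

```python
import json
import urllib.request

def build_chat_payload(model, user_content, max_tokens=512):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    payload = build_chat_payload("Skywork/MindLink-72B-0801",
                                 "What is the capital of China?")
    req = urllib.request.Request(
        "http://localhost:30000/v1/chat/completions",  # default SGLang port
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```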
---
## API Access
📢 We provide developers with a **one-month free trial** of our API for exploring and testing our models. To request access to an **Open WebUI account** (https://sd1svahsfo0m61h76e190.apigateway-cn-beijing.volceapi.com), please contact us at **[mindlink@skywork.ai](mailto:mindlink@skywork.ai)**.
⚠️ Note: If you encounter inconsistent responses during inference, we recommend clearing the session context (history) and retrying.
### 🔧 Usage Instructions
Our Chat API is compatible with OpenAI's format; simply include your API key as a Bearer token in the `Authorization` header of HTTP POST requests.
#### ✅ Sample Request using `curl`:
```bash
curl -X POST https://sd2690u280c6ft26qcdi0.apigateway-cn-beijing.volceapi.com/v1/chat/completions \
-H "Authorization: Bearer nc6Dt7DrLJNzLELiqOR1bogO5Oh1qHtO" \
-H "Content-Type: application/json" \
-d '{
"model": "Mind_Link_beta_32B",
"messages": [
{"role": "user", "content": "What is the capital of China?"}
],
"temperature": 0.7,
"max_tokens": 128,
"stream": false
}'
```
#### 🐍 Sample Request using Python:
```python
import requests
API_KEY = "nc6Dt7DrLJNzLELiqOR1bogO5Oh1qHtO"
API_URL = "https://sd2690u280c6ft26qcdi0.apigateway-cn-beijing.volceapi.com/v1/chat/completions"
headers = {
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json"
}
payload = {
"model": "Mind_Link_beta_32B",
"messages": [
{"role": "user", "content": "What is the capital of China?"}
],
"temperature": 0.7,
"max_tokens": 128,
"stream": False
}
response = requests.post(API_URL, headers=headers, json=payload)
if response.status_code == 200:
reply = response.json()
print("MindLink Response:")
print(reply["choices"][0]["message"]["content"])
else:
print(f"Error {response.status_code}: {response.text}")
```
---
### 🌐 API Interface Details
* **Endpoint**: `https://sd2690u280c6ft26qcdi0.apigateway-cn-beijing.volceapi.com/v1/chat/completions`
* **Authentication**: Pass your API key via the `Authorization: Bearer <API_KEY>` header
* **Request Format**: Compatible with OpenAI's Chat Completion API
* **Supported Fields**: `model`, `messages`, `temperature`, `top_p`, `max_tokens`, `stream`, `stop`, etc.
* **Model Identifiers**: Use either `"Mind_Link_beta_32B"` or `"Mind_Link_beta_72B"`
* **Public API Key**: We provide the following public API key: `"nc6Dt7DrLJNzLELiqOR1bogO5Oh1qHtO"` (requests via this key enter a queue and have limited request rates; contact us for unlimited access).
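When `"stream": true` is set, the endpoint returns Server-Sent Events in OpenAI's streaming format, with each `data:` line carrying a JSON chunk whose `delta` holds the next text fragment. A client-side sketch using the public endpoint and key above (`parse_sse_chunk` is an illustrative helper, not part of the API):

```python
import json

def parse_sse_chunk(line):
    """Extract the content delta from one OpenAI-style SSE line.

    Returns the text fragment, or None for non-data lines, the
    [DONE] sentinel, and chunks without a content delta.
    """
    if not line.startswith("data: "):
        return None
    data = line[len("data: "):].strip()
    if data == "[DONE]":
        return None
    delta = json.loads(data)["choices"][0].get("delta", {})
    return delta.get("content")

if __name__ == "__main__":
    import requests
    resp = requests.post(
        "https://sd2690u280c6ft26qcdi0.apigateway-cn-beijing.volceapi.com/v1/chat/completions",
        headers={"Authorization": "Bearer nc6Dt7DrLJNzLELiqOR1bogO5Oh1qHtO"},
        json={
            "model": "Mind_Link_beta_32B",
            "messages": [{"role": "user", "content": "What is the capital of China?"}],
            "stream": True,
        },
        stream=True,
    )
    # Print tokens as they arrive instead of waiting for the full reply
    for raw in resp.iter_lines(decode_unicode=True):
        piece = parse_sse_chunk(raw or "")
        if piece:
            print(piece, end="", flush=True)
    print()
```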
---
## Evaluation
The results are shown below:

---
## License and Usage Information
### Model License and Terms of Use
#### 1. Core License
This model is licensed under the **Apache License 2.0**, granting users the following rights:
* ✅ Commercial deployment
* ✅ Source code modification
* ✅ Patent authorization
* ✅ Closed-source derivatives

* ⚠️ Using model names/logos for promotion without written authorization is prohibited
* ⚠️ No warranties are provided
#### 2. Inheritance Declaration
This model is built on **Qwen** (Apache 2.0 License). You must:
* Retain original Qwen copyright notices in derivative works.
* Clearly document changes made in modification notes.
* Adhere to any additional usage restrictions imposed by Qwen.
If you have any questions, please raise an issue or contact us at mindlink@skywork.ai.
---