---
license: apache-2.0
datasets:
- simplescaling/aime24_figures
- amphora/QwQ-LongCoT-130K
- HuggingFaceH4/MATH-500
- RyotaKadoya1993/math-5000-nemotron-v2
language:
- en
base_model:
- Qwen/Qwen2-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- thinker
- math
---

![aaa.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/RFgIBf5f3bpPiO1g7-pLc.png)

# **Open-Xi-Math-Preview**

> **Open-Xi-Math-Preview** is a **mathematics-focused reasoning model** fine-tuned from **Qwen2-1.5B-Instruct** on a **modular collection of math datasets** designed to strengthen **mathematical thinking**. It offers robust symbolic reasoning, structured deduction, and compact coding, and is optimized for edge deployment on **resource-constrained devices**.

## **Key Improvements**

1. **Mathematical Reasoning via Modular Data**:
   Fine-tuned on diverse and structured math-focused datasets to handle problem-solving, symbolic computation, and multi-step derivations with efficiency on low-power devices.

2. **Compact Coding & Math Assistant**:
   Understands multiple programming languages and math representations (e.g., LaTeX, symbolic algebra). Ideal for math-enhanced embedded coding and problem-solving environments.

3. **Error Detection in Structured Data**:
   Detects and corrects logical errors, malformed math expressions, and broken structured formats (e.g., JSON, XML, LaTeX), all while maintaining low inference latency.

4. **Instruction Following for Problem-Solving**:
   Enhanced with strong instruction-following performance, particularly for step-wise solutions in math word problems, logic puzzles, and equation derivations.

5. **Extended Context Support**:
   Supports **128K-token inputs** and **8K-token outputs**, enabling long math chains-of-thought and proofs while remaining lightweight enough for edge inference; a streaming sketch for such long outputs follows this list.
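
For long chains-of-thought, it is often preferable to stream tokens as they are produced rather than wait for a full multi-thousand-token completion. Below is a minimal sketch using the `TextStreamer` utility from `transformers`; the repository id is a placeholder, as in the quickstart below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name = "your-username/Open-Xi-Math-Preview"  # placeholder repo id

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Print decoded tokens to stdout as they are generated,
# skipping the echoed prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

model.generate(inputs, streamer=streamer, max_new_tokens=2048)
```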

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-username/Open-Xi-Math-Preview"

# Load the model and tokenizer; device_map="auto" places weights on the
# available accelerator, and torch_dtype="auto" uses the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the equation: 2x^2 - 4x - 6 = 0. Show all steps."
messages = [
    {"role": "system", "content": "You are a helpful and concise mathematical reasoning assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the echoed prompt tokens so only the model's answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
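
For reference, the example equation in the prompt has a short closed-form solution that a correct step-wise answer should reach:

```latex
2x^2 - 4x - 6 = 0
\implies x^2 - 2x - 3 = 0    % divide both sides by 2
\implies (x - 3)(x + 1) = 0  % factor
\implies x = 3 \ \text{or}\ x = -1
```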

## **Intended Use**

1. **Math-Centric Edge Applications**:
   Designed for embedded AI systems in calculators, educational tools, and mobile math tutoring.

2. **Advanced Math Reasoning**:
   Effective for solving algebra, geometry, calculus, and competition math problems using logical derivation.

3. **Educational & Instructional Aids**:
   Useful for step-by-step teaching in math-heavy domains like STEM education, coding classes, and robotics kits.

4. **Low-Latency Math Agents**:
   Deployable in customer support bots, interactive kiosks, and STEM-based IoT systems for fast math-based interactions.

5. **Structured Output Generation**:
   Generates LaTeX, JSON, or tabular formats for math answers and reasoning in structured pipelines; a prompting sketch follows this list.
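
As a sketch of the structured-output use case, one can request JSON directly in the prompt and parse the reply. The schema below is illustrative, not a contract the model is guaranteed to honor, and `model`/`tokenizer` are assumed to be loaded as in the quickstart above.

```python
import json

# The requested schema is illustrative only; small models may deviate from it.
prompt = (
    "Solve 2x^2 - 4x - 6 = 0. Reply ONLY with JSON of the form "
    '{"roots": [...], "steps": ["..."]}.'
)
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=512)
# Decode only the tokens generated after the prompt.
reply = tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True)

try:
    answer = json.loads(reply)   # e.g. {"roots": [3, -1], "steps": [...]}
except json.JSONDecodeError:
    answer = None                # validate before trusting structured output
```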

## **Limitations**

1. **Edge Hardware Still Required**:
   Though lightweight, the model performs best on devices equipped with NPUs, GPUs, or other optimized ML accelerators.

2. **No Internet or Real-Time Info**:
   Static knowledge cutoff; cannot retrieve or interact with live external data sources.

3. **Not Suited for Creative Tasks**:
   Focused on deterministic reasoning; not built for abstract, poetic, or open-ended creative writing.

4. **Prompt Sensitivity**:
   Clear, structured prompts yield more accurate reasoning; ambiguous questions may degrade output quality.

5. **Potential Dataset Biases**:
   The model may carry forward biases or inconsistencies present in its training datasets; vet outputs in critical settings.