prithivMLmods committed
Commit 07e0832 · verified · 1 Parent(s): 4b8ee4d

Update README.md

Files changed (1): README.md (+103 −1)
---
datasets:
- Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- trl
- moe
- llama
---

![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/6IACMTfvjkw6sQI7swljn.png)

# **Regulus-Qwen3-R1-Llama-Distill-1.7B**

> **Regulus-Qwen3-R1-Llama-Distill-1.7B** is a **distilled reasoning model** fine-tuned from **Qwen/Qwen3-1.7B** on **Magpie-Align/Magpie-Reasoning-V2-250K-CoT-DeepSeek-R1-Llama-70B**.
> Training leverages **distilled traces from DeepSeek-R1-Llama-70B**, transferring advanced reasoning patterns into a lightweight 1.7B-parameter model.
> It is specialized for **chain-of-thought reasoning across code, math, and science**, and optimized for efficiency and mid-resource deployment.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B-GGUF](https://huggingface.co/prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B-GGUF)
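
For the GGUF build, a minimal loading sketch using the llama-cpp-python bindings (the quantization filename pattern below is hypothetical; check the GGUF repository for the actual file names):

```python
from llama_cpp import Llama

# Download and load a quantized GGUF directly from the Hub.
# The filename pattern is hypothetical; list the repo files for real names.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Derive the quadratic formula step by step."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```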

---

## **Key Features**

1. **Distilled Reasoning from Large-Scale Models**
   Trained with **distilled traces from DeepSeek-R1-Llama-70B**, preserving structured **chain-of-thought reasoning** in a smaller, faster model.

2. **Unified Code + Math + Science Reasoning**
   Strong performance across computational logic, programming tasks, and scientific problem solving.

3. **Structured Chain-of-Thought Generation**
   Produces clear, step-by-step explanations for algorithms, equations, and symbolic tasks.

4. **Optimized Lightweight Footprint**
   Maintains reasoning depth while remaining deployable on **mid-range GPUs**, **offline clusters**, and **edge AI systems**.

5. **Multi-Format Output Support**
   Generates responses in **LaTeX**, **Markdown**, **JSON**, and **tabular formats** for technical and research workflows; see the sketch after this list.
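
As a rough illustration of the structured-output support above, a minimal sketch using the `transformers` chat pipeline (the prompt wording and generation settings are illustrative, not from this card):

```python
import json
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Answer with a single JSON object and nothing else."},
    {"role": "user", "content": "List the first three prime numbers with a one-line reason each."},
]

result = pipe(messages, max_new_tokens=256)
reply = result[0]["generated_text"][-1]["content"]

# The model may wrap the JSON in prose or code fences; parse defensively.
try:
    print(json.loads(reply))
except json.JSONDecodeError:
    print(reply)
```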

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B"

# Load the model and tokenizer; device_map="auto" places weights on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain step by step how to solve a system of linear equations using Gaussian elimination."

messages = [
    {"role": "system", "content": "You are a reasoning assistant skilled in math, code, and scientific logic."},
    {"role": "user", "content": prompt}
]

# Build the prompt string with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Keep only the newly generated tokens (drop the prompt).
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
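
Decoding settings are not specified in this card. Continuing from the Quickstart above, a hedged sketch with common sampled-decoding defaults, plus handling for DeepSeek-R1-style `<think>` traces if the model emits them (neither is confirmed by this card):

```python
# Assumed settings, not from the card: sampled decoding is a common starting
# point for distilled reasoning models; tune for your workload.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
response = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]

# If the output wraps its reasoning in <think>...</think> (common for R1-style
# distills, but not confirmed here), separate the trace from the final answer.
if "</think>" in response:
    reasoning, answer = response.split("</think>", 1)
    print(answer.strip())
else:
    print(response)
```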

---

## **Intended Use**

* **Math and algorithm tutoring** with clear reasoning steps
* **Code reasoning and synthesis** for debugging and algorithm design
* **Scientific problem solving** in physics, chemistry, and biology
* **Structured educational assistant** for step-by-step learning
* **Efficient deployment** where distilled reasoning fidelity is required

## **Limitations**

* Derived from **distilled traces**, so reasoning may be shallower than that of the full-scale teacher model
* Not tuned for general-purpose conversation or creative writing
* Context length limits multi-document or long-codebase reasoning
* Optimized for structured reasoning, not emotional or casual dialogue