---
license: mit
datasets:
- gretelai/synthetic_text_to_sql
base_model:
- eagle0504/openai-gsm8k-codealpaca-20k-enhanced-deepseek-r1-distill-qwen-1.5b
library_name: transformers
---
# eagle0504/qwen-distilled-scout-1.5b-instruct-gen1
This model is a fine-tuned version of [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B), enhanced with instruction-tuned chain-of-thought (CoT) reasoning across three problem domains: **math**, **text-to-SQL**, and **Python programming**.
Fine-tuning was performed with DeepSpeed on multi-GPU A100/H100 setups provisioned through RunPod, enabling efficient training in memory-constrained environments. The training data consists of CoT-formatted tasks that pair natural-language questions with structured reasoning paths.
An inference notebook is publicly available [here](https://colab.research.google.com/drive/1frVD8iv3T0YhCIymKBLPhqO9d6ncGb8R?usp=sharing).
---
## Model Details
* **Base Model:** [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)
* **Language:** English
* **Architecture:** Causal Language Model (Decoder-only)
* **Tokenizer:** AutoTokenizer from base model
* **Parameter Count:** 1.5 Billion
* **Training Framework:** Hugging Face Transformers + DeepSpeed
* **Compute Environment:** RunPod (6 × A100 SXM, 192 vCPU, 1.5 TB RAM)
---
## Training Dataset
**Datasets Used:**
* [`gretelai/synthetic_text_to_sql`](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql)
* [`eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1`](https://huggingface.co/datasets/eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1)
* [`eagle0504/augmented_codealpaca-20k-using-together-ai-deepseek-v1`](https://huggingface.co/datasets/eagle0504/augmented_codealpaca-20k-using-together-ai-deepseek-v1)
Each example in the dataset follows the structure:
```xml
This is a [math/SQL/Python] problem.
...
...
...
```
This instruction format makes the task type explicit and encourages step-by-step reasoning across all three domains.
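As a rough illustration, the snippet below loads one of the training datasets and wraps a raw question into this task-type-prefixed format. The `question`/`answer` column names and the `to_instruction_example` helper are hypothetical placeholders; the actual column names and reasoning tags are defined by the datasets listed above.
```python
from datasets import load_dataset

# Hypothetical column names; the real schema is defined by the datasets listed above.
def to_instruction_example(row, task_type="math"):
    prompt = f"This is a {task_type} problem.\n{row['question']}"
    return {"text": prompt + "\n" + row["answer"]}

# Assumes a "train" split, as suggested by the dataset name.
ds = load_dataset("eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1", split="train")
ds = ds.map(to_instruction_example)
print(ds[0]["text"][:200])
```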
---
## Fine-Tuning Summary
The base model [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) was fine-tuned on three different datasets using DeepSpeed across various RunPod infrastructure setups. Below is a consolidated summary of the training configurations and results:
| Dataset | Description | GPUs | vCPUs | RAM (GB) | Disk per GPU | Container Image | Duration | Cost / hr | Total Cost | DeepSpeed Stage | Precision | Mean Token Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| `eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1` | OpenAI GSM8K Enhanced v2 | 6 × H100 PCIe | 144 | 1132 | 20 GB | `runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04` | 3 hrs | ~$14 | ~$42 | Stage 1 | FP16 | 98% |
| `eagle0504/augmented_codealpaca-20k-using-together-ai-deepseek-v1` | GSM8K + CodeAlpaca-20K Enhanced | 4 × A100 SXM | 146 | 1144 | 20 GB | `runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04` | 3 hrs | ~$7+ | ~$21+ | Stage 1 | FP16 | 98% |
| `gretelai/synthetic_text_to_sql` | Custom CoT + SQL-Reasoning | 6 × A100 SXM | 192 | 1536 | 20 GB | `runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04` | 2.5 hrs | ~$21 | ~$52.5 | Stage 2 | FP16 | 97% |
---
## Training Configuration
Training was performed with the following configuration (a code sketch of these settings follows the list):
* **Batch Size:** 2 (with gradient accumulation steps = 4)
* **Epochs:** 15
* **Max Length:** 1024 tokens
* **Optimizer:** AdamW
* **Learning Rate:** 5e-5 (with warmup + linear decay)
* **Precision:** FP16
* **DeepSpeed Config:**
* Zero Redundancy Optimizer Stage 2
* Gradient Clipping: 1.0
* AllGather + ReduceScatter optimization
* **Checkpoint Saving:** Disabled to minimize disk usage
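A minimal sketch of how these settings could be expressed with Hugging Face `TrainingArguments` and an inline DeepSpeed config. The exact JSON used during training is not reproduced here; the output directory and warmup ratio are placeholder assumptions.
```python
from transformers import TrainingArguments

# Illustrative ZeRO Stage 2 config mirroring the settings above (not the exact file used in training).
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": True,
        "reduce_scatter": True,
    },
    "gradient_clipping": 1.0,
    "fp16": {"enabled": True},
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 4,
}

training_args = TrainingArguments(
    output_dir="./output",             # placeholder
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=15,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,                 # assumed; the card only states "warmup + linear decay"
    fp16=True,
    save_strategy="no",                # checkpoint saving disabled to minimize disk usage
    deepspeed=ds_config,
)
```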
---
## Evaluation Metric
The model is evaluated with a custom token-level accuracy metric (a code sketch follows the list):
* **Metric:** Mean token-level accuracy
* **Definition:** Accuracy over all non-masked tokens (`labels != -100`)
* **Implementation:** NumPy-based vectorized comparison between predicted tokens and ground truth
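A minimal sketch of such a metric as a `compute_metrics` function for the Hugging Face `Trainer`, assuming predictions have already been reduced to token IDs (e.g., via `preprocess_logits_for_metrics`); with raw logits, take an argmax over the vocabulary axis first:
```python
import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred      # token IDs and labels, shape (batch, seq_len)
    mask = labels != -100                # ignore masked/padded positions
    correct = (predictions == labels) & mask
    return {"mean_token_accuracy": float(correct.sum() / mask.sum())}
```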
---
## Use Case
This model is tuned for **instruction-driven chain-of-thought generation**, and is especially useful in:
* Educational tools for logical reasoning and coding
* Automated SQL and code generation for tabular or structured data systems
* Teaching agents in math, database, and programming domains
* Conversational agents requiring task-specific structured outputs
---
## How to Use
```python
from transformers import StoppingCriteria, StoppingCriteriaList
import torch


class StopOnTokens(StoppingCriteria):
    """Stop generation once any of the given stop-token sequences appears at the end of the output."""

    def __init__(self, stop_token_ids: list):
        super().__init__()
        self.stop_token_ids = stop_token_ids  # list of token-ID lists, one per stop sequence

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Compare the tail of the generated sequence against each stop sequence.
        return any(
            len(token) > 0 and input_ids[0, -len(token):].tolist() == token
            for token in self.stop_token_ids
        )
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("eagle0504/qwen-distilled-scout-1.5b-instruct-gen1")
tokenizer = AutoTokenizer.from_pretrained("eagle0504/qwen-distilled-scout-1.5b-instruct-gen1")

stop_sequence = ""  # placeholder: set this to the stop string used during training
stop_ids = tokenizer.encode(stop_sequence, add_special_tokens=False)
stopping_criteria = StoppingCriteriaList([StopOnTokens([stop_ids])])

# Prefix the question with the task type, matching the training format.
prompt = (
    "This is a math problem.\n"
    "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. "
    "How many clips did Natalia sell altogether?"
)

inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,                    # upper bound on generated tokens
    stopping_criteria=stopping_criteria,    # usually stops earlier, once the stop sequence appears
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
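If a GPU is available, the same snippet can be run on it. The sampling settings below are illustrative assumptions, not recommended defaults from this card:
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,       # sampling instead of greedy decoding; purely illustrative
    temperature=0.7,      # assumed value
    top_p=0.9,            # assumed value
    stopping_criteria=stopping_criteria,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```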
---
## Limitations
* The model is specialized for instruction-following tasks in math, SQL, and Python reasoning. It may require further fine-tuning to generalize to open-domain dialogue or creative generation.
* Input length is capped at 1024 tokens; longer inputs are truncated (see the sketch below for enforcing this at tokenization time).
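A minimal way to enforce the limit at tokenization time, reusing the tokenizer and prompt from the usage example above:
```python
inputs = tokenizer(
    prompt,
    return_tensors="pt",
    truncation=True,
    max_length=1024,  # matches the training max length
)
```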
---
## Author
* **Name:** Yiqiao Yin
* **Hugging Face:** [eagle0504](https://huggingface.co/eagle0504)
* **Organization:** WYN AI / Independent AI Researcher
---
## Citation
```bibtex
@misc{yin2025instructgen1,
title={Instruction-Tuned Qwen 1.5B Fine-tuned on Math + SQL + Python CoT Tasks},
author={Yiqiao Yin},
year={2025},
howpublished={\url{https://huggingface.co/eagle0504/qwen-distilled-scout-1.5b-instruct-gen1}},
}
```