Qwen2.5-1.5B-Instruct-Lora-Deepseek-R1
This model is a LoRA (Low-Rank Adaptation) fine-tuned version of Qwen2.5-1.5B-Instruct, trained on the DeepSeek-R1-Distill dataset. LoRA was applied to the query (q), key (k), and value (v) projection matrices.
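The adapter configuration itself is not published with this card, but targeting those projections with the `peft` library would look roughly like the sketch below. Only the target modules (q_proj, k_proj, v_proj) come from the description above; the rank, alpha, and dropout values are illustrative assumptions.

```python
# Sketch: attaching a LoRA adapter to the base model with peft.
# target_modules follows the card (q/k/v projections); r, lora_alpha and
# lora_dropout are assumed placeholders, not the actual training config.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

lora_config = LoraConfig(
    r=16,                 # assumption: rank not reported
    lora_alpha=32,        # assumption: alpha not reported
    lora_dropout=0.05,    # assumption: dropout not reported
    target_modules=["q_proj", "k_proj", "v_proj"],  # from the card
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```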
- Base Model: Qwen2.5-1.5B-Instruct
- Dataset: tuanha1305/DeepSeek-R1-Distill
Training Details
- Hardware: 1 × NVIDIA A100 GPU (80GB HBM)
- Training Time: ~7 hours and 17 minutes
- Total Steps: 9000
- Fine-tuning Method: LoRA (q, k, v)
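For reference, the tuanha1305/DeepSeek-R1-Distill dataset listed above can be pulled from the Hub with the `datasets` library. The split name and field layout below are assumptions; check the dataset card for the actual schema.

```python
# Sketch: loading the distillation dataset used for fine-tuning.
# The split name is an assumption; inspect the dataset card for the schema.
from datasets import load_dataset

ds = load_dataset("tuanha1305/DeepSeek-R1-Distill", split="train")
print(ds)     # available columns and number of rows
print(ds[0])  # inspect one example before tokenization
```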
Evaluation on MATH-500 Benchmark
Following the sampling-based Pass@1 methodology inspired by DeepSeek-R1, the model was evaluated with the settings below:
| Parameter | Value |
|---|---|
| dataset | HuggingFaceH4/MATH-500 |
| temperature | 0.6 |
| top_p | 0.95 |
| max_new_tokens | 2048 |
| num_samples | 16 per question |
Pass@1: 54.60% (273 out of 500 questions)
This metric represents the percentage of questions with at least one correct solution among the 16 sampled attempts per question.
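A rough, unoptimized sketch of this evaluation loop is shown below. The MATH-500 split and field names (`test`, `problem`, `answer`) follow the HuggingFaceH4/MATH-500 dataset card but should be verified, and `is_correct` is a hypothetical placeholder; real evaluations normally extract and verify the final boxed answer. Batching and chat templating are ignored here.

```python
# Sketch: sampling-based evaluation on MATH-500 with the settings above.
# `is_correct` is a placeholder for proper answer extraction/verification.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "PursuitOfDataScience/Qwen2.5-1.5B-Instruct-Lora-Deepseek-R1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

math500 = load_dataset("HuggingFaceH4/MATH-500", split="test")

def is_correct(completion: str, reference: str) -> bool:
    # Placeholder check: a real evaluator should normalize and compare
    # the model's final answer against the reference answer.
    return reference.strip() in completion

solved = 0
for example in math500:
    inputs = tokenizer(example["problem"], return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            temperature=0.6,
            top_p=0.95,
            max_new_tokens=2048,
            num_return_sequences=16,  # 16 samples per question
        )
    completions = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    if any(is_correct(c, example["answer"]) for c in completions):
        solved += 1

print(f"Questions with at least one correct sample: {solved}/{len(math500)}")
```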
How to Use
Example Python Script
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "PursuitOfDataScience/Qwen2.5-1.5B-Instruct-Lora-Deepseek-R1"

# Load the tokenizer and model; device_map="auto" places weights on GPU if available.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

test_prompt = "Instruction: Explain how machine learning works\nResponse:"
inputs = tokenizer(test_prompt, return_tensors="pt").to(model.device)

# Sample a completion; temperature and top_p control randomness.
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=200,
        temperature=0.7,
        top_p=0.95,
        do_sample=True,
    )

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"\nGenerated response:\n{generated_text}")
```
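The snippet above assumes the repository can be loaded directly with `AutoModelForCausalLM` (recent transformers versions attach a PEFT adapter automatically when `peft` is installed). If you prefer to load the adapter explicitly on top of the base model, a minimal sketch follows; the `Qwen/Qwen2.5-1.5B-Instruct` Hub id for the base model is assumed.

```python
# Sketch: explicitly loading the base model and attaching the LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B-Instruct"  # assumed Hub id of the base model
adapter_id = "PursuitOfDataScience/Qwen2.5-1.5B-Instruct-Lora-Deepseek-R1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```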
Sample Output
Instruction: Explain how machine learning works
Response: Machine learning is a subset of artificial intelligence that allows computers to learn from data without being explicitly programmed. It involves using algorithms and statistical models to analyze patterns, trends, or relationships in large sets of data and then making predictions or decisions based on these insights.
Here's an overview of the key steps involved in implementing a machine learning model:
1. Data collection: Gather historical data relevant to your problem domain.
2. Data preprocessing: Cleanse, normalize, and transform raw data into a format suitable for analysis.
3. Feature selection: Identify important features (variables) that can help predict outcomes.
4. Model training: Train various machine learning algorithms on subsets of labeled data.
5. Model evaluation: Assess performance metrics like accuracy, precision, recall, etc., using test datasets.
6. Model tuning: Optimize hyperparameters and tweak algorithm settings to improve predictive power.
7. Deployment: Implement trained models in production systems for real-time predictions.