base_model: mistralai/Mistral-7B-Instruct-v0.3
Model Card for ritvik77/FineTune_LoRA__AgentToolCall_Mistral-7B_Transformer
This model fine-tunes Mistral-7B-Instruct 🧠 on the Salesforce/xlam-function-calling-60k dataset to improve its ability to generate accurate, structured function calls. The pipeline loads the dataset 📂, dynamically removes unneeded columns such as "query" and "answers" for cleaner training data, and splits it 90% training / 10% test for evaluation. A preprocess() function structures each record in JSON format 📝 and strengthens the model's reasoning through Chain-of-Thought (CoT) prompting, while added special tokens guide structured outputs 🔧. The model is further optimized with bnb_4bit quantization, reducing its size to roughly 4.5 GB and improving inference efficiency 🚀. The result is a model that handles complex API requests with improved accuracy and stability. 🔍
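As a rough illustration of that data pipeline, the sketch below uses the 🤗 datasets API; the seed and the exact column handling are assumptions for illustration, not the card author's exact code.

```python
from datasets import load_dataset

# Load the function-calling dataset
dataset = load_dataset("Salesforce/xlam-function-calling-60k", split="train")

# Dynamically drop columns that are not needed for training
# (illustrative; the card only names "query" and "answers")
drop = [c for c in ("query", "answers") if c in dataset.column_names]
dataset = dataset.remove_columns(drop)

# 90% training / 10% test split for evaluation
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```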
Model Details
This model was produced by a structured fine-tuning pipeline for Mistral-7B-Instruct built on the Salesforce/xlam-function-calling-60k dataset. The goal is to improve the model's ability to:
✅ Accurately understand user queries
✅ Generate precise function calls in structured JSON format
✅ Leverage Chain-of-Thought (CoT) reasoning for step-by-step function generation
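For concreteness, a target completion might look like the following. This is a hypothetical example; the exact output schema is an assumption based on xlam-style function calling, not a documented contract of this model.

```python
# Hypothetical structured function call the fine-tuned model should emit,
# serialized as JSON inside the generated text.
expected_call = [
    {"name": "get_weather", "arguments": {"city": "Berlin", "unit": "celsius"}}
]
```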
Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: Ritvik Gaur
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: mistralai/Mistral-7B-Instruct-v0.3
Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Decode and print a generated response. Assumes `tokenizer`, `model`, and
# `outputs` from the loading and generation steps shown in "How to Get
# Started with the Model" below.
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Output:\n", generated_text)
```
Direct Use
Generating structured JSON function calls (agent tool calls) from natural-language user queries, for example routing API requests inside an agent framework.
Downstream Use [optional]
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
Install the dependencies, then load the model:

```bash
pip install transformers torch
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Model ID copied from Ritvik's Hugging Face repo
model_name = "ritvik77/FineTune_LoRA__AgentToolCall_Mistral-7B_Transformer"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # Efficient for GPU
    device_map="auto",           # Automatically distribute across GPU/CPU
)

# Set to evaluation mode
model.eval()
```
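A minimal generation sketch follows; the prompt and decoding settings are illustrative assumptions, not values documented for this model.

```python
# Hypothetical user query; the exact prompt template used during
# fine-tuning is not documented in this card.
prompt = "What's the weather in Berlin today?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Output:\n", generated_text)
```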
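To approximate the ~4.5 GB bnb_4bit footprint mentioned above, the model can instead be loaded with bitsandbytes quantization. This is a sketch assuming the bitsandbytes package is installed, not the card author's exact setup.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit quantized loading (requires `pip install bitsandbytes`)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```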
Training Details
Training Data
Salesforce/xlam-function-calling-60k (≈60k function-calling examples), split 90% training / 10% test as described above.
Training Procedure
Preprocessing [optional]
Unneeded columns such as "query" and "answers" are dropped, and a preprocess() function structures each record as a JSON-formatted prompt with Chain-of-Thought (CoT) reasoning; special tokens are added to guide structured outputs.
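A minimal sketch of such a preprocess() function is shown below; the field names, prompt wording, and "text" output key are assumptions for illustration, not the author's exact code.

```python
def preprocess(example):
    # Hypothetical record field; the real dataset schema may differ.
    tools = example.get("tools", "[]")
    prompt = (
        "You are a function-calling assistant. Think step by step (CoT), "
        "then emit the final function call as JSON.\n"
        f"Available tools: {tools}\n"
    )
    return {"text": prompt}
```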
Training Hyperparameters
| Hyperparameter | Value | Description |
|---|---|---|
| Base Model | mistralai/Mistral-7B-Instruct-v0.3 | Foundation model for fine-tuning |
| Fine-Tuning Method | LoRA (Low-Rank Adaptation) | Efficiently trains only a small subset of parameters |
| LoRA Rank Dimension | 128 | Controls the size of the trainable LoRA layers |
| LoRA Alpha | 128 | Scaling factor for LoRA layers |
| LoRA Dropout | 0.1 | Adds regularization to improve model generalization |
| Train Batch Size | 2 | Balanced for stable performance on an A100 (40 GB VRAM) |
| Eval Batch Size | 2 | Ensures consistent evaluation during training |
| Gradient Accumulation Steps | 8 | Maintains an effective batch size of 16 |
| Learning Rate | 2e-4 | Optimized for stable convergence |
| Warmup Ratio | 0.1 | Gradual learning-rate increase for smoother training |
| Weight Decay | 0.1 | Prevents overfitting by penalizing large weights |
| Max Gradient Norm | 1.0 | Limits gradient spikes for stable training |
| Number of Epochs | 2 | Balanced performance without overfitting |
| Learning Rate Scheduler | Cosine | Provides smoother convergence |
| Quantization | bnb_4bit | Reduces model size while preserving performance |
| Precision | fp16 | Optimized for modern GPUs (A100/4090) |
| Gradient Checkpointing | Enabled | Reduces memory usage during backpropagation |
| Optimizer | adamw_bnb_8bit | Memory-efficient optimizer for quantized models |
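As a sketch, these settings roughly correspond to the following peft/bitsandbytes/transformers configuration. The `target_modules` list, `output_dir`, and any argument not in the table above are assumptions, not the card author's exact training script.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# bnb_4bit quantization with fp16 compute, per the table
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA settings from the table; target_modules is an assumption
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="mistral-7b-toolcall-lora",  # illustrative path
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size 16
    learning_rate=2e-4,
    warmup_ratio=0.1,
    weight_decay=0.1,
    max_grad_norm=1.0,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    fp16=True,
    gradient_checkpointing=True,
    optim="adamw_bnb_8bit",
)
```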
Speeds, Sizes, Times [optional]
[More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Model Examination [optional]
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications [optional]
Model Architecture and Objective
[More Information Needed]
Compute Infrastructure
[More Information Needed]
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]