---
language:
- en
tags:
- text-generation
- transformers
- finetuned
- phi-4
- lora
- causal-lm
license: apache-2.0
datasets:
- custom
model-index:
- name: mibera-v1-merged
  results: []
---

# 🏆 `mibera-v1-merged` 🏆

🚀 **Fine-tuned model based on `microsoft/phi-4` using LoRA adapters**

## 🔹 Model Details

- **Base Model**: `microsoft/phi-4`
- **Fine-tuned on**: Custom dataset
- **Architecture**: Transformer-based Causal LM
- **LoRA Adapter Merging**: ✅ Yes (a reference merge sketch appears at the end of this card)
- **Merged Model**: ✅ Ready for inference without adapters

## 📚 Training & Fine-tuning Details

- **Training Method**: Fine-tuning with **LoRA (Low-Rank Adaptation)**
- **LoRA Rank**: 32
- **Dataset**: Custom curated dataset (details not publicly available)
- **Training Library**: 🤗 Hugging Face `transformers` + `peft`

## 🚀 How to Use the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ivxxdegen/mibera-v1-merged"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the merged model (no adapters required)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

print("✅ Model loaded successfully!")
```
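Once loaded, the model can be used for generation through the standard `transformers` API. Below is a minimal sketch; the prompt string and sampling parameters are illustrative placeholders, not values recommended by the model author:

```python
# Encode an example prompt and move it to the model's device
prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; generation settings here are illustrative defaults
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```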
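## 🔧 Reference: Merging LoRA Adapters

This checkpoint ships with the LoRA weights already folded into the base model, so no merge step is needed for inference. For reference only, the sketch below shows how such a merge is typically performed with `peft`; the adapter path and output directory are hypothetical placeholders, not the actual training artifacts.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the original base model
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-4", device_map="auto")

# Attach a trained LoRA adapter (path is hypothetical)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the LoRA weights into the base weights and drop the adapter wrappers
merged = model.merge_and_unload()

# Save a standalone checkpoint that loads without `peft`
merged.save_pretrained("mibera-v1-merged")
```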