# Phi-3 Mini 4K AVRO Fine-tuned Model (Ollama)
This is a fine-tuned version of Microsoft's Phi-3 Mini 4K model, specifically trained on AVRO-related tasks and exported for use with Ollama.
## Model Details
- Base Model: Microsoft Phi-3 Mini 4K
- Fine-tuning: LoRA with rank 32, alpha 64, 20 epochs
- Export Format: GGUF (Quantized q4_k_m)
- Export Date: 2025-09-15
- Export Tool: Docker-based Ollama export
- Model Size: ~7.2GB (quantized)
## Files

- `model.gguf`: The quantized model file in GGUF format
- `Modelfile`: Ollama configuration file with model parameters
- `docker-compose.yml`: Docker setup for running the model
- `setup_ollama.sh`: Script to set up Ollama with this model
- `test_model.sh`: Script to test the model functionality
## Usage

### With Ollama

1. Download the model files.
2. Run the setup script:

   ```shell
   chmod +x setup_ollama.sh
   ./setup_ollama.sh
   ```

3. Use the model:

   ```shell
   ollama run phi3-avro
   ```
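Beyond the interactive CLI, a running Ollama instance also exposes an HTTP API. A minimal sketch using only the Python standard library (the prompt text is illustrative; the model name matches the `ollama run` command above):

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint (non-streaming).
payload = {
    "model": "phi3-avro",
    "prompt": "Write an Avro schema for a sensor reading with id, value and timestamp.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(json.loads(resp.read())["response"])
except OSError as exc:  # server not running or unreachable
    print(f"Ollama not reachable: {exc}")
```

The same payload shape works with `curl` against `http://localhost:11434/api/generate`.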
### With Docker Compose

```shell
docker-compose up -d
```
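The bundled `docker-compose.yml` will look roughly like the following sketch; the service name, image tag, volume paths, and port mapping here are assumptions for illustration, not the actual file contents:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"          # Ollama REST API
    volumes:
      - ./model.gguf:/models/model.gguf
      - ./Modelfile:/models/Modelfile
    restart: unless-stopped
```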
## Model Parameters
- Temperature: 0.7
- Top-p: 0.9
- Top-k: 40
- Max tokens: 2048
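These defaults correspond to standard Ollama `Modelfile` parameters. A sketch of what the bundled `Modelfile` would contain (the `FROM` path and the mapping of "max tokens" to `num_predict` are assumptions):

```
FROM ./model.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER num_predict 2048
```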
## Fine-tuning Details

This model was fine-tuned using the LoRA (Low-Rank Adaptation) technique:
- Rank: 32
- Alpha: 64
- Training epochs: 20
- Training completed: 2025-09-14
The model has been specifically trained to understand and work with AVRO schemas, data serialization, and related data engineering tasks.
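To make the task domain concrete, here is the kind of Avro record schema the model is trained to work with, built with the standard library `json` module (the record name, namespace, and fields are illustrative):

```python
import json

# An illustrative Avro record schema: a required id, a string field,
# a timestamp with a logical type, and an optional (nullable) field.
user_schema = {
    "type": "record",
    "name": "User",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},
        {"name": "signup_ts", "type": {"type": "long", "logicalType": "timestamp-millis"}},
        {"name": "nickname", "type": ["null", "string"], "default": None},
    ],
}

# Avro schemas are plain JSON, so serialization is straightforward.
print(json.dumps(user_schema, indent=2))
```

Prompting the model with a description of such a record and asking for the corresponding schema, or for serialization code against it, is the intended usage pattern.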
## License
This model inherits the license from the base Phi-3 Mini model. Please refer to Microsoft's Phi-3 licensing terms.
## Technical Specifications

- Quantization: q4_k_m (4-bit K-quant, medium variant)
- Context length: 4096 tokens
- Export method: Docker container compilation
- Compatible with: Ollama, llama.cpp ecosystem
Model repository: oriolrius/phi3-avro-gguf-q4km (base model: microsoft/Phi-3-mini-4k-instruct)