Phi-3 Mini 4K AVRO Fine-tuned Model (Ollama)

This is a fine-tuned version of Microsoft's Phi-3 Mini 4K model, specifically trained on AVRO-related tasks and exported for use with Ollama.

Model Details

  • Base Model: Microsoft Phi-3 Mini 4K
  • Fine-tuning: LoRA with rank 32, alpha 64, 20 epochs
  • Export Format: GGUF (Quantized q4_k_m)
  • Export Date: 2025-09-15
  • Export Tool: Docker-based Ollama export
  • Model Size: ~7.2GB (quantized)

Files

  • model.gguf: The quantized model file in GGUF format
  • Modelfile: Ollama configuration file with model parameters
  • docker-compose.yml: Docker setup for running the model
  • setup_ollama.sh: Script to set up Ollama with this model
  • test_model.sh: Script to test the model functionality
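
The Modelfile tells Ollama where the weights live and which default sampling parameters to use. A minimal sketch of what such a file typically contains is shown below; the shipped file may differ (in particular it likely also carries a TEMPLATE block for Phi-3's chat format), and the values simply mirror the Model Parameters section further down:

    FROM ./model.gguf
    PARAMETER temperature 0.7
    PARAMETER top_p 0.9
    PARAMETER top_k 40
    PARAMETER num_ctx 4096
    PARAMETER num_predict 2048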

Usage

With Ollama

  1. Download the model files

  2. Run the setup script (see the note after these steps for what it typically does):

    chmod +x setup_ollama.sh
    ./setup_ollama.sh
    
  3. Use the model:

    ollama run phi3-avro
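
The setup script is expected to register the GGUF file with the local Ollama instance (roughly ollama create phi3-avro -f Modelfile; check the script for the exact steps). Once registered, the model also accepts one-shot prompts, for example:

    ollama run phi3-avro "Write an AVRO schema for a user record with id, name and email fields."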
    

With Docker Compose

    docker-compose up -d
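
The contents of the bundled docker-compose.yml are not reproduced here; a typical setup that serves the GGUF through the official ollama/ollama image looks roughly like the sketch below (service name, ports and volume paths are assumptions, so check the shipped file; the FROM path inside the Modelfile must point at wherever the GGUF is mounted in the container):

    services:
      ollama:
        image: ollama/ollama
        ports:
          - "11434:11434"
        volumes:
          - ./model.gguf:/models/model.gguf
          - ./Modelfile:/models/Modelfile
          - ollama_data:/root/.ollama
    volumes:
      ollama_data:

With a setup like that, the model still has to be created once inside the running container:

    docker-compose exec ollama ollama create phi3-avro -f /models/Modelfile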

Model Parameters

  • Temperature: 0.7
  • Top-p: 0.9
  • Top-k: 40
  • Max tokens: 2048
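
These are the defaults baked into the Modelfile; individual requests can override them, for example through the options field of Ollama's HTTP API (assuming the server is listening on its default port, 11434):

    curl http://localhost:11434/api/generate -d '{
      "model": "phi3-avro",
      "prompt": "Explain AVRO schema evolution in two sentences.",
      "options": {"temperature": 0.2, "num_predict": 512},
      "stream": false
    }'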

Fine-tuning Details

This model was fine-tuned using the LoRA (Low-Rank Adaptation) technique:

  • Rank: 32
  • Alpha: 64
  • Training epochs: 20
  • Training completed: 2025-09-14

The model has been specifically trained to understand and work with AVRO schemas, data serialization, and related data engineering tasks.
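
For illustration, a typical task involves reading or producing Apache Avro schemas such as the following (a generic example, not taken from the training data):

    {
      "type": "record",
      "name": "User",
      "namespace": "com.example",
      "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
        {"name": "email", "type": ["null", "string"], "default": null}
      ]
    }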

License

This model inherits the license from the base Phi-3 Mini model. Please refer to Microsoft's Phi-3 licensing terms.

Technical Specifications

  • Quantization: q4_k_m (4-bit K-quant, medium variant)
  • Context length: 4096 tokens
  • Export method: GGUF export performed inside a Docker container
  • Compatible with: Ollama, llama.cpp ecosystem
  • Parameters: 3.82B
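
Because the file is a standard GGUF, it can also be loaded directly with llama.cpp tools. For example (the binary name depends on the llama.cpp version; newer builds ship llama-cli, older ones main):

    llama-cli -m model.gguf -c 4096 -p "Write an AVRO schema for an order event."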