Phi-3 Mini ENI Specialist - AWS Multi-ENI Controller Expert

This model is a second-stage fine-tuned version of johnlam90/phi3-mini-4k-instruct-alpaca-lora specialized for AWS Multi-ENI Controller knowledge using a custom dataset of 3 expert-level instructions.

Model Architecture

microsoft/Phi-3-mini-4k-instruct (Base Model)
    ↓
johnlam90/phi3-mini-4k-instruct-alpaca-lora (Stage 1: General Instruction Following)
    ↓
johnlam90/phi3-mini-eni-specialist (Stage 2: AWS Multi-ENI Controller Specialist)

Model Details

  • Base Model: microsoft/Phi-3-mini-4k-instruct
  • Stage 1: Fine-tuned on tatsu-lab/alpaca (52,002 samples)
  • Stage 2: Specialized on johnlam90/johnlam-eni-instructions (3 expert samples)
  • Fine-tuning Method: LoRA (Low-Rank Adaptation) - Second Stage
  • Training Duration: 5.6 seconds
  • Final Training Loss: 1.0032
  • Average Training Loss: 0.9823

Specialization Areas

This model is an expert in:

  • AWS Multi-ENI Controller: Dynamic ENI provisioning and management
  • 5G UPF Deployment: User Plane Function deployment strategies
  • Network Technologies: SR-IOV vs DPDK comparisons
  • Kubernetes Networking: ENI management in containerized environments
  • Telco CNF: Cloud Native Functions for telecommunications

Training Configuration (Stage 2)

  • LoRA Rank: 8
  • LoRA Alpha: 16
  • LoRA Dropout: 0.05
  • Target Modules: qkv_proj, o_proj, gate_proj, up_proj, down_proj
  • Learning Rate: 5e-6
  • Batch Size: 1 (with gradient accumulation steps: 4)
  • Epochs: 10 (6.67 effective epochs)
  • Precision: bfloat16
  • Gradient Checkpointing: Enabled
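
For reference, here is a minimal sketch of how these hyperparameters map onto peft and transformers objects. The exact training script is not published, so output_dir is a placeholder:

from peft import LoraConfig
from transformers import TrainingArguments

# Stage 2 LoRA configuration, mirroring the values listed above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

# Optimization settings, mirroring the values listed above
training_args = TrainingArguments(
    output_dir="phi3-mini-eni-specialist",  # placeholder path
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    bf16=True,
    gradient_checkpointing=True,
)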

Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Load Stage 1 LoRA (Alpaca)
model = PeftModel.from_pretrained(base_model, "johnlam90/phi3-mini-4k-instruct-alpaca-lora")

# Load Stage 2 LoRA (ENI Specialist)
model = PeftModel.from_pretrained(model, "johnlam90/phi3-mini-eni-specialist", adapter_name="eni_specialist")
model.eval()

# Ask ENI-specific questions
prompt = "How does the AWS Multi-ENI Controller manage dynamic ENI provisioning?"
formatted_prompt = f'''### Instruction:
{prompt}

### Response:
'''

# Generate expert response
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)  # keep inputs on the model's device
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=False,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id
    )

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("### Response:")[1].strip())
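
Stacking two PeftModel wrappers keeps both adapters active at inference time. If a single standalone checkpoint is preferred, one option is to fold each adapter into the weights in sequence (a sketch; merge order matters, Stage 1 must be merged before Stage 2, and the output path is a placeholder):

# Optional: merge both adapters into the base weights for standalone deployment
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)
stage1 = PeftModel.from_pretrained(base, "johnlam90/phi3-mini-4k-instruct-alpaca-lora")
model = stage1.merge_and_unload()  # Stage 1 weights folded into the model

stage2 = PeftModel.from_pretrained(model, "johnlam90/phi3-mini-eni-specialist")
model = stage2.merge_and_unload()  # Stage 2 weights folded on top

model.save_pretrained("phi3-mini-eni-merged")  # placeholder output path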

Example Outputs

Question: "How does the AWS Multi-ENI Controller manage dynamic ENI provisioning?"

ENI Specialist Response:

The AWS Multi-ENI Controller manages dynamic ENI provisioning by automatically creating and deleting ENIs as needed. It also manages the lifecycle of ENIs, including their creation, deletion, and termination. The controller also provides a centralized management interface for monitoring and managing ENIs.

Question: "What steps should be taken to deploy a 5G UPF using the AWS Multi-ENI Controller?"

ENI Specialist Response:

The steps to deploy a 5G UPF using the AWS Multi-ENI Controller are:

  1. Create a VPC with a public subnet and a private subnet.
  2. Create a NAT Gateway in the public subnet.
  3. Create an Elastic Network Interface (ENI) in the private subnet.
  4. Create a 5G UPF in the private subnet.
  5. Configure the 5G UPF to use the ENI.
  6. Configure the 5G UPF to use the NAT Gateway.
  7. Configure the 5G UPF to use the VPC.
  8. Configure the 5G UPF to use the AWS Multi-ENI Controller.

Performance

In qualitative testing, the model demonstrates strong specialization:

  • βœ… Expert-level knowledge of AWS Multi-ENI Controller
  • βœ… Detailed technical explanations
  • βœ… Step-by-step deployment guidance
  • βœ… Clear differentiation of networking technologies
  • βœ… Consistent instruction following

Training Methodology

This model showcases second-stage fine-tuning, where:

  1. A general instruction-following model is first created
  2. Domain-specific knowledge is then layered on top
  3. Small, high-quality datasets can create powerful specialists
  4. Multiple LoRA adapters can be stacked for cumulative learning
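
As an illustration of point 3, the Stage 2 dataset only needs a few Alpaca-style records matching the prompt template from the Usage section. A minimal sketch (the record below is a hypothetical placeholder, not the actual dataset contents):

# Hypothetical record shape; the real samples are in
# johnlam90/johnlam-eni-instructions on the Hugging Face Hub
eni_samples = [
    {
        "instruction": "How does the AWS Multi-ENI Controller manage dynamic ENI provisioning?",
        "input": "",
        "output": "The AWS Multi-ENI Controller provisions ENIs dynamically by ...",
    },
]

def format_alpaca(sample):
    # Must match the "### Instruction: / ### Response:" template used at inference
    return f"### Instruction:\n{sample['instruction']}\n\n### Response:\n{sample['output']}"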

Use Cases

Perfect for:

  • AWS networking consultations
  • 5G infrastructure planning
  • Kubernetes networking guidance
  • Telco CNF deployment assistance
  • Multi-ENI architecture design

License

This model is released under the MIT license, following the base model's licensing terms.

Citation

If you use this model, please cite:

@misc{phi3-mini-eni-specialist,
  title={Phi-3 Mini ENI Specialist: AWS Multi-ENI Controller Expert},
  author={johnlam90},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/johnlam90/phi3-mini-eni-specialist}
}