Phi-3 Mini ENI Specialist - AWS Multi-ENI Controller Expert
This model is a second-stage fine-tune of johnlam90/phi3-mini-4k-instruct-alpaca-lora, specialized in AWS Multi-ENI Controller knowledge using a custom dataset of three expert-level instruction samples.
Model Architecture
```
microsoft/Phi-3-mini-4k-instruct (Base Model)
        ↓
johnlam90/phi3-mini-4k-instruct-alpaca-lora (Stage 1: General Instruction Following)
        ↓
johnlam90/phi3-mini-eni-specialist (Stage 2: AWS Multi-ENI Controller Specialist)
```
Model Details
- Base Model: microsoft/Phi-3-mini-4k-instruct
- Stage 1: Fine-tuned on tatsu-lab/alpaca (52,002 samples)
- Stage 2: Specialized on johnlam90/johnlam-eni-instructions (3 expert samples)
- Fine-tuning Method: LoRA (Low-Rank Adaptation) - Second Stage
- Training Duration: 5.6 seconds
- Final Training Loss: 1.0032
- Average Training Loss: 0.9823
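For reference, both training datasets can be pulled straight from the Hugging Face Hub to inspect what each stage saw (a minimal sketch; it assumes the Stage 2 dataset is publicly accessible):

```python
from datasets import load_dataset

# Stage 1 data: the full Alpaca instruction set (52,002 samples)
alpaca = load_dataset("tatsu-lab/alpaca", split="train")

# Stage 2 data: the three expert-level ENI instructions
eni = load_dataset("johnlam90/johnlam-eni-instructions", split="train")

print(len(alpaca), len(eni))  # expected: 52002 3
```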
Specialization Areas
This model is an expert in:
- AWS Multi-ENI Controller: Dynamic ENI provisioning and management
- 5G UPF Deployment: User Plane Function deployment strategies
- Network Technologies: SR-IOV vs DPDK comparisons
- Kubernetes Networking: ENI management in containerized environments
- Telco CNF: Cloud Native Functions for telecommunications
Training Configuration (Stage 2)
- LoRA Rank: 8
- LoRA Alpha: 16
- LoRA Dropout: 0.05
- Target Modules: qkv_proj, o_proj, gate_proj, up_proj, down_proj
- Learning Rate: 5e-6
- Batch Size: 1 (with gradient accumulation steps: 4)
- Epochs: 10 (6.67 effective epochs)
- Precision: bfloat16
- Gradient Checkpointing: Enabled
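The configuration above maps onto a peft/transformers setup roughly as follows. This is a reconstruction from the listed hyperparameters, not the actual training script; `output_dir` and `task_type` are assumptions:

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,                    # LoRA rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",  # assumed; standard for instruction tuning
)

training_args = TrainingArguments(
    output_dir="phi3-mini-eni-specialist",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    bf16=True,
    gradient_checkpointing=True,
)
```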
Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Load Stage 1 LoRA (Alpaca)
model = PeftModel.from_pretrained(base_model, "johnlam90/phi3-mini-4k-instruct-alpaca-lora")

# Load Stage 2 LoRA (ENI Specialist) on top of Stage 1, so both adapters apply
model = PeftModel.from_pretrained(model, "johnlam90/phi3-mini-eni-specialist", adapter_name="eni_specialist")
model.eval()

# Ask ENI-specific questions using the Alpaca prompt format
prompt = "How does the AWS Multi-ENI Controller manage dynamic ENI provisioning?"
formatted_prompt = f'''### Instruction:
{prompt}
### Response:
'''

# Generate expert response (move inputs to the model's device)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=False,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id
    )

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("### Response:")[1].strip())
```
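If you would rather not nest two `PeftModel` wrappers, one optional variant is to merge the Stage 1 adapter into the base weights first and then apply Stage 2 on top (a sketch; `merge_and_unload()` permanently bakes Stage 1 into the in-memory base model):

```python
# Merge Stage 1 into the base weights, then layer Stage 2 on the merged model
stage1 = PeftModel.from_pretrained(base_model, "johnlam90/phi3-mini-4k-instruct-alpaca-lora")
merged = stage1.merge_and_unload()
model = PeftModel.from_pretrained(merged, "johnlam90/phi3-mini-eni-specialist")
model.eval()
```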
Example Outputs
Question: "How does the AWS Multi-ENI Controller manage dynamic ENI provisioning?"
ENI Specialist Response:
The AWS Multi-ENI Controller manages dynamic ENI provisioning by automatically creating and deleting ENIs as needed. It also manages the lifecycle of ENIs, including their creation, deletion, and termination. The controller also provides a centralized management interface for monitoring and managing ENIs.
Question: "What steps should be taken to deploy a 5G UPF using the AWS Multi-ENI Controller?"
ENI Specialist Response:
The steps to deploy a 5G UPF using the AWS Multi-ENI Controller are:
- Create a VPC with a public subnet and a private subnet.
- Create a NAT Gateway in the public subnet.
- Create an Elastic Network Interface (ENI) in the private subnet.
- Create a 5G UPF in the private subnet.
- Configure the 5G UPF to use the ENI.
- Configure the 5G UPF to use the NAT Gateway.
- Configure the 5G UPF to use the VPC.
- Configure the 5G UPF to use the AWS Multi-ENI Controller.
Performance
The model demonstrates excellent specialization with:
- ✅ Expert-level knowledge of AWS Multi-ENI Controller
- ✅ Detailed technical explanations
- ✅ Step-by-step deployment guidance
- ✅ Clear differentiation of networking technologies
- ✅ Consistent instruction following
Training Methodology
This model showcases second-stage fine-tuning, where:
- A general instruction-following model is first created
- Domain-specific knowledge is then layered on top
- Small, high-quality datasets can create powerful specialists
- Multiple LoRA adapters can be stacked for cumulative learning
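On the last point: stacking two LoRA adapters composes their low-rank updates on the same frozen base weights. For a target weight matrix, the effective weight under standard LoRA scaling is roughly (subscripts 1 and 2 denote the Alpaca and ENI adapters):

$$
W_{\text{eff}} = W_0 + \frac{\alpha_1}{r_1} B_1 A_1 + \frac{\alpha_2}{r_2} B_2 A_2
$$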
Use Cases
Perfect for:
- AWS networking consultations
- 5G infrastructure planning
- Kubernetes networking guidance
- Telco CNF deployment assistance
- Multi-ENI architecture design
License
This model is released under the MIT license, following the base model's licensing terms.
Citation
If you use this model, please cite:
```bibtex
@misc{phi3-mini-eni-specialist,
  title={Phi-3 Mini ENI Specialist: AWS Multi-ENI Controller Expert},
  author={johnlam90},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/johnlam90/phi3-mini-eni-specialist}
}
```