---
language:
- en
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
tags:
- professional
- linkedin
---
A parameter-efficient, instruction-fine-tuned version of microsoft/Phi-4-mini-instruct.
Fine-tuned with LoRA on LinkedIn posts spanning a variety of themes.
Training details:
- Training size: 2643
- Quantization: 8-bit
- Optimizer: AdamW
- Learning Rate: 1e-4
- Epochs: 1
- Train batch size: 1
- Eval batch size: 4
- Gradient accumulation steps: 8
- Sequence length: 412
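The batch-size and accumulation numbers above imply the effective batch size and optimizer-step count per epoch. A minimal sketch of that arithmetic (derived from the listed hyperparameters, not from training logs):

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
train_size = 2643        # training examples, as listed above
per_device_batch = 1     # train batch size
grad_accum = 8           # gradient accumulation steps

effective_batch = per_device_batch * grad_accum   # examples per optimizer step
steps_per_epoch = train_size // effective_batch   # full optimizer steps in 1 epoch

print(effective_batch, steps_per_epoch)  # 8 330
```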
LoRA configuration:
- Rank: 16
- Alpha: 16
- Dropout: 0.05
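With rank r = 16 and alpha = 16, the LoRA update adds a scaled low-rank term to each adapted weight: W + (alpha / r) · B·A. A self-contained NumPy sketch of that update (illustrative shapes only; the actual adapted modules depend on the training setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in = 64, 64         # hypothetical layer dimensions
r, alpha = 16, 16            # rank and alpha from the config above

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # low-rank factor A (trained)
B = np.zeros((d_out, r))                   # B initialized to zero, so the
                                           # update is exactly 0 before training

delta = (alpha / r) * (B @ A)              # scaled low-rank update
W_adapted = W + delta

# At initialization the adapted weight equals the base weight.
print(np.allclose(W_adapted, W))  # True
```

Because alpha equals the rank here, the scaling factor alpha / r is 1, i.e. the low-rank update is applied unscaled.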