  • A parameter-efficient, instruction-fine-tuned phi-4-mini model.

  • Fine-tuned with LoRA (Low-Rank Adaptation).

  • Trained on LinkedIn posts covering a variety of themes.

  • Training details (see the configuration sketch after this list):

    • Training set size: 2,643 examples
    • Quantization: 8-bit
    • Optimizer: AdamW
    • Learning rate: 1e-4
    • Epochs: 1
    • Train batch size: 1
    • Eval batch size: 4
    • Gradient accumulation steps: 8
    • Sequence length: 412
  • LoRA configs:

    • Rank: 16
    • Alpha: 16
    • Dropout: 0.05
  • Model size: 3.84B params (Safetensors; tensor types: F32, BF16, I8)
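
The hyperparameters above can be wired together with `transformers`, `peft`, and `bitsandbytes`. The sketch below is illustrative rather than the exact training script: the base checkpoint id, the LoRA `target_modules`, and the output directory are assumptions not stated in this card.

```python
# Minimal sketch of the fine-tuning configuration described above.
# Assumed (not stated in this card): base checkpoint id, target_modules,
# output directory.
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)

BASE_MODEL = "microsoft/Phi-4-mini-instruct"  # assumption

# 8-bit quantization, as listed under the training details.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA configs from the card: rank 16, alpha 16, dropout 0.05.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["qkv_proj", "o_proj"],  # assumption; depends on base model
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training hyperparameters from the card; pair these with a Trainer and a
# dataset tokenized to the 412-token sequence length.
training_args = TrainingArguments(
    output_dir="linkedin-8bit-phi4",  # assumption
    learning_rate=1e-4,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
)
```

Note that with rank 16 and alpha 16, the LoRA scaling factor (alpha / rank) is 1, a common default that leaves the adapter updates unscaled.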
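To try the released adapter, something along these lines should work with `peft`, assuming the repo ships the adapter weights alongside tokenizer files; the prompt and sampling settings are only examples.

```python
# Sketch of loading the published LoRA adapter for inference.
# The prompt and sampling settings are illustrative assumptions.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

ADAPTER = "SkyR/linkedin-8bit-phi4"

model = AutoPeftModelForCausalLM.from_pretrained(
    ADAPTER,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)

prompt = "Write a LinkedIn post about learning in public."  # example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```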