# Cover Letter LLaMA 3.2 (LoRA-tuned)

This repository contains LoRA adapters for LLaMA 3.2 3B, fine-tuned to generate personalized, professional cover letters from a job description and an applicant profile.
## Model Details

- **Base Model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
- **Type:** LoRA adapter
- **Task:** Cover letter generation
- **Language:** English
- **License:** Research only (following the LLaMA license)
## Training

### Dataset
The model was fine-tuned on the Cover Letter Dataset by ShashiVish, containing professional cover letters paired with job descriptions.
### Training Configuration
```python
# Requires: transformers, unsloth
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=60,
    learning_rate=2e-4,
    fp16=not is_bfloat16_supported(),  # fall back to fp16 on GPUs without bf16
    bf16=is_bfloat16_supported(),
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
)
```
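For reference, the configuration above trains for a fixed number of optimizer steps rather than epochs; a quick back-of-the-envelope sketch of what that means in examples (assuming a single GPU):

```python
# Values taken from the TrainingArguments above
per_device_train_batch_size = 2
gradient_accumulation_steps = 4
max_steps = 60

# Effective batch size per optimizer step (single GPU)
effective_batch = per_device_train_batch_size * gradient_accumulation_steps
# Total training examples processed over the fixed-step run
examples_seen = effective_batch * max_steps

print(effective_batch)  # 8
print(examples_seen)    # 480
```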
```python
# LoRA configuration
lora_config = {
    "r": 16,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj",
                       "gate_proj", "up_proj", "down_proj"],
    "lora_alpha": 16,
    "lora_dropout": 0,
    "bias": "none",
    "use_gradient_checkpointing": "unsloth",
}
```
### Input Format
The model expects input in the following format:
```
Below is a job application context. Write a professional cover letter based on the provided information.

### Job Details:
Title: {job_title}
Preferred Qualifications: {preferred_quals}
Company: {company}

### Applicant Information:
Name: {applicant_name}
Past Experience: {past_exp}
Current Experience: {current_exp}
Skills: {skills}
Qualifications: {qualifications}

### Cover Letter:
```
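The template above can be filled in with a small helper. The function and field names below are illustrative (not part of this repository); only the resulting string format matters:

```python
# Hypothetical helper: renders the exact prompt format the model was tuned on.
def build_prompt(job: dict, applicant: dict) -> str:
    return (
        "Below is a job application context. Write a professional cover letter "
        "based on the provided information.\n\n"
        "### Job Details:\n"
        f"Title: {job['title']}\n"
        f"Preferred Qualifications: {job['preferred_quals']}\n"
        f"Company: {job['company']}\n\n"
        "### Applicant Information:\n"
        f"Name: {applicant['name']}\n"
        f"Past Experience: {applicant['past_exp']}\n"
        f"Current Experience: {applicant['current_exp']}\n"
        f"Skills: {applicant['skills']}\n"
        f"Qualifications: {applicant['qualifications']}\n\n"
        "### Cover Letter:\n"
    )

prompt = build_prompt(
    {"title": "Data Scientist", "preferred_quals": "Python, SQL", "company": "Acme"},
    {"name": "Jane Doe", "past_exp": "2 years in analytics",
     "current_exp": "ML engineer", "skills": "Python, PyTorch",
     "qualifications": "MS in Computer Science"},
)
print(prompt)
```

The generated text after `### Cover Letter:` is the model's completion.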
## Usage

### Local Deployment with Ollama
```bash
# Create a Modelfile pointing at the base model and the LoRA weights
echo "FROM llama3.2:3b
ADAPTER /path/to/downloaded/lora/weights" > Modelfile

# Build the custom model
ollama create coverletter-custom -f Modelfile
```
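Optionally, the Modelfile can also pin sampling settings and a system prompt. The values below are suggestions, not part of this repository:

```
FROM llama3.2:3b
ADAPTER /path/to/downloaded/lora/weights
PARAMETER temperature 0.7
SYSTEM "You write professional, personalized cover letters from the job and applicant details provided."
```

After `ollama create`, the model can be invoked with `ollama run coverletter-custom` and given a prompt in the input format shown above.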
## Associated Project
This model is part of the Letter Llama project, which provides a Streamlit interface for easy cover letter generation.
## Acknowledgments
- ShashiVish for the cover letter dataset
- Unsloth for the efficient training framework
- Ollama for the local model serving solution
## Citation

```bibtex
@misc{cover-letter-llama,
  author       = {Atharva},
  title        = {Cover Letter LLaMA 3.2 (LoRA-tuned)},
  year         = {2024},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/Atharva2099/cover-letter-llama-3.2-lora}}
}
```