Model Card for llm-course-hw3-lora

This model is a fine-tuned version of OuteAI/Lite-Oute-1-300M-Instruct on the cardiffnlp/tweet_eval dataset. It classifies the sentiment of a tweet into one of three classes: positive, neutral, or negative.

It was fine-tuned with LoRA to make training more memory- and time-efficient. The low-rank adapters were applied only to the V and K projection matrices of the attention layers.
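
For reference, here is a minimal sketch of what the `LinearWithLoRA` wrapper and the `apply_peft_to_module` helper used in the Usage section below might look like. This is an illustrative reconstruction of the standard LoRA technique, not the exact course implementation:

```python
import math

import torch
import torch.nn as nn


class LinearWithLoRA(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.scaling = alpha / r
        # A starts random, B starts at zero, so the adapter is initially a no-op.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) / math.sqrt(r))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        for p in base.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


def apply_peft_to_module(model, wrapper, r, alpha, target_submodules):
    # Swap every targeted projection (here v_proj and k_proj) for its LoRA-wrapped version.
    for module in list(model.modules()):
        for name, child in list(module.named_children()):
            if name in target_submodules and isinstance(child, nn.Linear):
                setattr(module, name, wrapper(child, r=r, alpha=alpha))
```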

Training procedure

This model was trained on cardiffnlp/tweet_eval for one epoch with batch_size=32, rank=8, alpha=16, and learning_rate=1e-5.
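
The exact training script is not included in this card. The sketch below shows one way the stated hyperparameters could be wired up; the prompt template, the `format_example` helper, and computing the loss over the full sequence are assumptions, not the course's actual code:

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("cardiffnlp/tweet_eval", "sentiment")
label_names = ["negative", "neutral", "positive"]


def format_example(batch):
    # Hypothetical prompt template: tweet followed by its gold sentiment word.
    return [
        f"Tweet: {text}\nSentiment: {label_names[label]}"
        for text, label in zip(batch["text"], batch["label"])
    ]


# Only the LoRA parameters were left trainable, so the optimizer sees a small set of weights.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5
)

model.train()
for batch in DataLoader(dataset["train"], batch_size=32, shuffle=True):  # one epoch
    inputs = tokenizer(
        format_example(batch), return_tensors="pt", padding=True, truncation=True
    ).to(DEVICE)
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```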

The model achieved an F1 score of 0.48 on the test set.

Comparison

Before:

Input: Chase Headley's RBI double in the 8th inning off David Price snapped a Yankees streak of 33 consecutive scoreless innings against Blue Jays
Output: "The text is positive. \n The text is neutral."
Correct label: neutral

After:

Input: Chase Headley's RBI double in the 8th inning off David Price snapped a Yankees streak of 33 consecutive scoreless innings against Blue Jays
Output: "neutral"

Usage

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from transformers import AutoModelForCausalLM, AutoTokenizer

# REPO_NAME, DEVICE, apply_peft_to_module, LinearWithLoRA, dataset and eval
# are defined in the accompanying course code (see the sketches in this card).

# Load the base checkpoint and tokenizer.
model = AutoModelForCausalLM.from_pretrained(f"{REPO_NAME}-lora", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"{REPO_NAME}-lora")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

# Re-create the LoRA wrappers on the V and K projections, then load the fine-tuned weights.
apply_peft_to_module(model, LinearWithLoRA, r=8, alpha=16, target_submodules=["v_proj", "k_proj"])
model.to(DEVICE)

path = hf_hub_download(f"{REPO_NAME}-lora", "model.safetensors")
state_dict = load_file(path)
model.load_state_dict(state_dict, strict=False)

# eval here is the course's evaluation helper, not the Python builtin.
lora_model_accuracy = eval(model, dataset["test"], tokenizer)
print(f"Accuracy after LoRA training: {lora_model_accuracy:.2f}")
```

Framework versions

  • Transformers: 4.47.0
  • PyTorch: 2.5.1+cu121
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0