Model Card for llm-course-hw3-dora
This model is a fine-tuned version of OuteAI/Lite-Oute-1-300M-Instruct on the cardiffnlp/tweet_eval dataset. It classifies the sentiment of a tweet into one of three classes: positive, neutral, or negative.
The model was fine-tuned with DoRA (Weight-Decomposed Low-Rank Adaptation) to make training more memory- and time-efficient. Low-rank adaptation was applied only to the K and V projection matrices of the attention layers.
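The `LinearWithDoRA` wrapper referenced in the usage snippet comes from the course code and is not bundled in this repo. A minimal PyTorch sketch of such a layer (the initialization and scaling choices here are assumptions, not the exact course implementation):

```python
import torch
import torch.nn as nn

class LinearWithDoRA(nn.Module):
    """Hypothetical DoRA wrapper: freezes the base linear layer, learns a
    low-rank directional update (LoRA) plus a per-column magnitude vector."""

    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.scaling = alpha / r
        # Standard LoRA factors: B starts at zero, so training begins at the base weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        # DoRA magnitude vector, initialized to the column norms of the base weight
        self.m = nn.Parameter(base.weight.norm(p=2, dim=0, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        combined = self.base.weight + self.scaling * (self.B @ self.A)
        # Decompose into a unit-norm direction and a learned magnitude
        direction = combined / combined.norm(p=2, dim=0, keepdim=True)
        return nn.functional.linear(x, self.m * direction, self.base.bias)
```

With `B` initialized to zero, the wrapped layer initially reproduces the base layer exactly, so fine-tuning starts from the pretrained behavior.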
Training procedure
The model was trained on cardiffnlp/tweet_eval for three epochs with batch_size=32, rank=16, alpha=32, and learning_rate=1e-5.
It achieved an F1-score of 0.53 on the test set.
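The card does not state which F1 averaging was used. For reference, a macro-averaged F1 over the three classes can be computed with scikit-learn; the labels below are illustrative, not the actual evaluation data:

```python
from sklearn.metrics import f1_score

# Illustrative gold labels and predictions (not from the real test set)
golds = ["positive", "negative", "negative", "neutral"]
preds = ["positive", "neutral", "negative", "neutral"]

# average="macro" gives each class equal weight regardless of frequency
score = f1_score(golds, preds, average="macro")
print(f"macro F1: {score:.2f}")
```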
Comparison
Before fine-tuning:
"Sorry bout the stream last night I crashed out but will be on tonight for sure. Then back to Minecraft in pc tomorrow night." -> "positive" (correct label: neutral)
After fine-tuning:
"Sorry bout the stream last night I crashed out but will be on tonight for sure. Then back to Minecraft in pc tomorrow night." -> "neutral. GCSE English \n"
Although the output contains redundant trailing tokens, the predicted label is correct.
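One way to score such outputs is to extract the leading class name and ignore anything after it. A sketch of that rule (the function name and matching logic are assumptions, not the course's actual scoring code):

```python
def extract_label(generated: str) -> str:
    # The fine-tuned model may append redundant tokens after the label
    # (e.g. 'neutral. GCSE English \n'), so match only the leading class name.
    text = generated.strip().strip('"').lower()
    for label in ("positive", "neutral", "negative"):
        if text.startswith(label):
            return label
    return "unknown"
```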
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from safetensors.torch import load_file
from huggingface_hub import hf_hub_download

# REPO_NAME, DEVICE, dataset, apply_peft_to_module, LinearWithDoRA and eval
# are defined in the accompanying training code.
model = AutoModelForCausalLM.from_pretrained(f"{REPO_NAME}-dora", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"{REPO_NAME}-dora")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

# Re-create the DoRA adapters on the K and V projections before loading weights
apply_peft_to_module(model, LinearWithDoRA, r=16, alpha=32, target_submodules=["v_proj", "k_proj"])
model.to(DEVICE)

# Load the fine-tuned weights; strict=False because only the adapted tensors are restored
path = hf_hub_download(f"{REPO_NAME}-dora", "model.safetensors")
state_dict = load_file(path)
model.load_state_dict(state_dict, strict=False)

DoRA_saved_model_accuracy = eval(model, dataset["test"], tokenizer)
print(f"Accuracy after DoRA training: {DoRA_saved_model_accuracy:.2f}")
```
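The `apply_peft_to_module` helper above comes from the course notebook and is not bundled here. A hypothetical implementation that swaps the targeted `nn.Linear` submodules for a wrapper class could look like:

```python
import torch.nn as nn

def apply_peft_to_module(model, wrapper_cls, r, alpha, target_submodules):
    """Hypothetical helper: replace every nn.Linear child whose name is in
    target_submodules (e.g. "k_proj", "v_proj") with wrapper_cls(child, r, alpha)."""
    replacements = []
    for _, module in model.named_modules():
        for child_name, child in module.named_children():
            if child_name in target_submodules and isinstance(child, nn.Linear):
                replacements.append((module, child_name, child))
    # Mutate after collecting, so modules are never modified mid-iteration
    for module, child_name, child in replacements:
        setattr(module, child_name, wrapper_cls(child, r=r, alpha=alpha))
```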
Framework versions
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Datasets: 3.3.1
- Tokenizers: 0.21.0