PEFT
The base model OuteAI/Lite-Oute-1-300M-Instruct was fine-tuned on the tweet sentiment subset of cardiffnlp/tweet_eval to classify a tweet's sentiment as positive, neutral, or negative.
We used a system prompt to instruct the model:
SYSTEM PROMPT:
You are a tweet sentiment classifier. For each tweet input, analyze its sentiment and output exactly one word: "negative", "neutral", or "positive". Do not include any extra text.
Note, however, that the base model was not trained to return only the sentiment label, so it does not always follow this instruction.
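A minimal sketch of how the classification input might be assembled for a chat model. The `build_messages` helper is hypothetical; only the system prompt text comes from above:

```python
# The system prompt text is the one quoted above; the helper function
# name and chat-message structure are illustrative.
SYSTEM_PROMPT = (
    'You are a tweet sentiment classifier. For each tweet input, analyze '
    'its sentiment and output exactly one word: "negative", "neutral", or '
    '"positive". Do not include any extra text.'
)

def build_messages(tweet: str) -> list[dict]:
    """Pair the fixed system prompt with one tweet as the user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": tweet},
    ]
```

The resulting list can be passed to a tokenizer's `apply_chat_template` before generation.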
Unlike the previous task, the PEFT method was changed to DoRA, which does not simply learn a low-rank update: it decomposes each adapted weight into a magnitude vector and a directional matrix and trains the two components separately. As in the previous task, the model was modified by adapting the k_proj and v_proj layers.
Hyperparameters: batch_size = 16, rank = 8, alpha = 16, lr = 3e-5
The fine-tuned model achieved a macro F1-score of 0.34 on the test set, compared to 0.06 for the initial model.
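Macro F1 averages the per-class F1 scores with equal weight, so the score treats the three classes equally regardless of their frequency. A minimal pure-Python sketch of the metric (the default label names are assumptions; this is equivalent to scikit-learn's `f1_score` with `average="macro"`):

```python
def macro_f1(y_true, y_pred, labels=("negative", "neutral", "positive")):
    """Unweighted mean of per-class F1 scores."""
    per_class_f1 = []
    for label in labels:
        # Count true positives, false positives, and false negatives per class.
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class_f1.append(f1)
    return sum(per_class_f1) / len(per_class_f1)
```

Because rare classes count as much as common ones, a model that collapses to predicting a single label scores poorly on macro F1 even if that label is the majority class.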
Base model: OuteAI/Lite-Oute-1-300M-Instruct