Update README.md

---
base_model: meta-llama/Llama-3.2-1B
library_name: peft
license: mit
datasets:
- dair-ai/emotion
language:
- en
metrics:
- accuracy
---

# **Model Card for LLaMA-3-2-LoRA-EmotionTune**

## **Model Details**

- **Model Description:**
  LLaMA-3-2-LoRA-EmotionTune is a causal language model fine-tuned using Low-Rank Adaptation (LoRA) on a curated emotion dataset. The dataset consists of user-generated text annotated with emotion labels (sadness, joy, love, anger, fear, or surprise). This fine-tuning enables the model to perform efficient emotion classification while preserving the core strengths of the base LLaMA-3.2-1B-Instruct model.

- **Developed by:** Taha Majlesi

- **Funded by (optional):** tahamajs

- **Shared by (optional):** tahamajs

- **Model type:** Causal language model with LoRA-based fine-tuning for emotion classification

- **Language(s) (NLP):** English

- **License:** MIT

- **Finetuned from model (optional):** LLaMA-3.2-1B-Instruct

- **Model Sources (optional):** Original LLaMA model and publicly available emotion datasets

- **Repository:** [https://huggingface.co/your-username/LLaMA-3-2-LoRA-EmotionTune](https://huggingface.co/your-username/LLaMA-3-2-LoRA-EmotionTune)

- **Paper (optional):** For LoRA: *LoRA: Low-Rank Adaptation of Large Language Models* (Hu et al., 2021). For LLaMA: [Reference paper details if available]

- **Demo (optional):** [Link to interactive demo if available]

---

## **Uses**

- **Direct Use:** Emotion classification and sentiment analysis on short text inputs.

- **Downstream Use (optional):** Can be integrated into affective computing systems, chatbots, content moderation pipelines, or any application requiring real-time sentiment detection.

- **Out-of-Scope Use:** Not recommended for critical decision-making systems (e.g., mental health diagnostics) or for applications in languages other than English without further adaptation.

---

## **Bias, Risks, and Limitations**

- **Bias and Risks:**
  The model may inherit biases present in the training data, potentially misclassifying nuanced emotions or reflecting cultural biases in emotional expression.

- **Limitations:**
  - Limited to six predefined emotion categories.
  - Performance may degrade for longer texts or in ambiguous contexts.
  - The model is fine-tuned on a specific emotion dataset and may not generalize well across all domains.

---

## **Recommendations**

Users (both direct and downstream) should be aware of the model’s inherent biases and limitations. We recommend additional validation and fine-tuning before deploying this model in sensitive or high-stakes environments.

---

## **How to Get Started with the Model**

To load and use the model with Hugging Face Transformers and the PEFT library, try the following code snippet:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# Load the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "your-username/LLaMA-3-2-LoRA-EmotionTune")

# Example usage: generate a continuation for a short input text
input_text = "I feel so happy today!"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
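
The snippet above returns free-form generated text. If you want a discrete emotion label instead, one option is to score each of the six labels as a continuation of a prompt and pick the most likely one. The sketch below (continuing from the `tokenizer` and `model` defined above) is only illustrative: the `PROMPT` template and the leading-space label tokenization are assumptions, since the exact prompt format used during fine-tuning is not documented in this card.

```python
import torch

# Hypothetical prompt template; adjust it to match the format used during fine-tuning.
PROMPT = "Text: {text}\nEmotion:"
LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def classify_emotion(text: str) -> str:
    prompt_ids = tokenizer(PROMPT.format(text=text), return_tensors="pt").input_ids
    scores = {}
    for label in LABELS:
        label_ids = tokenizer(" " + label, add_special_tokens=False, return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, label_ids], dim=-1)
        with torch.no_grad():
            logits = model(input_ids=input_ids).logits
        # Logits at position i predict token i + 1, so shift by one.
        log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
        label_positions = range(prompt_ids.shape[1] - 1, input_ids.shape[1] - 1)
        # Sum the log-probabilities of the label tokens given the prompt.
        scores[label] = sum(
            log_probs[0, pos, input_ids[0, pos + 1]].item() for pos in label_positions
        )
    return max(scores, key=scores.get)

print(classify_emotion("I feel so happy today!"))  # expected to print a label such as "joy"
```

Longer labels accumulate more negative log-probability terms, so for serious use you may want to length-normalize the scores.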

> **Demo:** An interactive demo for LLaMA-3-2-LoRA-EmotionTune is available on Hugging Face Spaces at [https://huggingface.co/spaces/your-username/demo-name](https://huggingface.co/spaces/your-username/demo-name).

---

## **Training Details**

- **Training Data:**
  A curated emotion dataset consisting of user-generated text annotated with emotion labels (sadness, joy, love, anger, fear, surprise).

- **Training Procedure:**
  Fine-tuning was performed on the LLaMA-3.2-1B-Instruct model using the LoRA method, which adapts selected attention layers with low-rank update matrices, training only a small subset of parameters while keeping the rest of the model frozen.

- **Preprocessing (optional):**
  Text normalization, tokenization with the Hugging Face tokenizer, and train/validation splitting.

- **Training Hyperparameters** (a configuration sketch using these values follows this list):
  - **LoRA rank (r):** 16
  - **lora_alpha:** 32
  - **lora_dropout:** 0.1
  - **Learning rate:** 2e-5
  - **Batch size:** 32
  - **Epochs:** Early stopping was applied around epoch 10 to prevent overfitting.

- **Speeds, Sizes, Times (optional):**
  Training was conducted on an NVIDIA Tesla T4 GPU for approximately 10–12 hours.
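
For reference, here is a minimal sketch of how a comparable run could be set up with the `datasets`, `peft`, and `transformers` APIs using the hyperparameters listed above. The base checkpoint id follows the prose of this card, and the target modules, prompt template, sequence length, and output directory are illustrative assumptions; none of this is taken from the original training script.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA configuration with the hyperparameters from this card;
# the target modules are an assumption, not taken from the original run.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable

# dair-ai/emotion provides `text` plus an integer `label`
# (sadness, joy, love, anger, fear, surprise).
dataset = load_dataset("dair-ai/emotion")
label_names = dataset["train"].features["label"].names

def to_features(example):
    # Hypothetical completion-style format; the actual template may differ.
    prompt = f"Text: {example['text']}\nEmotion: {label_names[example['label']]}"
    return tokenizer(prompt, truncation=True, max_length=128)

tokenized = dataset.map(to_features, remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-3-2-lora-emotiontune",
        learning_rate=2e-5,
        per_device_train_batch_size=32,
        num_train_epochs=10,  # the original run stopped early around epoch 10
        eval_strategy="epoch",
        save_strategy="epoch",
        logging_steps=50,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```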

---

## **Evaluation**

- **Testing Data:**
  A held-out subset of the emotion-annotated dataset (e.g., 100 samples).

- **Factors:**
  The evaluation focused on the model’s ability to classify emotions within short text outputs.

- **Metrics:**
  Accuracy and micro-averaged F1 score (see the sketch after this list).

- **Results:**
  The fine-tuned model achieved an accuracy and micro F1 score of approximately 31% on short-text generation tasks (5–100 token outputs), outperforming the base and instruction-tuned models.
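
For reference, accuracy and micro-averaged F1 can be computed from gold and predicted labels with scikit-learn; the label lists below are illustrative placeholders, not the actual evaluation data.

```python
from sklearn.metrics import accuracy_score, f1_score

# Illustrative placeholders; the real evaluation used a held-out split of the emotion dataset.
gold = ["joy", "anger", "sadness", "joy", "fear"]
pred = ["joy", "sadness", "sadness", "joy", "surprise"]

print("Accuracy:", accuracy_score(gold, pred))
print("Micro F1:", f1_score(gold, pred, average="micro"))
```

Note that for single-label classification scored over all classes, micro-averaged F1 is mathematically equal to accuracy, which is consistent with this card reporting the same value for both metrics.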

---

## **Technical Specifications**

- **Model Architecture and Objective:**
  Based on LLaMA-3.2-1B-Instruct, the model is fine-tuned with LoRA specifically to classify text into emotion categories.

- **Compute Infrastructure:**
  Training and inference were run on the hardware and software stack listed below.

- **Hardware:**
  NVIDIA Tesla T4 or equivalent GPU.

- **Software:**
  Python, PyTorch, Hugging Face Transformers, PEFT 0.14.0.

---

## **Citation**

**BibTeX:**

```bibtex
@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J and others},
  journal={arXiv preprint arXiv:2106.09685},
  year={2021}
}
```

**APA:**

Hu, E. J., et al. (2021). LoRA: Low-Rank Adaptation of Large Language Models. *arXiv preprint arXiv:2106.09685*.

---

## **Additional Information**

- **Model Card Authors:**
  Taha Majlesi

- **Model Card Contact:**

- **Framework Versions:**
  - PEFT: 0.14.0
  - Transformers: [version]
  - PyTorch: [version]

---