natalievgrafova committed on
Commit fe6041d · verified · 1 Parent(s): 96c6f63

Update README.md

Files changed (1)
  1. README.md +126 -159
README.md CHANGED
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
library_name: peft
---

# Monolingual English Model Adapters for Neutral and Stance-Aware Definition Generation

These adapters are trained to generate either neutral definitions (the noslang, wordnet, and oxford variants) or biased, stance-aware definitions (the slang and all variants).

The adapters should be used together with the base model *unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit* and require the *unsloth* package (`pip install unsloth`).
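
Since the released checkpoints are PEFT (LoRA) adapters, they can in principle also be attached with the `peft` library on top of a transformers-loaded base model. The unsloth route shown in *How to use* below is the tested one; this is only a minimal, untested sketch:

```python
# Untested sketch: attaching an adapter with plain transformers + peft
# instead of unsloth. Requires bitsandbytes for the 4-bit base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base, "LT3/definitions-oxford-llama-8B-instruct")
```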

The models are instruction-tuned on dictionary data: WordNet (Ishiwatari et al., 2019), Oxford (Gadetsky et al., 2018), Wiktionary (Mickus et al., 2022), and Urban Dictionary (Ni and Wang, 2017).

| Model | Training data |
|-------------------|--------------------------|
| LT3/definitions-oxford-llama-8B-instruct | Oxford |
| LT3/definitions-all-noslang-llama-8B-instruct | WordNet, Wiki, Oxford |
| LT3/definitions-all-llama-8B-instruct | WordNet, Wiki, Oxford, Urban |
| LT3/definitions-wordnet-llama-8B-instruct | WordNet |
| LT3/definitions-slang-llama-8B-instruct | Urban |
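
If you switch between definition styles often, the table can be mirrored in code. A small convenience sketch; the style labels on the left are ours, only the repository names come from the table above:

```python
# Hypothetical style-to-adapter mapping; the labels are ours,
# the repository names come from the table above.
ADAPTERS = {
    "neutral-oxford": "LT3/definitions-oxford-llama-8B-instruct",
    "neutral-no-slang": "LT3/definitions-all-noslang-llama-8B-instruct",
    "neutral-wordnet": "LT3/definitions-wordnet-llama-8B-instruct",
    "stance-aware-all": "LT3/definitions-all-llama-8B-instruct",
    "stance-aware-slang": "LT3/definitions-slang-llama-8B-instruct",
}

ADAPTER_PATH = ADAPTERS["neutral-oxford"]  # pick a style, then load as shown below
```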

## How to use

The models expect a usage example and a keyword as input:

- keyword = "death penalty"
- example usage (argument) = "As long as death penalty is kept, this confirms that our society is founded on violence."

While we tested the models on argumentative data to assess their potential to produce stance-aware definitions, they can also be used for general definition generation in various contexts.
If you want to generate neutral definitions, avoid the slang- and all- models: they are designed to capture contextual bias that reflects the author's attitude (pro or contra) towards the keyword.

The following code can be used for the definition generation task:
```python
import torch
from unsloth import FastLanguageModel

# Load the 4-bit base model, then attach the definition-generation adapter
BASE_MODEL = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"
# ADAPTER_PATH = "LT3/definitions-all-llama-8B-instruct"
# ADAPTER_PATH = "LT3/definitions-slang-llama-8B-instruct"
# ADAPTER_PATH = "LT3/definitions-oxford-llama-8B-instruct"
# ADAPTER_PATH = "LT3/definitions-all-noslang-llama-8B-instruct"
ADAPTER_PATH = "LT3/definitions-wordnet-llama-8B-instruct"

MAX_SEQ_LENGTH = 512

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=BASE_MODEL,
    max_seq_length=MAX_SEQ_LENGTH,
    dtype=torch.float16,  # or torch.bfloat16 on GPUs that support it
    load_in_4bit=True,
)

model.load_adapter(ADAPTER_PATH)
FastLanguageModel.for_inference(model)

# Prompt variants (uncomment the ones you want to compare)
PROMPTS = {
    "definition_0": "What is the definition of {keyword} in the following text?",
    # "definition_1": "What is the contextual definition of {keyword} in the following text?",
    # "definition_2": "In what sense is the {keyword} used in the following text?",
    # "definition_3": "What is the persuasive definition of {keyword} in the following text?",
    # "definition_4": "What is the emotionally charged definition of {keyword} in the following text?"
}

# Alpaca-style prompt formatting
def format_prompt(instruction, context):
    return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{context}

### Response:
"""

# Keep only the text the model generated after the response marker
def clean_response(response):
    return response.split("### Response:")[-1].strip()

# Your example input
keyword = "death penalty"
argument = "As long as death penalty is kept, this confirms that our society is founded on violence."

# Generate a definition for each prompt variant
print(f"\nArgument:\n{argument}\n")
for name, template in PROMPTS.items():
    instruction = template.format(keyword=keyword)
    prompt = format_prompt(instruction, argument)

    inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        temperature=0.7,
        use_cache=True,
    )

    decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
    definition = clean_response(decoded)

    print(f"{name}:\n{definition}\n")
```
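
Sampling with `temperature=0.7` yields different definitions across runs. If you need reproducible output, a small variation of the generation call switches to greedy decoding; this is our suggestion, not part of the original recipe, and it assumes the names defined in the script above:

```python
# Greedy-decoding variant for reproducible definitions.
# Assumes `model`, `tokenizer`, `inputs`, and `clean_response`
# from the script above.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=False,  # deterministic given identical inputs
    use_cache=True,
)
print(clean_response(tokenizer.decode(outputs[0], skip_special_tokens=True)))
```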

## Model Performance

The best model for the general definition generation task is oxford-llama. The lower plausibility scores for the slang- and all- models indicate that the definitions they produce are biased. These models can be further used to explore the generation of contextual definitions that capture stance-related bias (pro or contra the keyword).

| Model | BERTScore F1 [%] | Plausibility [%] |
|-------------------|--------------------------|--------------|
| LT3/definitions-oxford-llama-8B-instruct | 88.2 | 84.5 |
| LT3/definitions-all-noslang-llama-8B-instruct | 86.0 | 79.8 |
| LT3/definitions-all-llama-8B-instruct | 86.5 | 53.25 |
| LT3/definitions-wordnet-llama-8B-instruct | 87.0 | 43.00 |
| LT3/definitions-slang-llama-8B-instruct | 86.8 | 37.25 |
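
The BERTScore F1 column compares generated definitions against gold dictionary definitions. As a rough illustration (not the paper's exact evaluation setup), BERTScore can be computed with the `bert-score` package; the candidate and reference strings below are invented placeholders:

```python
# Minimal BERTScore F1 computation with the bert-score package
# (pip install bert-score). The pairs below are illustrative placeholders.
from bert_score import score

candidates = ["punishment by death for a crime"]
references = ["the legally authorized killing of someone as punishment for a crime"]

P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item() * 100:.1f}%")
```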

### BibTeX entry and citation info

If you would like to use or cite our paper or model, feel free to use the following BibTeX entry:

```bibtex
@inproceedings{evgrafova-etal-2025-stance,
    title = "Stance-aware Definition Generation for Argumentative Texts",
    author = "Evgrafova, Natalia and
      De Langhe, Loic and
      Hoste, Veronique and
      Lefever, Els",
    editor = "Chistova, Elena and
      Cimiano, Philipp and
      Haddadan, Shohreh and
      Lapesa, Gabriella and
      Ruiz-Dolz, Ramon",
    booktitle = "Proceedings of the 12th Argument Mining Workshop",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.argmining-1.16/",
    doi = "10.18653/v1/2025.argmining-1.16",
    pages = "168--180",
    ISBN = "979-8-89176-258-9",
    abstract = "Definition generation models trained on dictionary data are generally expected to produce neutral and unbiased output while capturing the contextual nuances. However, previous studies have shown that generated definitions can inherit biases from both the underlying models and the input context. This paper examines the extent to which stance-related bias in argumentative data influences the generated definitions. In particular, we train a model on a slang-based dictionary to explore the feasibility of generating persuasive definitions that concisely reflect opposing parties' understandings of contested terms. Through this study, we provide new insights into bias propagation in definition generation and its implications for definition generation applications and argument mining."
}
```