Update README.md
README.md CHANGED
@@ -8,7 +8,7 @@ library_name: peft

These models are trained to generate neutral (noslang, wordnet, oxford) and biased (slang, all), stance-aware definitions.

-These adapters should be used together with the base model *unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit* and
+These adapters should be used together with the base model *unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit* and require *unsloth* installation.

The models are instruction-tuned on the dictionary data:
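The README's own setup snippet is not part of the hunks shown in this diff. Purely as a minimal sketch of what the adapter/base-model pairing above implies, the 4-bit base model could be loaded with unsloth and the LoRA adapter attached with peft; the adapter name is taken from the evaluation table further down, and `max_seq_length` is an assumption:

```python
# Hedged sketch, not the repository's own code. Assumes unsloth and peft are
# installed (e.g. pip install unsloth); max_seq_length is an assumption.
from unsloth import FastLanguageModel
from peft import PeftModel

# Load the 4-bit base model named in the hunk above.
base_model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach one of the definition adapters (here the oxford variant from the table below).
model = PeftModel.from_pretrained(base_model, "LT3/definitions-oxford-llama-8B-instruct")

# Switch unsloth into its faster inference mode.
FastLanguageModel.for_inference(model)
```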
@@ -36,7 +36,7 @@ The models expect a usage example and a keyword as input:

- keyword = "death penalty"
- example usage (argument) = "As long as death penalty is kept, this confirms that our society is founded on violence."

-While we tested the models on the argumentative data to
+While we tested the models on the argumentative data to explore their potential to produce stance-aware definitions, they can be used for a general definition generation task for various contexts.
If you want to generate neutral definitions, avoid using the slang- and all- models, as these models aim to capture contextual bias that reflects the attitude of the author (pro, contra) towards the keyword.

The following code can be used for the definition generation task:
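That generation code sits outside the changed lines and is not reproduced in this diff. As an illustration only, reusing `model` and `tokenizer` from the loading sketch above, a call with the keyword and usage example from this hunk might look roughly as follows; the prompt wording is an assumption, not one of the repository's PROMPTS templates:

```python
# Illustrative only: the exact PROMPTS templates from the README are not shown
# in this diff, so the instruction text below is an assumption.
keyword = "death penalty"
usage = ("As long as death penalty is kept, this confirms that our society "
         "is founded on violence.")

messages = [{
    "role": "user",
    "content": f'Give a definition of "{keyword}" as used in this context: "{usage}"',
}]

# Build the chat-formatted prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64, do_sample=False)
# Strip the prompt tokens and keep only the generated definition.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```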
@@ -126,6 +126,9 @@ for name, template in PROMPTS.items():

The best model for the general definition generation task is oxford-llama. The lower plausibility score for slang- and all- models means the definitions are biased. These models can be further used to explore the generation of
contextual definitions that capture stance-related bias (pro or contra the keyword).

+The plausibility score is based on human annotations.

| Model | BERTScoreF1 [%] | Plausibility [%] |
|-------------------|--------------------------|--------------|
| LT3/definitions-oxford-llama-8B-instruct | 88.2 | 84.5 |
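Of the two columns, plausibility comes from human annotation, while BERTScore F1 is automatic. As a rough sketch of how such a score can be computed with the bert-score package (the paper's exact settings, such as the underlying scoring model and any rescaling, are not given here, and the strings below are placeholders rather than data from the evaluation):

```python
# Rough sketch with the bert-score package; default settings are an assumption
# and the candidate/reference strings are placeholders, not data from the paper.
from bert_score import score

candidates = ["generated definition for the keyword"]        # model outputs
references = ["gold dictionary definition for the keyword"]  # reference definitions

P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {100 * F1.mean().item():.1f}%")
```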
@@ -139,7 +142,7 @@ contextual definitions that capture stance-related bias (pro or contra the keyword).

### BibTeX entry and citation info

-If you
+If you use our models, feel free to copy the following BibTeX code to cite the paper:

```bibtex
@inproceedings{evgrafova-etal-2025-stance,