---
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: bench:Get your food to go, find a bench, and kick back with a plate of dumplings.
- text: comparison:Frankly, when you compare what you can have here for lunch, versus
    McDs or so many other sandwich shops in the city, there is no comparison.
- text: ton:We had crawfish boiled and despite making a mess, it was a ton of fun
    and quite tasty as well.
- text: traffic noise:It is set far from the small street it's on, and there is no
    traffic noise.
- text: food:The only thing more wonderful than the food (which is exceptional) is
    the service.
metrics:
- f1_micro
- f1_macro
- precision_macro
- recall_macro
pipeline_tag: text-classification
library_name: setfit
inference: false
base_model: sentence-transformers/all-MiniLM-L6-v2
model-index:
- name: SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: f1_micro
      value: 0.8516772438803264
      name: F1_Micro
    - type: f1_macro
      value: 0.8441110611976916
      name: F1_Macro
    - type: precision_macro
      value: 0.8482610861593047
      name: Precision_Macro
    - type: recall_macro
      value: 0.8409649439480325
      name: Recall_Macro
---
# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

This model was trained within the context of a larger system for ABSA, which works as follows (a brief illustration of the candidate-extraction step appears after the list):
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
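
Step 1 feeds this model candidate spans paired with their sentence in a `span:sentence` format, as in the examples under Model Labels below. The snippet that follows is only a rough illustration of that first stage, assuming spaCy noun chunks as the candidate source; the real `AbsaModel` pipeline performs extraction and filtering internally and may select or trim spans differently.
```python
# Rough illustration of step 1 (candidate extraction), not the exact SetFit internals.
import spacy

nlp = spacy.load("en_core_web_lg")  # the spaCy model listed in this card

sentence = "The food was great, but the venue is just way too busy."
doc = nlp(sentence)

# Pair each noun-chunk candidate with its sentence; this SetFit model then labels
# each "span:sentence" pair as "aspect" or "no aspect" (step 2).
candidates = [f"{chunk.text}:{sentence}" for chunk in doc.noun_chunks]
print(candidates)
```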
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_lg
- **SetFitABSA Aspect Model:** [ronalhung/setfit-absa-restaurants-aspect](https://huggingface.co/ronalhung/setfit-absa-restaurants-aspect)
- **SetFitABSA Polarity Model:** [ronalhung/setfit-absa-restaurants-polarity](https://huggingface.co/ronalhung/setfit-absa-restaurants-polarity)
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect | <ul><li>'staff:But the staff was so horrible to us.'</li><li>"food:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"food:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li></ul> |
| no aspect | <ul><li>"factor:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"deficiencies:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"Teodora:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li></ul> |
## Evaluation
### Metrics
| Label | F1_Micro | F1_Macro | Precision_Macro | Recall_Macro |
|:--------|:---------|:---------|:----------------|:-------------|
| **all** | 0.8517 | 0.8441 | 0.8483 | 0.8410 |
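
The reported scores correspond to standard micro- and macro-averaged F1, precision and recall over the binary `aspect` / `no aspect` labels. The sketch below uses scikit-learn and toy labels (unrelated to the real test set) purely to illustrate the definitions.
```python
# Toy illustration of the metric definitions; values here are unrelated to the table above.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = ["aspect", "aspect", "no aspect", "no aspect", "aspect"]
y_pred = ["aspect", "no aspect", "no aspect", "no aspect", "aspect"]

print("f1_micro:", f1_score(y_true, y_pred, average="micro"))
print("f1_macro:", f1_score(y_true, y_pred, average="macro"))
print("precision_macro:", precision_score(y_true, y_pred, average="macro"))
print("recall_macro:", recall_score(y_true, y_pred, average="macro"))
```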
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"ronalhung/setfit-absa-restaurants-aspect",
"ronalhung/setfit-absa-restaurants-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
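For a single sentence, `preds` is expected to be a list of dictionaries, one per detected aspect span, each with `span` and `polarity` keys (roughly `[{'span': 'food', 'polarity': 'positive'}, {'span': 'venue', 'polarity': 'negative'}]`); the exact output may vary with the SetFit version.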
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 19.4181 | 45 |

| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 167 |
| aspect | 254 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
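
As a hedged sketch (not the exact training script used for this model), a comparable model could be trained with `AbsaTrainer` and the hyperparameters above. The three-row dataset is a placeholder, and some listed options (loss, evaluation steps, best-model loading) are left at their defaults or omitted for brevity.
```python
# Illustrative training sketch only; dataset rows are placeholders, not the real training data.
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Initialise both the aspect and polarity models from the same Sentence Transformer body.
model = AbsaModel.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2",
    spacy_model="en_core_web_lg",
)

# AbsaTrainer expects "text", "span", "label" and "ordinal" columns and derives the
# "aspect" / "no aspect" candidate pairs for this aspect model internally.
train_dataset = Dataset.from_dict({
    "text": [
        "But the staff was so horrible to us.",
        "The food is uniformly exceptional.",
        "The service was wonderful.",
    ],
    "span": ["staff", "food", "service"],
    "label": ["negative", "positive", "positive"],
    "ordinal": [0, 0, 0],
})

args = TrainingArguments(
    batch_size=64,
    num_epochs=5,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()

# Placeholder output directories for the aspect and polarity halves of the model.
model.save_pretrained("setfit-absa-restaurants-aspect", "setfit-absa-restaurants-polarity")
```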
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0007 | 1 | 0.3998 | - |
| 0.0345 | 50 | 0.3187 | 0.3072 |
| 0.0689 | 100 | 0.2744 | 0.2600 |
| 0.1034 | 150 | 0.2494 | 0.2504 |
| 0.1378 | 200 | 0.2459 | 0.2408 |
| 0.1723 | 250 | 0.2242 | 0.2210 |
| 0.2068 | 300 | 0.1802 | 0.1815 |
| 0.2412 | 350 | 0.1085 | 0.1787 |
| 0.2757 | 400 | 0.0435 | 0.1918 |
| 0.3101 | 450 | 0.0143 | 0.1832 |
| 0.3446 | 500 | 0.0063 | 0.1971 |
| 0.3790 | 550 | 0.004 | 0.1945 |
| 0.4135 | 600 | 0.002 | 0.2005 |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.2
- Sentence Transformers: 4.1.0
- spaCy: 3.8.7
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
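
To approximately reproduce this environment, the packages above can be pinned as shown below (the PyTorch build, `2.6.0+cu124`, should be installed separately to match your CUDA setup):
```bash
pip install "setfit==1.1.2" "sentence-transformers==4.1.0" "spacy==3.8.7" \
    "transformers==4.52.4" "datasets==3.6.0" "tokenizers==0.21.1"
python -m spacy download en_core_web_lg
```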
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->