---
language:
- en
- ar
- zh
- nl
- fr
- ru
- es
- tr
tags:
- sentiment-analysis
- text-classification
- aspect-based-sentiment-analysis
- deberta
- pyabsa
- efficient
- lightweight
- production-ready
- no-llm
license: mit
pipeline_tag: text-classification
widget:
- text: >-
    The user interface is brilliant, but the documentation is a total mess.
    [SEP] user interface [SEP]
- text: >-
    The user interface is brilliant, but the documentation is a total mess.
    [SEP] documentation [SEP]
---
# State-of-the-Art Multilingual Sentiment Analysis
## Multilingual: English, Chinese, Arabic, Dutch, French, Russian, Spanish, Turkish, and more
Tired of the high costs, slow latency, and massive computational footprint of Large Language Models? This is the sentiment analysis model you've been waiting for.
**`deberta-v3-large-absa-v1.1`** delivers **state-of-the-art accuracy** for fine-grained sentiment analysis with the speed, efficiency, and simplicity of a classic encoder model. It represents a paradigm shift in production-ready AI: maximum performance with minimum operational burden.
### Why This Model?
- **🎯 Wide Usage:** This model has already surpassed **one million downloads**, making it perhaps the most downloaded open-source ABSA model ever.
- **🏆 SOTA Performance:** Built on the powerful `DeBERTa-v3` architecture and fine-tuned with advanced, context-aware methods from PyABSA, this model achieves top-tier accuracy on complex sentiment tasks.
- **⚡ LLM-Free Efficiency:** No need for A100s or massive GPU clusters. This model runs inference at a fraction of the computational cost, enabling real-time performance on standard CPUs or modest GPUs.
- **💰 Lower Costs:** Slash your hosting and API call expenses. The small footprint and high efficiency translate directly to significant savings, whether you're a startup or an enterprise.
- **🚀 Production-Ready:** Lightweight, fast, and reliable. This model is built to be deployed at scale for applications that demand immediate and accurate sentiment feedback.
### Ideal Use Cases
This model excels where speed, cost, and precision are critical:
- **Real-time Social Media Monitoring:** Analyze brand sentiment towards specific product features as it happens.
- **Intelligent Customer Support:** Automatically route tickets based on the sentiment towards different aspects of a complaint (a minimal routing sketch follows this list).
- **Product Review Analysis:** Aggregate fine-grained feedback on thousands of reviews to identify precise strengths and weaknesses.
- **Market Intelligence:** Understand nuanced public opinion on key industry topics.
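As a concrete illustration of the customer-support routing idea above, here is a minimal sketch. The `ASPECTS` list, the `route_ticket` helper, the confidence threshold, and the queue names are hypothetical choices for this example; only the model ID and the text-pair pipeline call come from this card.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yangheng/deberta-v3-large-absa-v1.1")

# Hypothetical aspects a support team might track in incoming tickets.
ASPECTS = ["billing", "delivery", "customer service"]

def route_ticket(ticket_text: str) -> str:
    """Escalate a ticket if any tracked aspect draws confident negative sentiment."""
    for aspect in ASPECTS:
        result = classifier(ticket_text, text_pair=aspect)[0]
        if result["label"] == "Negative" and result["score"] > 0.8:
            return f"escalation:{aspect}"  # hypothetical queue-naming scheme
    return "standard"

print(route_ticket("I was double-charged and nobody answers my billing emails."))
# Expected: "escalation:billing", assuming a confident Negative prediction
```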
## How to Use
Getting started is incredibly simple. You can use the Hugging Face `pipeline` for a zero-effort implementation.
```python
from transformers import pipeline

# Load the classifier pipeline - it's that easy.
classifier = pipeline("text-classification", model="yangheng/deberta-v3-large-absa-v1.1")

sentence = "The food was exceptional, although the service was a bit slow."

# Analyze sentiment for the 'food' aspect
result_food = classifier(sentence, text_pair="food")
# Expected: [{'label': 'Positive', 'score': ...}]

# Analyze sentiment for the 'service' aspect of the same sentence
result_service = classifier(sentence, text_pair="service")
# Expected: [{'label': 'Negative', 'score': ...}]

# The model is multilingual; Chinese works the same way:
result_zh_neg = classifier("这部手机的性能差劲", text_pair="性能")
# "This phone's performance is terrible", aspect "performance" -> Negative
result_zh_pos = classifier("这台汽车的引擎推力强劲", text_pair="引擎")
# "This car's engine is powerful", aspect "engine" -> Positive
```
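If you need more control than the `pipeline` abstraction offers (batching, device placement, raw class probabilities), a minimal sketch with the underlying tokenizer and model looks like the following. The sentence/aspect pair encoding mirrors the pipeline call above, and the label names are read from the checkpoint's config rather than hard-coded.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "yangheng/deberta-v3-large-absa-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "The food was exceptional, although the service was a bit slow."
aspect = "service"

# Encode the sentence and aspect as a text pair, as the pipeline does internally.
inputs = tokenizer(sentence, aspect, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1).squeeze(0)
# id2label comes from the checkpoint config, so no label order is assumed here.
for i, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[i]}: {p:.3f}")
```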
## The Technology Behind the Performance
### Base Model
It starts with `microsoft/deberta-v3-large`, a highly optimized encoder whose disentangled attention mechanism improves efficiency and performance over the original BERT and RoBERTa models (a toy sketch of the idea follows below).
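For intuition, here is a toy, untrained sketch of the disentangled attention decomposition from the DeBERTa paper: content and relative-position information are projected separately and combined as content-to-content, content-to-position, and position-to-content scores. All tensors below are random and the dimensions are illustrative; the real `microsoft/deberta-v3-large` implementation adds multiple heads, masking, and further optimizations.

```python
import torch

torch.manual_seed(0)
seq_len, d, max_rel = 6, 16, 4

H = torch.randn(seq_len, d)      # content states, one per token
P = torch.randn(2 * max_rel, d)  # shared relative-position embeddings

Wq_c, Wk_c = torch.randn(d, d), torch.randn(d, d)  # content projections
Wq_r, Wk_r = torch.randn(d, d), torch.randn(d, d)  # position projections

Qc, Kc = H @ Wq_c, H @ Wk_c
Qr, Kr = P @ Wq_r, P @ Wk_r

# Bucketed relative distance delta(i, j) = clamp(i - j), shifted into a valid index range.
idx = torch.arange(seq_len)
delta = (idx[:, None] - idx[None, :]).clamp(-max_rel, max_rel - 1) + max_rel

c2c = Qc @ Kc.T                            # content-to-content
c2p = torch.gather(Qc @ Kr.T, 1, delta)    # content-to-position: Qc[i] . Kr[delta(i, j)]
p2c = torch.gather(Kc @ Qr.T, 1, delta).T  # position-to-content: Kc[j] . Qr[delta(j, i)]

attn = torch.softmax((c2c + c2p + p2c) / (3 * d) ** 0.5, dim=-1)
print(attn.shape)  # torch.Size([6, 6])
```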
### Fine-Tuning Architecture
It employs the FAST-LCF-BERT architecture from the PyABSA framework. This introduces a Local Context Focus (LCF) layer that dynamically guides the model to concentrate on the words and phrases most relevant to the given aspect, dramatically improving contextual understanding and accuracy (a minimal sketch of the weighting scheme follows below).
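To make the LCF idea concrete, here is a minimal sketch of a context dynamic weighting (CDW) scheme as described in the LCF literature: each token is scored by its semantic relative distance (SRD) to the aspect span, and distant context is attenuated before classification. The threshold and toy dimensions are illustrative assumptions, not the exact FAST-LCF-BERT configuration.

```python
import torch

def cdw_weights(seq_len: int, aspect_start: int, aspect_len: int,
                srd_threshold: int = 3) -> torch.Tensor:
    """Context dynamic weighting: full weight near the aspect, decaying with distance."""
    idx = torch.arange(seq_len, dtype=torch.float)
    # Semantic relative distance (SRD) of each token to the aspect span.
    dist_left = aspect_start - idx
    dist_right = idx - (aspect_start + aspect_len - 1)
    srd = torch.maximum(dist_left, dist_right).clamp(min=0)
    # Tokens within the SRD threshold keep full weight; the rest decay linearly.
    weights = torch.where(
        srd <= srd_threshold,
        torch.ones(seq_len),
        1.0 - (srd - srd_threshold) / seq_len,
    )
    return weights.clamp(min=0.0)

# Toy example: 12 tokens, single-token aspect at position 8 (e.g. "service").
w = cdw_weights(seq_len=12, aspect_start=8, aspect_len=1)
print(w)
# Hidden states would then be scaled token-wise: local_features = hidden_states * w[:, None]
```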
### Training Data
This model was trained on a robust, aggregated corpus of over 30,000 unique samples (augmented to ~180,000 examples) from canonical ABSA datasets, including SemEval-2014, SemEval-2016, MAMS, and more. The standard test sets were excluded to ensure fair and reliable benchmarking.
## Citation
If you use this model in your research or application, please cite the foundational work on the PyABSA framework.
### BibTeX Citation
```bibtex
@inproceedings{DBLP:conf/cikm/0008ZL23,
author = {Heng Yang and Chen Zhang and Ke Li},
title = {PyABSA: {A} Modularized Framework for Reproducible Aspect-based Sentiment Analysis},
booktitle = {Proceedings of the 32nd {ACM} International Conference on Information and Knowledge Management, {CIKM} 2023},
pages = {5117--5122},
publisher = {{ACM}},
year = {2023},
doi = {10.1145/3583780.3614752}
}
@article{YangZMT21,
author = {Heng Yang and Biqing Zeng and Mayi Xu and Tianxing Wang},
title = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable Sentiment Dependency Learning},
journal = {CoRR},
volume = {abs/2110.08604},
year = {2021},
url = {https://arxiv.org/abs/2110.08604},
}
```