Fine-tuned from vinai/phobert-base on visolex/VLSP2018-ABSA-Restaurant for joint aspect detection and sentiment classification (shared heads; a sketch of the output layout follows the label lists below).

Model Details

  • Base Model: vinai/phobert-base
  • Dataset: visolex/VLSP2018-ABSA-Restaurant
  • Fine-tuning framework: HuggingFace Transformers
  • Model type: Transformer-based model for aspect-based sentiment analysis (multi-label classification).

Aspect Labels:

  • AMBIENCE#GENERAL
  • DRINKS#PRICES
  • DRINKS#QUALITY
  • DRINKS#STYLE&OPTIONS
  • FOOD#PRICES
  • FOOD#QUALITY
  • FOOD#STYLE&OPTIONS
  • LOCATION#GENERAL
  • RESTAURANT#GENERAL
  • RESTAURANT#MISCELLANEOUS
  • RESTAURANT#PRICES
  • SERVICE#GENERAL

Sentiment Labels:

  • POSITIVE
  • NEGATIVE
  • NEUTRAL
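
The joint setup scores all 12 aspects in a single forward pass: as the Usage snippet below shows, the model's logits have shape [batch, 12 aspects, 3 sentiments + 1 "none" class]. The real head is implemented in the repository's remote code (TransformerForABSA); the sketch below, with a hypothetical JointABSAHead class and phobert-base's hidden size of 768, only illustrates that output layout.

import torch
import torch.nn as nn

class JointABSAHead(nn.Module):
    """Hypothetical shared head: for every aspect, one distribution over
    the three sentiment labels plus a 'none' (aspect absent) class."""
    def __init__(self, hidden_size: int = 768, num_aspects: int = 12, num_sentiments: int = 3):
        super().__init__()
        self.num_aspects = num_aspects
        self.num_classes = num_sentiments + 1  # POSITIVE, NEGATIVE, NEUTRAL, none
        self.classifier = nn.Linear(hidden_size, num_aspects * self.num_classes)

    def forward(self, pooled_output: torch.Tensor) -> torch.Tensor:
        # pooled_output: [batch, hidden_size] -> logits: [batch, num_aspects, num_sentiments + 1]
        logits = self.classifier(pooled_output)
        return logits.view(-1, self.num_aspects, self.num_classes)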

Hyperparameters

  • Batch size: 32
  • Learning rate: 3e-5
  • Epochs: 100
  • Max sequence length: 256
  • Early stopping patience: 5 (see the training sketch below)
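
The training script itself is not part of this card; the sketch below shows how these hyperparameters could be wired into a HuggingFace Trainer. The evaluation cadence, monitored metric, and optimizer defaults are assumptions, not details of the original run.

from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

# Hyperparameters from the list above. Evaluation/save strategy and the
# monitored metric are assumptions. max_length=256 is applied at tokenization time.
training_args = TrainingArguments(
    output_dir="phobert-absa-restaurant",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=100,
    eval_strategy="epoch",          # `evaluation_strategy` on older Transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,    # required for EarlyStoppingCallback
    metric_for_best_model="eval_loss",
)

# trainer = Trainer(
#     model=model,                  # a TransformerForABSA instance built on vinai/phobert-base
#     args=training_args,
#     train_dataset=train_ds,       # tokenized visolex/VLSP2018-ABSA-Restaurant splits
#     eval_dataset=dev_ds,
#     callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
# )
# trainer.train()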

Usage

import torch
from transformers import AutoTokenizer, AutoModel

# Aspect and sentiment label lists
aspect_labels = [
    "AMBIENCE#GENERAL", "DRINKS#PRICES", "DRINKS#QUALITY", "DRINKS#STYLE&OPTIONS",
    "FOOD#PRICES", "FOOD#QUALITY", "FOOD#STYLE&OPTIONS", "LOCATION#GENERAL",
    "RESTAURANT#GENERAL", "RESTAURANT#MISCELLANEOUS", "RESTAURANT#PRICES",
    "SERVICE#GENERAL"
]
sentiment_labels = ["POSITIVE", "NEGATIVE", "NEUTRAL"]

# Load tokenizer and model (trust_remote_code is needed so the custom TransformerForABSA class is used)
repo = "visolex/phobert-absa-restaurant"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)
model.eval()

def predict_absa_multi(
    text: str,
    aspect_labels: list[str],
    sentiment_labels: list[str],
    threshold: float = 0.5
) -> list[tuple[str,str]]:
    inputs = tokenizer(
        text,
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=256
    )
    inputs.pop("token_type_ids", None)

    with torch.no_grad():
        out = model(**inputs)

    # out.logits has shape [1, A, S+1]: A aspects, S sentiments plus one 'none' class
    logits = out.logits.squeeze(0)
    probs = torch.softmax(logits, dim=-1)

    num_s = len(sentiment_labels)
    none_id = probs.size(-1) - 1
    results = []

    for i, asp in enumerate(aspect_labels):
        prob_i = probs[i]
        pred_id = int(prob_i.argmax().item())

        if pred_id != none_id and pred_id < num_s:
            score = prob_i[pred_id].item()
            if score >= threshold:
                results.append((asp, sentiment_labels[pred_id].lower()))

    return results

# Example usage ("The food here is very tasty, but the price is a bit high and the service is also quite slow.")
text = "Món ăn ở đây rất ngon nhưng giá hơi mắc một chút và phục vụ cũng khá chậm."
preds = predict_absa_multi(text, aspect_labels, sentiment_labels, threshold=0.2)
print(preds)
# Expected output similar to: [('FOOD#QUALITY', 'positive'), ('FOOD#PRICES', 'negative'), ('SERVICE#GENERAL', 'negative')]
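
The helper scores one review at a time; for several reviews you can simply loop over it (the inputs below are illustrative, not taken from the dataset):

# Score a few reviews with the same helper
reviews = [
    "Không gian quán đẹp, nhân viên thân thiện.",  # "Nice space, friendly staff."
    "Đồ uống dở mà giá thì cao.",                  # "Drinks are bad and prices are high."
]
for review in reviews:
    print(review, "->", predict_absa_multi(review, aspect_labels, sentiment_labels, threshold=0.2))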