bhadresh-ft-enc
Fine-tuned version of bhadresh-savani/distilbert-base-uncased-emotion
on a mix of clean and imperceptibly perturbed emotion classification data. This model is designed to improve robustness against character-level adversarial attacks while retaining high accuracy on clean text.
Model Description
- Base model: distilbert-base-uncased-emotion
- Fine-tuning data: vlwk/emotion-perturbed — clean and perturbed emotion classification inputs (perturbation types: homoglyphs, deletions, reorderings, invisible characters), perturbation budget 1 to 5
- Training epochs: 3
- Batch size: 16
- Learning rate: 2e-5
- Validation split: 10%
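To make the perturbation types above concrete, here is a minimal sketch of how such character-level edits can be applied under a budget. This is illustrative only: the homoglyph table, the zero-width character, and the budget logic are assumptions, not the exact procedure used to build vlwk/emotion-perturbed.

```python
import random

# Assumed mappings for illustration; the real dataset may use a larger table.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic look-alikes
INVISIBLE = "\u200b"  # zero-width space

def perturb(text: str, budget: int, seed: int = 0) -> str:
    """Apply up to `budget` character-level edits: homoglyph swaps,
    deletions, adjacent reorderings, or invisible-character insertions."""
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(budget):
        op = rng.choice(["homoglyph", "delete", "reorder", "invisible"])
        i = rng.randrange(len(chars))
        if op == "homoglyph" and chars[i] in HOMOGLYPHS:
            chars[i] = HOMOGLYPHS[chars[i]]
        elif op == "delete" and len(chars) > 1:
            del chars[i]
        elif op == "reorder" and i < len(chars) - 1:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        elif op == "invisible":
            chars.insert(i, INVISIBLE)
    return "".join(chars)
```

With budget 0 the text is returned unchanged; higher budgets accumulate edits, matching the 1-to-5 range used for the dataset.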
Intended Use
This model is intended for robust emotion classification under adversarial character-level noise. It is particularly useful for evaluating or defending against imperceptible text perturbations.
Usage
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification
import torch

tokenizer = DistilBertTokenizerFast.from_pretrained("vlwk/bhadresh-ft-enc")
model = DistilBertForSequenceClassification.from_pretrained("vlwk/bhadresh-ft-enc")

inputs = tokenizer("I'm feeling great today!", return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(-1).item()
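The logits returned above are unnormalized scores; to interpret them as class probabilities, apply a softmax. A minimal pure-Python sketch on dummy logits (the six-way emotion label set of the base model is assumed; real logits come from the model call above):

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy logits for illustration only (not real model output).
logits = [0.1, 4.2, 0.3, -1.0, 0.0, 0.2]
probs = softmax(logits)
predicted = max(range(len(probs)), key=probs.__getitem__)
```

In practice you can recover the human-readable label via `model.config.id2label[predicted_class]`.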