## Test set performance
- Top 1 Accuracy: 0.4346
- Top 3 Accuracy: 0.7677
- Top 1 Macro F1: 0.2668
- Top 3 Macro F1: 0.5669
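
For reference, metrics of this form can be reproduced along the following lines. This is a minimal sketch, not the evaluation script used for the numbers above: `y_true` and `probs` are hypothetical arrays of gold label ids and per-class probabilities, and the convention used here for "Top 3 Macro F1" (a prediction counts as correct if the gold label is among the three highest-scoring classes) is an assumption.

```python
import numpy as np
from sklearn.metrics import f1_score, top_k_accuracy_score

# Hypothetical evaluation outputs: gold label ids and per-class probabilities.
y_true = np.array([0, 3, 5, 1])   # shape (n_samples,)
probs = np.random.rand(4, 8)      # shape (n_samples, n_classes), 8 ESConv strategies

top1_pred = probs.argmax(-1)
top1_acc = (top1_pred == y_true).mean()
top3_acc = top_k_accuracy_score(y_true, probs, k=3, labels=np.arange(8))

top1_f1 = f1_score(y_true, top1_pred, average="macro",
                   labels=np.arange(8), zero_division=0)

# Assumed top-3 F1 convention: a sample is treated as correctly predicted
# if the gold label appears among the three highest-probability classes.
top3_idx = np.argsort(-probs, axis=-1)[:, :3]
hit = (top3_idx == y_true[:, None]).any(-1)
top3_pred = np.where(hit, y_true, top1_pred)
top3_f1 = f1_score(y_true, top3_pred, average="macro",
                   labels=np.arange(8), zero_division=0)
```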
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0"
model_name = "heegyu/TinyLlama-augesc-context-strategy"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval().to(device)

# Dialogue context: user ("usr") and system ("sys") turns,
# with the system's strategy given in brackets.
example = """usr: Hi
sys[Question]: Hello, how are you today?
usr: I was scolded by my parents yesterday"""

inputs = tokenizer(example, return_tensors="pt").to(device)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
label = probs.argmax(-1).item()

# ESConv support strategies, in label-id order.
ESCONV_STRATEGY = [
    "Question",
    "Restatement or Paraphrasing",
    "Reflection of feelings",
    "Self-disclosure",
    "Affirmation and Reassurance",
    "Providing Suggestions",
    "Information",
    "Others",
]
id2label = {i: k for i, k in enumerate(ESCONV_STRATEGY)}
print(id2label[label])
```
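
If you want more than the single best strategy (the top-3 metrics above are reported under the same view), the sketch below continues the example and prints the three most probable strategies. It reuses `model`, `inputs`, and `id2label` from the snippet above; the formatting is illustrative only.

```python
# Top-3 strategy candidates for the same example.
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)

top3 = torch.topk(probs, k=3, dim=-1)
for p, i in zip(top3.values[0].tolist(), top3.indices[0].tolist()):
    print(f"{id2label[i]}: {p:.3f}")
```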