Goal:

Emotion detection from natural language.

Usage:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the fine-tuned model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("beyoru/Emotion_detection")
model = AutoModelForCausalLM.from_pretrained("beyoru/Emotion_detection").to("cuda")

messages = [
    {"role" : "user", "content" : "I want to play this game"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize = False,
    add_generation_prompt = True,
    enable_thinking = False, # you can set this to True, but the model will not produce thinking output
)

_ = model.generate(
    **tokenizer(text, return_tensors = "pt").to("cuda"),
    streamer = TextStreamer(tokenizer, skip_prompt = True),
)
# Output: neutral
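
To capture the predicted label as a string instead of streaming it to stdout, you can decode only the newly generated tokens. This is a minimal sketch using standard transformers calls; the max_new_tokens value is an illustrative assumption, and it assumes the model replies with a single emotion word such as "neutral":

# Generate without a streamer and decode only the newly generated tokens
inputs = tokenizer(text, return_tensors = "pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens = 16)  # assumed budget for a short label
new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]  # drop the prompt tokens
label = tokenizer.decode(new_tokens, skip_special_tokens = True).strip()
print(label)  # e.g. neutral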

Base model: Qwen3-0.6B

Model size: 596M parameters (Safetensors, F32)


Dataset used to train beyoru/Emotion_detection