
EMO-1.5B

EMO-1.5B is a powerful language model designed to engage in emotionally intelligent conversations.

Overview

EMO-1.5B is a conversational AI model with 1.5 billion parameters. It has been fine-tuned on a diverse corpus of emotional narratives, enabling it to perceive and respond to the emotional undertones in user inputs. Whether you're seeking comfort, motivation, or simply an empathetic listener, EMO-1.5B aims to provide emotional support and guidance.

Key Features

  • Emotional Intelligence: EMO-1.5B can recognize and respond to emotions such as sadness, joy, anger, and fear with appropriate emotional responses.
  • Contextual Understanding: The model considers the broader context of the conversation to provide relevant and emotionally resonant responses (a multi-turn example appears at the end of the Usage section).
  • Empathetic Dialogue: EMO-1.5B focuses on active listening, validating emotions, and offering compassionate advice or consolation when needed.
  • Adaptive Persona: The model can adapt its persona and communication style to the user's emotional state; in practice this is steered through the system message, as sketched after this list.
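
Persona and tone are set through the system message in the chat template (the Usage section below shows the full generation loop). A minimal sketch, using illustrative persona prompts of our own rather than prompts that ship with the model:

def build_messages(persona, user_text):
    # Pair a persona-setting system message with a user turn; the result
    # plugs directly into tokenizer.apply_chat_template in the Usage example.
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_text},
    ]

comforting = build_messages(
    "You are a gentle, reassuring companion who validates feelings before offering advice.",
    "I failed my driving test again and I feel useless.",
)
motivating = build_messages(
    "You are an upbeat coach who responds with encouragement and concrete next steps.",
    "I failed my driving test again and I feel useless.",
)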

Usage

You can interact with EMO-1.5B using the Transformers library, as in the example below (note that device_map="auto" additionally requires the accelerate package):

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to move the tokenized inputs onto

# Load the model in its stored dtype (FP16) and let accelerate place the weights.
model = AutoModelForCausalLM.from_pretrained(
    "OEvortex/EMO-1.5B",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/EMO-1.5B")

prompt = "Imagine you're helping someone who is feeling overwhelmed. How do you feel in this situation?"
messages = [
    {"role": "system", "content": "You are a helpful and emotional assistant that will always respond in EMO style"},
    {"role": "user", "content": prompt}
]
# Render the conversation with the model's chat template, appending the
# tokens that cue the assistant's reply (add_generation_prompt=True).
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Generate up to 512 new tokens, passing the attention mask explicitly.
generated_ids = model.generate(
    model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated reply is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
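
For the contextual understanding described above, you can carry the conversation across turns by appending the assistant's reply and the next user message to messages and repeating the same steps. A minimal sketch reusing the objects from the example above (the follow-up user message is invented for illustration):

# Append the assistant's reply and a new user turn, then regenerate.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Thank you. What is one small step I could take right now?"})

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(model_inputs.input_ids, attention_mask=model_inputs.attention_mask, max_new_tokens=512)
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])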
Model Details

  • Model size: 1.84B params (safetensors)
  • Tensor type: FP16
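
If you want to verify these numbers locally, a quick sketch (this downloads the full checkpoint, so it needs a few GB of disk and RAM):

import torch
from transformers import AutoModelForCausalLM

# Load in FP16, matching the checkpoint's stored dtype.
model = AutoModelForCausalLM.from_pretrained("OEvortex/EMO-1.5B", torch_dtype=torch.float16)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")  # expected ~1.84B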