DeepSeek R1 Distill 7B Roleplay Model for DippyAI

This model is based on DeepSeek-R1-Distill-Qwen-7B and optimized for immersive roleplay conversations on the DippyAI Bittensor subnet.

Model Details

  • Base Model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  • Optimized for: Character roleplay, empathetic responses, creative dialogue
  • Chat Template: DeepSeek format (see the rendering snippet after this list)
  • Special Tokens: Includes roleplay-specific tokens
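
To see exactly what the DeepSeek chat template produces before tokenization, you can render a conversation to a plain string. A minimal sketch (the placeholder repo name matches the usage example below; tokenize=False and add_generation_prompt are standard apply_chat_template options):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("YOUR_HF_USERNAME/deepseek-roleplay-dippy-7b")

# Render without tokenizing to inspect the DeepSeek-style role markers and BOS token
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Describe your character."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)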

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("YOUR_HF_USERNAME/deepseek-roleplay-dippy-7b")
model = AutoModelForCausalLM.from_pretrained("YOUR_HF_USERNAME/deepseek-roleplay-dippy-7b")

# Example roleplay conversation
messages = [
    {"role": "system", "content": "You are a helpful, creative roleplay assistant."},
    {"role": "user", "content": "Let's roleplay as characters in a fantasy world."}
]

# Apply the DeepSeek chat template and append the assistant prompt for generation
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Sampling must be enabled for temperature to take effect
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt and special tokens
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
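
For GPU inference, the 7B checkpoint can also be loaded in half precision. A minimal sketch, assuming a CUDA device and that accelerate is installed for device_map="auto" (both arguments are standard transformers options):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOUR_HF_USERNAME/deepseek-roleplay-dippy-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load weights in float16 and let accelerate place them on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Set the scene for a fantasy tavern."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))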

DippyAI Mining Command

python neurons/miner.py \
  --wallet.name coldkey \
  --wallet.hotkey hotkey \
  --repo_namespace YOUR_HF_USERNAME \
  --repo_name deepseek-roleplay-dippy-7b \
  --chat_template deepseek \
  --online True \
  --netuid 231 \
  --subtensor.network test
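
Before pointing the miner at the repository, it can help to confirm that the upload resolves from the Hub and that the tokenizer carries a chat template. A minimal sanity-check sketch using standard transformers calls; it does not touch the subnet itself:

from transformers import AutoConfig, AutoTokenizer

repo_id = "YOUR_HF_USERNAME/deepseek-roleplay-dippy-7b"

# Both calls download only small config/tokenizer files, not the full weights
config = AutoConfig.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print(config.model_type)                    # qwen2 for the DeepSeek-R1-Distill-Qwen-7B base
print(tokenizer.chat_template is not None)  # True if the DeepSeek chat template was uploaded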

Roleplay Capabilities

  • Character consistency across turns (see the multi-turn sketch after this list)
  • Emotional intelligence
  • Creative storytelling
  • Immersive dialogue
  • Scene setting and atmosphere
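
Character consistency in practice comes from keeping the persona in the system prompt and carrying prior turns in the message history. A minimal multi-turn sketch that reuses the model and tokenizer from the Usage section (the persona text is purely illustrative):

# The persona lives in the system prompt; earlier turns stay in the history
messages = [
    {"role": "system", "content": "You are Kaelen, a weary elven ranger who speaks tersely and distrusts magic."},
    {"role": "user", "content": "Kaelen, what do you make of the lights over the ridge?"},
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Append the reply and the next user turn so the character stays in voice on the next call
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "And if the lights come closer tonight?"})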

Notes

  • Optimized for DippyAI's evaluation metrics
  • Supports various roleplay scenarios
  • Designed for empathetic, engaging conversations