Mixtral-4x7B-DPO-RPChat is a model made primarily for RP (roleplay): two RP models, one occult model, and one DPO model merged into a MoE, with Toppy as the base.

The DPO model was included to help produce more human-sounding replies.

This is my first try at doing this, so don't hesitate to give feedback!

WARNING: ALL THE "K" GGUF QUANTS OF MIXTRAL MODELS SEEM TO BE BROKEN; PREFER Q4_0, Q5_0 OR Q8_0!
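As a minimal sketch of what that means in practice, the snippet below loads a non-K GGUF quant with llama-cpp-python. The GGUF filename, context size, and generation settings are assumptions on my part; the quantized files come from a separate quantization repo, not from the fp16 weights in this one.

```python
# Sketch: loading a non-K GGUF quant with llama-cpp-python.
# The filename below is hypothetical; use the actual Q4_0/Q5_0/Q8_0
# file from the quantized repo.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-4x7b-dpo-rpchat.Q5_0.gguf",  # hypothetical filename
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if it fits
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce yourself as a grumpy dwarven blacksmith.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```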

Description

This repo contains fp16 files of Mixtral-4x7B-DPO-RPChat.

Models used

The list of models used and their activator/theme can be found here

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
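As a hedged example of how this template is applied, here is a short Python sketch using Hugging Face transformers with the fp16 weights from this repo. The instruction text and sampling settings are placeholders, not recommendations.

```python
# Sketch: prompting the fp16 model with the Alpaca template via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

model_id = "Undi95/Mixtral-4x7B-DPO-RPChat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

text = ALPACA_TEMPLATE.format(
    prompt="Play the part of a sarcastic tavern keeper greeting a new customer."
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)

# Strip the prompt tokens and print only the generated reply.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```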

If you want to support me, you can do so here.
