GGUF
Not-For-All-Audiences
nsfw
Inference Endpoints

THIS MODEL IS MADE FOR LEWD

SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED

This is a 4x13B MoE Llama2 model, one of the first (if not the first!).

As always, a big thanks to Charles Goddard, the brain behind all of these new Mixtral-style models, and his amazing tools!

WARNING: ALL THE "K" GGUF QUANTS OF MIXTRAL MODELS SEEM TO BE BROKEN; PREFER Q4_0, Q5_0 OR Q8_0!

Description

This repo contains quantized files of Llamix2-MLewd-4x13B, a very hot MoE (Mixture of Experts) built from Llama2 models.
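
Below is a minimal, unofficial sketch of loading one of these GGUF files with llama-cpp-python. The filename and settings are assumptions, not taken from this repo; per the warning above, it picks a Q5_0 file rather than a "K" quant.

# Minimal sketch (not from the model card): load a non-K quant with llama-cpp-python.
# The filename is an assumption; use whichever Q4_0/Q5_0/Q8_0 file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="llamix2-mlewd-4x13b.Q5_0.gguf",  # assumed local filename
    n_ctx=4096,      # Llama2 context window
    n_gpu_layers=0,  # raise to offload layers to a GPU if available
)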

Models used

The list of models used and their activator/theme can be found here

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
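
As a rough, self-contained sketch (again assuming llama-cpp-python and the assumed Q5_0 filename from the loading example above), the Alpaca template can be filled in and passed to the model like this; the instruction text is only a placeholder.

# Sketch: wrap an instruction in the Alpaca template and generate with llama-cpp-python.
from llama_cpp import Llama

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

llm = Llama(model_path="llamix2-mlewd-4x13b.Q5_0.gguf", n_ctx=4096)  # assumed filename

full_prompt = ALPACA_TEMPLATE.format(prompt="Introduce yourself in one short paragraph.")
output = llm(full_prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])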

Special thanks to Sushi and Shena ♥

If you want to support me, you can here.

GGUF
Model size: 38.5B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
