Ninja-v1-RP-expressive-v2-GGUF

Overview

This is a quantized GGUF version of Aratako/Ninja-v1-RP-expressive-v2. Please refer to the original model for the license and other details.
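A minimal sketch of downloading one quantization and running it locally with llama.cpp. The GGUF filename below is an assumption; check the repository's file list for the actual names of the quantized files.

```shell
# Download one quantized file from the repo (the Q4_K_M filename is an
# assumption -- substitute the actual filename from the repository).
huggingface-cli download Aratako/Ninja-v1-RP-expressive-v2-GGUF \
  Ninja-v1-RP-expressive-v2-Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp's CLI.
./llama-cli -m Ninja-v1-RP-expressive-v2-Q4_K_M.gguf -cnv
```

Lower-bit quantizations (2-bit, 3-bit) trade output quality for smaller files and lower memory use; higher-bit ones (8-bit, 16-bit) are closer to the original weights.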

Format: GGUF
Model size: 7.24B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Base model: Elizezen/Antler-7B