---
language:
- en
pipeline_tag: text-generation
tags:
- quantized
- 2-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text2text-generation
- mistral
- roleplay
- merge
base_model: DZgas/GIGABATEMAN-7B
model_name: GIGABATEMAN-7B
model_creator: DZgas
quantized_by: DZgas
---
This is a GGUF variant of the GIGABATEMAN-7B model. Use it with koboldcpp (do not use GPT4All).

The most UNcensored model I know of.
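A minimal launch sketch for koboldcpp. The `.gguf` filename below is an assumption — substitute whichever quantization file you actually downloaded from this repo:

```shell
# Hypothetical filename: replace with the quant you downloaded (Q2..Q8)
./koboldcpp --model GIGABATEMAN-7B.Q4_K_M.gguf --contextsize 4096
```

Lower-bit quants (2-bit, 4-bit) need less RAM but lose some quality; the 6-bit and 8-bit files stay closer to the original weights.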