---
language:
- en
pipeline_tag: text-generation
tags:
- quantized
- 2-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text2text-generation
- mistral
- roleplay
- merge
base_model: DZgas/GIGABATEMAN-7B
model_name: GIGABATEMAN-7B
model_creator: DZgas
quantized_by: DZgas
---
<img src="logo.png">

This is a GGUF variant of the <a href="https://huggingface.co/DZgas/GIGABATEMAN-7B?not-for-all-audiences=true">GIGABATEMAN-7B</a> model. Use it with <a href="https://github.com/LostRuins/koboldcpp/releases">koboldcpp</a> (do not use GPT4ALL).
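
If you would rather load the GGUF file programmatically instead of through the koboldcpp UI, the sketch below uses llama-cpp-python. The quantization filename is an assumption; substitute whichever .gguf file from this repo you downloaded.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and that a quant such as GIGABATEMAN-7B.Q4_K_M.gguf has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="GIGABATEMAN-7B.Q4_K_M.gguf",  # hypothetical filename, use your downloaded quant
    n_ctx=4096,                               # context window, adjust to your available RAM
)

output = llm(
    "Write a short scene between two rival detectives.",
    max_tokens=200,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```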

The most UNcensored model that I know of.