---
license: apache-2.0
tags:
- unsloth
- Uncensored
- text-generation-inference
- transformers
- llama
- trl
- roleplay
- conversational
datasets:
- iamketan25/roleplay-instructions-dataset
- N-Bot-Int/Iris-Uncensored-R1
- N-Bot-Int/Moshpit-Combined-R2-Uncensored
- N-Bot-Int/Mushed-Dataset-Uncensored
- N-Bot-Int/Muncher-R1-Uncensored
- N-Bot-Int/Millia-R1_DPO
language:
- en
base_model:
- N-Bot-Int/MiniMaid-L2
pipeline_tag: text-generation
metrics:
- character
---
# Support Us Through
  - [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV)
  - [Official Ko-Fi link!](https://ko-fi.com/nexusnetworkint)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/sTBfXV91g1pnAed24WdC7.png)
# GGUF Version
  **GGUF** with quants! Allowing you to run the model using KoboldCPP and other AI environments!


# Quantizations:
| Quant Type    | Benefits                                          | Cons                                              |
|---------------|---------------------------------------------------|---------------------------------------------------|
| **Q4_K_M**    | βœ… Smallest size (fastest inference)              | ❌ Lowest accuracy compared to other quants      |
|               | βœ… Requires the least VRAM/RAM                    | ❌ May struggle with complex reasoning           |
|               | βœ… Ideal for edge devices & low-resource setups   | ❌ Can produce slightly degraded text quality    |
| **Q5_K_M**    | βœ… Better accuracy than Q4, while still compact   | ❌ Slightly larger model size than Q4            |
|               | βœ… Good balance between speed and precision       | ❌ Needs a bit more VRAM than Q4                 |
|               | βœ… Works well on mid-range GPUs                   | ❌ Still not as accurate as higher-bit models    |
| **Q8_0**      | βœ… Highest accuracy (closest to full model)       | ❌ Requires significantly more VRAM/RAM          |
|               | βœ… Best for complex reasoning & detailed outputs  | ❌ Slower inference compared to Q4 & Q5          |
|               | βœ… Suitable for high-end GPUs & serious workloads | ❌ Larger file size (takes more storage)         |
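To get a feel for the size trade-offs in the table above, you can estimate the on-disk footprint of each quant from the model's parameter count. The bits-per-weight figures below are rough approximations (real llama.cpp quants mix block formats, so actual file sizes differ slightly), and the helper function is just an illustrative sketch, not part of this repo:

```python
# Rough GGUF file-size estimate from parameter count and quant type.
# Bits-per-weight values are approximations, not exact llama.cpp figures.
APPROX_BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,  # smallest, fastest, least accurate
    "Q5_K_M": 5.7,  # balanced size/precision
    "Q8_0": 8.5,    # largest, closest to full precision
}

def estimate_gguf_size_gb(n_params: float, quant: str) -> float:
    """Estimate on-disk size in GB for a model with n_params parameters."""
    bits = APPROX_BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1e9

# Example: compare quant sizes for a hypothetical 8B-parameter model.
for quant in APPROX_BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimate_gguf_size_gb(8e9, quant):.1f} GB")
```

This makes the table concrete: moving from Q4_K_M to Q8_0 roughly doubles the file size (and VRAM/RAM needed) in exchange for accuracy closer to the full-precision model.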

# Model Details:
  Read the full model details on Hugging Face:
  [Model Details Here!](https://huggingface.co/N-Bot-Int/MiniMaid-L3)