Models Quantized with Q2B0
- nebuxcloud/Falcon3-3B-Instruct-1.58bit-GGUF • Text Generation • 3B
- nebuxcloud/Falcon3-3B-Base-1.58bit-GGUF • Text Generation • 3B
- nebuxcloud/Falcon3-7B-Instruct-1.58bit-GGUF • Text Generation • 7B
- nebuxcloud/Falcon3-7B-Base-1.58bit-GGUF • Text Generation • 7B