🦅 🐍 FalconMamba 7B
This collection features the FalconMamba 7B base model, the instruction-tuned version, their 4-bit and GGUF variants, and the demo.
- FalconMamba demo Space (running on ZeroGPU)
Falcon Mamba: The First Competitive Attention-free 7B Language Model
Paper • arXiv:2410.05355
Note: FalconMamba technical report
tiiuae/falcon-mamba-7b
Text Generation
Note: First strong attention-free model for general-purpose usage, based on the Mamba-1 architecture
tiiuae/falcon-mamba-7b-instruct
Text Generation
Note: FalconMamba-7B fine-tuned on instruction data, for chat-style interaction with the model
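The base and instruct checkpoints above load through the standard `transformers` API. A minimal sketch of chatting with the instruct model, assuming a recent `transformers` release with FalconMamba support and a GPU with enough memory for the full-precision weights:

```python
# Sketch: chat-style generation with FalconMamba-7B-instruct via transformers.
# Assumes a recent transformers version with FalconMamba support and a GPU
# large enough to hold the full model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat-formatted prompt from the tokenizer's chat template and generate.
messages = [{"role": "user", "content": "What are state-space models?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
reply = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(reply)
```

The same code works for `tiiuae/falcon-mamba-7b` if you drop the chat template and tokenize a plain prompt instead.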
tiiuae/falcon-mamba-7b-4bit
Text Generation
Note: FalconMamba-7B quantized to 4-bit precision with the `bitsandbytes` library, for lower memory requirements and smaller GPUs
tiiuae/falcon-mamba-7b-instruct-4bit
Note: FalconMamba-7B-instruct quantized to 4-bit precision with the `bitsandbytes` library, for lower memory requirements and smaller GPUs
tiiuae/falcon-mamba-7b-instruct-BF16-GGUF
Note: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), with BF16 weights
tiiuae/falcon-mamba-7b-instruct-F16-GGUF
Note: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), with F16 weights
tiiuae/falcon-mamba-7b-instruct-Q8_0-GGUF
Note: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), quantized to Q8_0
tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF
Note: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), quantized to Q4_K_M
tiiuae/falcon-mamba-7b-BF16-GGUF
Note: FalconMamba-7B in GGUF format (compatible with llama.cpp), with BF16 weights
tiiuae/falcon-mamba-7b-F16-GGUF
Note: FalconMamba-7B in GGUF format (compatible with llama.cpp), with F16 weights
tiiuae/falcon-mamba-7b-Q8_0-GGUF
Note: FalconMamba-7B in GGUF format (compatible with llama.cpp), quantized to Q8_0
tiiuae/falcon-mamba-7b-Q4_K_M-GGUF
Note: FalconMamba-7B in GGUF format (compatible with llama.cpp), quantized to Q4_K_M
tiiuae/falcon-mamba-7b-pre-decay
Note: Pre-decay stage checkpoint, useful for continued pretraining