Quantized using the default exllamav3 (0.0.1) quantization process.


Join our Discord! https://discord.gg/Nbv9pQ88Xb


BeaverAI proudly presents...

Star Command R 32B v1 🌟

An RP finetune of Command R 08-2024


Links

Usage

  • Cohere Instruct format or Text Completion (prompt sketch below)
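
The Cohere instruct layout for Command R models wraps each turn in explicit special tokens. Below is a minimal sketch of that layout in Python, assuming the standard Command R tokens (<BOS_TOKEN>, <|START_OF_TURN_TOKEN|>, <|SYSTEM_TOKEN|>, <|USER_TOKEN|>, <|CHATBOT_TOKEN|>, <|END_OF_TURN_TOKEN|>); the build_prompt helper and the example strings are illustrative only and not part of this repository.

```python
# Minimal sketch of the Cohere (Command R) instruct prompt layout.
# The special-token strings follow Cohere's published chat template;
# build_prompt and the example messages are illustrative, not part of this repo.

def build_prompt(system: str, user: str) -> str:
    return (
        "<BOS_TOKEN>"
        f"<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system}<|END_OF_TURN_TOKEN|>"
        f"<|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user}<|END_OF_TURN_TOKEN|>"
        "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"  # the model continues from here
    )

prompt = build_prompt(
    system="You are a roleplay partner. Stay in character.",
    user="The tavern door creaks open...",
)
print(prompt)
```

If you load the bundled tokenizer with a library such as transformers, tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) should produce an equivalent string, so hand-building the prompt is mainly useful for raw text-completion backends.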

Special Thanks

  • Mr. Gargle for the GPUs! Love you, brotha.
Safetensors · Model size: 10.4B params · Tensor types: FP16, I16

Model tree: MetaphoricalCode/Star-Command-R-32B-v1-exl3-4bpw-hb6 is one of 10 quantizations of Star Command R 32B v1.