See the original model card for detailed documentation, examples, presets, and more.

This repo contains only the Q8 GGUF quant. For other GGUF options, please refer to one of these:

If you are interested in EXL2 quants, check these out:

GGUF details:

- Model size: 12.2B params
- Architecture: llama
- Quantization: 8-bit
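As a rough illustration of what the metadata above implies for disk and memory footprint: llama.cpp's Q8_0 format stores one int8 weight per parameter plus an fp16 scale for every 32-weight block. The sketch below estimates the resulting size for 12.2B parameters; the real file is slightly larger because of metadata and any tensors kept at higher precision.

```python
def q8_0_size_gb(n_params: float) -> float:
    """Approximate GGUF file size for a Q8_0 quant.

    Q8_0 packs 32 int8 weights plus one fp16 scale per block,
    i.e. 34 bytes per 32 weights = 1.0625 bytes per weight.
    """
    bytes_per_weight = 1 + 2 / 32
    return n_params * bytes_per_weight / 1e9


# ~13 GB for a 12.2B-parameter model
print(round(q8_0_size_gb(12.2e9), 1))
```

This is only a ballpark figure for judging hardware fit; actual VRAM use at inference time also depends on context length and KV-cache settings.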


Model tree: dreamgen/lucid-v1-nemo-GGUF is one of 7 quantized variants of the base model.