These are the GGUF versions of plamo-2-1b.

Built with PLaMo

Currently, only the F32 model works, and it requires https://github.com/ggml-org/llama.cpp/pull/14560, which has not yet been merged.
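Until that PR is merged, you can check out its branch and build llama.cpp locally. A minimal sketch, assuming a Unix-like environment with git and cmake; the GGUF file name is an assumption, so substitute the actual file from this repository:

```shell
# Fetch llama.cpp and check out the branch from PR #14560 (adds plamo2 support).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git fetch origin pull/14560/head:plamo2-support
git checkout plamo2-support

# Build the CLI tools.
cmake -B build
cmake --build build --config Release -j

# Run inference with the F32 GGUF file
# (file name is an assumption; use the one downloaded from this repo).
./build/bin/llama-cli -m plamo-2-1b-F32.gguf -p "Hello" -n 32
```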

Format: GGUF
Model size: 1.5B params
Architecture: plamo2


Model tree for yt-koike/plamo-2-1b-gguf

Base model: pfnet/plamo-2-1b (this model is a quantization of it)