Enferlain/ellmo (Hugging Face model repository, 1 like)
Formats: Safetensors, GGUF
No model card has been provided for this repository.
Downloads last month: 118
Format: GGUF
Model size: 13B params
Architecture: llama
Hardware compatibility: estimation available on the Hugging Face page (login required).
Quantization variants (GGUF files; a repeated quant type and size indicates multiple files of that type):

1-bit
  IQ1_S_V   7.5 GB
  IQ1_S_V   14.5 GB  (x2)

2-bit
  IQ2_XXS   19.8 GB

4-bit
  IQ4_XS    4.84 GB  (x2)
  IQ4_XS    5.77 GB
  Q4_K_S    7.41 GB
  Q4_K      10.8 GB
  Q4_0      26.4 GB  (x2)
  Q4_K_M    20.7 GB
  Q4_K_M    21.6 GB
  Q4_K_M    4.37 GB
  Q4_K_M    7.78 GB
  Q4_K_M    20.4 GB

5-bit
  Q5_K_M    9.23 GB  (x28)
  Q5_K_M    14.2 GB  (x2)
  Q5_K_M    4.78 GB
  Q5_K_M    16.6 GB  (x3)

6-bit
  Q6_K      8.81 GB  (x4)
  Q6_K      7.37 GB
  Q6_K      13.8 GB
  Q6_K      5.94 GB  (x3)
  Q6_K      5.53 GB  (x2)
  Q6_K      10.7 GB  (x2)

8-bit
  Q8_0      13.8 GB  (x5)
  Q8_0      11.4 GB
  Q8_0      7.7 GB   (x19)
(One additional variant is not shown in the listing above.)
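A quant type alone does not determine file size: it also depends on the parameter count of the underlying model, and the wide spread of sizes within a single quant type here (e.g. Q4_K_M files from 4.37 GB to 26.4 GB) suggests the repository holds quantizations of several different base models, so the listed 13B figure applies to only some files. As a rough sanity check, the effective bits per weight of a GGUF file can be estimated from its size and the parameter count; this is a minimal sketch (the `bits_per_weight` helper is illustrative, not part of any library), and only approximate because GGUF files include metadata and typically keep some tensors at higher precision:

```python
def bits_per_weight(file_size_gb: float, n_params: float) -> float:
    """Rough effective bits per weight of a GGUF quant.

    Approximate only: GGUF files bundle metadata, and some tensors
    (e.g. embeddings) are usually stored at higher precision.
    Assumes decimal gigabytes (1 GB = 1e9 bytes), as Hugging Face displays.
    """
    return file_size_gb * 1e9 * 8 / n_params

# The 9.23 GB Q5_K_M files, checked against the listed 13B parameter count:
print(round(bits_per_weight(9.23, 13e9), 2))  # prints 5.68
```

A result near 5.7 bits/weight is plausible for a 5-bit K-quant of a 13B model, whereas the same calculation on the 26.4 GB Q4_0 files would imply over 16 bits/weight, reinforcing that those files belong to a larger base model.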
Inference Providers: this model isn't deployed by any Inference Provider.