---
library_name: transformers
tags:
- llama3.2
- math
- code
- text-generation-inference
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Flerovium-Llama-3B
pipeline_tag: text-generation
---
# **Flerovium-Llama-3B-GGUF**

> **Flerovium-Llama-3B** is a compact, general-purpose language model based on the **Llama 3.2** architecture. It is fine-tuned for a broad range of tasks, including **mathematical reasoning**, **code generation**, and **natural language understanding**, making it a versatile choice for developers, students, and researchers who need reliable performance from a lightweight model.

## Model Files

| File Name                                | Size    | Format |
|------------------------------------------|---------|--------|
| Flerovium-Llama-3B.BF16.gguf             | 6.43 GB | BF16   |
| Flerovium-Llama-3B.F16.gguf              | 6.43 GB | F16    |
| Flerovium-Llama-3B.Q4_K_M.gguf           | 2.02 GB | Q4_K_M |
| Flerovium-Llama-3B.Q5_K_M.gguf           | 2.32 GB | Q5_K_M |
| .gitattributes                           | 1.78 kB | -      |
| README.md                                | 927 B   | -      |
| config.json                              | 31 B    | JSON   |
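
A quick way to try one of the quantized files is `llama-cpp-python`, which can pull a GGUF directly from the Hub. The snippet below is a minimal sketch: the repository ID `prithivMLmods/Flerovium-Llama-3B-GGUF` is assumed from this card's title, and the filename is the Q4_K_M quant from the table above.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# The repo ID below is an assumption based on this card's title; adjust it if yours differs.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Flerovium-Llama-3B-GGUF",  # assumed repo ID
    filename="Flerovium-Llama-3B.Q4_K_M.gguf",        # quant from the table above
    n_ctx=4096,  # context window; raise or lower to fit your memory budget
)

# Chat-style generation via the chat-completion helper.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve 12 * (7 + 5) and explain the steps."}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```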

## Quants Usage

Quants are sorted by size, which does not necessarily reflect quality; IQ-quants are often preferable to similarly sized non-IQ quants.

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower perplexity is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
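
If you prefer to run the model with the llama.cpp CLI (or another GGUF runtime), you can fetch a single quant with `huggingface_hub` instead of cloning the whole repository. This is a sketch under the same assumption about the repository ID.

```python
# Sketch: download one quant file with huggingface_hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="prithivMLmods/Flerovium-Llama-3B-GGUF",  # assumed repo ID
    filename="Flerovium-Llama-3B.Q5_K_M.gguf",        # pick any quant from the table
)
print(path)  # local path you can pass to a GGUF runtime, e.g. `llama-cli -m <path>`
```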