---
library_name: transformers
tags:
  - llama3.2
  - math
  - code
  - text-generation-inference
license: apache-2.0
language:
  - en
base_model:
  - prithivMLmods/Flerovium-Llama-3B
pipeline_tag: text-generation
---

# Flerovium-Llama-3B-GGUF

Flerovium-Llama-3B is a compact, general-purpose language model built on the Llama 3.2 architecture. It is fine-tuned for a broad range of tasks, including mathematical reasoning, code generation, and natural language understanding, making it a versatile choice for developers, students, and researchers who need reliable performance in a lightweight model.

## Model Files

| File Name | Size | Format |
|-----------|------|--------|
| Flerovium-Llama-3B.BF16.gguf | 6.43 GB | BF16 |
| Flerovium-Llama-3B.F16.gguf | 6.43 GB | F16 |
| Flerovium-Llama-3B.Q4_K_M.gguf | 2.02 GB | Q4_K_M |
| Flerovium-Llama-3B.Q5_K_M.gguf | 2.32 GB | Q5_K_M |
| .gitattributes | 1.78 kB | – |
| README.md | 927 B | – |
| config.json | 31 B | JSON |
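
These GGUF files can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python` to fetch the Q4_K_M quant and run a short math-style prompt; the repo id `prithivMLmods/Flerovium-Llama-3B-GGUF` and the generation settings are illustrative assumptions, not values prescribed by the model card.

```python
# Minimal sketch: assumes `huggingface_hub` and `llama-cpp-python` are installed
# and that the repo id below matches this model page.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the 4-bit quant listed in the table above.
model_path = hf_hub_download(
    repo_id="prithivMLmods/Flerovium-Llama-3B-GGUF",  # assumed repo id
    filename="Flerovium-Llama-3B.Q4_K_M.gguf",
)

# Load the GGUF file; context size and thread count are illustrative.
llm = Llama(model_path=model_path, n_ctx=4096, n_threads=8)

# Simple math-style prompt to exercise the model's reasoning tuning.
output = llm(
    "Question: What is the sum of the first 10 positive integers?\nAnswer:",
    max_tokens=64,
    temperature=0.2,
)
print(output["choices"][0]["text"])
```

Smaller quants (Q4_K_M, Q5_K_M) trade some accuracy for lower memory use, while the BF16/F16 files preserve the original weights at roughly three times the size.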

## Quants Usage

The quants are sorted by size, which does not necessarily reflect quality; IQ-quants are often preferable to similar-sized non-IQ quants.

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

*(quant comparison graph by ikawrakow not reproduced here)*
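
If you are unsure which quant fits your memory budget, the sketch below lists the GGUF files in the repo by size before downloading anything. It assumes the `huggingface_hub` package and the same repo id as above; `HfApi.model_info(..., files_metadata=True)` is used so file sizes are populated.

```python
# Minimal sketch: list the GGUF quants in this repo sorted by file size,
# as a starting point for choosing one that fits your hardware.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info(
    "prithivMLmods/Flerovium-Llama-3B-GGUF",  # assumed repo id
    files_metadata=True,                      # populate per-file sizes
)

gguf_files = [s for s in info.siblings if s.rfilename.endswith(".gguf")]
for sibling in sorted(gguf_files, key=lambda s: s.size or 0):
    size_gb = (sibling.size or 0) / 1e9
    print(f"{sibling.rfilename}: {size_gb:.2f} GB")
```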