
Flerovium-Llama-3B

Flerovium-Llama-3B is a compact, general-purpose language model built on Meta's Llama 3.2 architecture. It is fine-tuned for a broad range of tasks, including mathematical reasoning, code generation, and natural language understanding, making it a versatile choice for developers, students, and researchers who need reliable performance in a lightweight model.

GGUF: https://huggingface.co/prithivMLmods/Flerovium-Llama-3B-GGUF
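
For fully local inference on the GGUF weights, the snippet below is a minimal sketch using llama-cpp-python; the quant filename pattern is an assumption, so check the GGUF repository for the exact files available.

from llama_cpp import Llama

# Download a quantized GGUF file from the Hub and load it for local inference.
# The filename glob is an assumption; pick whichever quant the GGUF repo provides.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Flerovium-Llama-3B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=4096,               # context window for this session
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant for math and coding."},
        {"role": "user", "content": "Explain how to solve a quadratic equation step-by-step."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])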


Key Features

  1. Llama 3.2 Backbone: Built on Meta's Llama 3.2 (3B) architecture, offering strong performance in a compact footprint with improved instruction following and multilingual support.

  2. Multi-Task Fine-Tuning: Fine-tuned on a diverse, modular dataset combining math, code, and general-purpose tasks, enabling clear explanations, problem solving, and practical utility.

  3. Strong Mathematical Reasoning: Handles algebra, calculus, logic, and numerical problems with step-by-step clarity. Well suited to tutoring and academic use cases.

  4. Coding Capabilities: Understands and generates clean, idiomatic code in Python, JavaScript, C++, and other languages, and can assist with debugging, documentation, and logic explanations.

  5. General-Purpose Utility: Performs well across everyday reasoning tasks such as summarization, Q&A, content drafting, and structured generation in Markdown, LaTeX, and JSON (see the structured-output sketch after this list).

  6. Efficient & Deployable: With only 3 billion parameters, Flerovium-Llama-3B is resource-efficient and suitable for local deployment, offline tools, and edge AI setups.
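
As an illustration of the structured-generation point above, the following is a minimal sketch using the transformers pipeline API; the prompt wording and JSON schema are illustrative assumptions, not part of the model card.

from transformers import pipeline

# Illustrative structured-output prompt; the JSON schema below is an assumption.
generator = pipeline(
    "text-generation",
    model="prithivMLmods/Flerovium-Llama-3B",
    device_map="auto",
)

messages = [
    {"role": "user", "content": (
        'Summarize the sentence "Water boils at 100 °C at sea level." '
        'Respond only with JSON matching {"summary": string, "keywords": [string]}.'
    )},
]

result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant's JSON reply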


Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Flerovium-Llama-3B"

# Load the model in its native precision and let transformers place layers
# across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain how to solve a quadratic equation step-by-step."

messages = [
    {"role": "system", "content": "You are a helpful AI assistant for math and coding."},
    {"role": "user", "content": prompt}
]

# Render the messages with the model's chat template and append the assistant
# turn marker so generation starts in the right place.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
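
For interactive use, tokens can be printed as they are generated. The sketch below reuses the model, tokenizer, and model_inputs objects from the Quickstart and relies on transformers' TextStreamer.

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)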

Intended Use

  • General-purpose text and reasoning
  • Math tutoring and problem-solving
  • Code generation, review, and debugging
  • Content drafting in Markdown, LaTeX, and JSON
  • Lightweight deployment in educational and developer environments (see the 4-bit loading sketch below)
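
For memory-constrained deployments, the model can also be loaded in 4-bit. This is a minimal sketch assuming a CUDA GPU and the bitsandbytes package; it is not an officially documented configuration for this model.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization cuts memory use to roughly a quarter of FP16
# (assumes a CUDA device and `pip install bitsandbytes`).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
)

model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Flerovium-Llama-3B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Flerovium-Llama-3B")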

Limitations

  • Limited context length compared to larger (7B+) models
  • May require prompt refinement for very complex code/math problems
  • Not ideal for long-form creative writing or deep conversational tasks
  • Knowledge is limited to training data (no real-time web search)

References

  1. LLaMA 3 Technical Report (Meta)
  2. YaRN: Efficient Context Window Extension of Large Language Models