---
license: apache-2.0
language:
  - en
library_name: transformers
tags:
  - llama
  - llama3
  - causal-lm
  - instruction-tuned
  - hf-internal-testing
pipeline_tag: text-generation
---

# 🦙 LLaMA3.2-1B-Instruct

`pAce576/llama3.2-1b-Instruct` is a 1.2-billion-parameter language model based on Meta's LLaMA 3 architecture. It has been instruction-tuned for conversational and general-purpose natural language generation tasks.

## 🧠 Model Details

- **Architecture:** LLaMA3.2 (custom 1.2B variant)
- **Base Model:** LLaMA3-like Transformer
- **Instruction Tuning:** yes
- **Parameters:** ~1.2 billion
- **Layers:** custom configuration, designed for efficient inference in resource-constrained environments
- **Precision:** fp16 supported (also tested with int8 and 4-bit quantization)
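To gauge whether the fp16 or quantized variants fit on a given device, the weight footprint can be estimated from the parameter count. A rough sketch (the helper below is illustrative, not part of any library, and excludes activations, KV cache, and framework overhead):

```python
# Back-of-the-envelope weight memory for a ~1.2B-parameter model.
# Ignores activations, KV cache, and framework overhead.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed to hold the weights, in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

N = 1.2e9  # parameter count from the model card
for name, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{name}: ~{weight_memory_gb(N, bits):.1f} GB")
# fp16: ~2.4 GB
# int8: ~1.2 GB
# 4-bit: ~0.6 GB
```

In practice, budget additional headroom for the KV cache and runtime overhead on top of these figures.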

## 📚 Intended Use

This model is intended for:

- Dialogue generation
- Instruction following
- Story writing
- Light reasoning tasks
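For dialogue use, prompts should follow the model's chat template; the reliable route is `tokenizer.apply_chat_template()`, which reads the template shipped with the checkpoint. As a minimal sketch of what that template typically produces, assuming this variant uses Meta's standard LLaMA 3 chat format (an assumption, since this is a custom 1.2B model):

```python
# Sketch: manually build a LLaMA-3-style chat prompt.
# ASSUMPTION: the special tokens below follow Meta's LLaMA 3 chat format;
# prefer tokenizer.apply_chat_template() in real code.

def build_prompt(messages):
    """Render a list of {role, content} dicts into a single prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open the assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a two-line poem about llamas."},
])
```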

Example usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pAce576/llama3.2-1b-Instruct")
model = AutoModelForCausalLM.from_pretrained("pAce576/llama3.2-1b-Instruct")

# Minimal generation example; adjust sampling parameters for your use case.
inputs = tokenizer("Tell me a short story about a llama.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```