# LLaMA3.2-1B-Instruct
`pAce576/llama3.2-1b-Instruct` is a 1.2-billion-parameter language model based on Meta's LLaMA 3.2 architecture. It has been instruction-tuned for conversational and general-purpose natural language generation tasks.
## Model Details
- Architecture: LLaMA3.2 (custom 1.2B variant)
- Base Model: LLaMA3-like Transformer
- Instruction Tuning: Yes
- Parameters: ~1.2 billion
- Layers: custom configuration, designed for efficient inference in resource-constrained environments
- Precision: fp16 supported (int8 and 4-bit quantization also tested)
## Intended Use
This model is intended for:
- Dialogue generation
- Instruction following
- Story writing
- Light reasoning tasks
Example usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pAce576/llama3.2-1b-Instruct")
model = AutoModelForCausalLM.from_pretrained("pAce576/llama3.2-1b-Instruct")

# Build a chat prompt and generate a reply
# (assumes the tokenizer ships a chat template, as is standard for instruct models)
messages = [{"role": "user", "content": "Write a short story about a lighthouse."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```