---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- llama
- llama3
- causal-lm
- instruction-tuned
- hf-internal-testing
pipeline_tag: text-generation
---
# 🦙 LLaMA3.2-1B-Instruct
`pAce576/llama3.2-1b-Instruct` is a 1.2-billion-parameter language model based on Meta's LLaMA3 architecture. It has been instruction-tuned for conversational and general-purpose natural language generation tasks.
## 🧠 Model Details
- **Architecture**: LLaMA3.2 (custom 1.2B variant)
- **Base Model**: LLaMA3-like Transformer
- **Instruction Tuning**: Yes
- **Parameters**: ~1.2 billion
- **Layers**: Custom configuration, designed for efficient inference in resource-constrained environments
- **Precision**: fp16 supported (also tested with int8/4-bit via quantization)
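
The int8/4-bit support mentioned above can be sketched with `BitsAndBytesConfig` from `transformers`. This is a minimal illustration, not a tuned recipe: it assumes the `bitsandbytes` package is installed and a CUDA-capable GPU is available, and the 4-bit settings shown are generic defaults rather than values validated for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit quantization config (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16, weights in 4-bit
)

# Load the model with the quantization config applied
model = AutoModelForCausalLM.from_pretrained(
    "pAce576/llama3.2-1b-Instruct",
    quantization_config=bnb_config,
)
```

For plain fp16 without quantization, passing `torch_dtype=torch.float16` to `from_pretrained` is sufficient.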
## 📚 Intended Use
This model is intended for:
- Dialogue generation
- Instruction following
- Story writing
- Light reasoning tasks
**Example usage:**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("pAce576/llama3.2-1b-Instruct")
tokenizer = AutoTokenizer.from_pretrained("pAce576/llama3.2-1b-Instruct")

# Generate a completion for a prompt
inputs = tokenizer("Write a short story about a llama.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```