---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
tags:
- llama-3.2
- fine-tuned
- conversational
- question-answering
- agentic-ai
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# Model Card for Llama-3.2-3B-Linkbox-Finetune
## Model Details
### Model Description
A fine-tuned version of Meta's Llama 3.2-3B model optimized for contextual understanding and link analysis in conversational AI applications. This model demonstrates enhanced performance in:
- Multi-turn dialogue systems
- Knowledge retrieval and synthesis
- Contextual link recognition and analysis
- Agentic workflow orchestration
- **Developed by:** Sujal Tamrakar
- **Model type:** Transformer-based language model with Grouped-Query Attention (GQA)
- **Language(s):** Primarily English, with capabilities in German, French, Italian, Portuguese, Hindi, Spanish, and Thai
- **License:** Llama 3.2 Community License ([full terms](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE))
- **Finetuned from:** meta-llama/Llama-3.2-3B-Instruct
### Model Sources
- **Repository:** [Your GitHub Repository Link]
- **Base Model:** [Meta Llama 3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B)
- **Demo:** [Link to Gradio/Streamlit Demo]
## Uses
### Direct Use
- Contextual link analysis in documents
- Multi-turn conversational agents
- Knowledge retrieval and synthesis systems
- Agentic workflow automation
### Downstream Use
- Enterprise knowledge management systems
- AI-powered research assistants
- Context-aware content recommendation engines
- Automated documentation analysis tools
### Out-of-Scope Use
- Medical/legal decision making
- Generating malicious content
- High-risk government applications
- Use in languages beyond the supported list without appropriate safety testing
## Bias, Risks, and Limitations
- May reflect biases in pretraining data
- Knowledge cutoff of December 2023
- Potential hallucination in long-form generation
- Performance degradation on highly technical domains
### Recommendations
- Implement content filtering (e.g., Llama Guard 3; see the sketch after this list)
- Use constrained decoding techniques
- Monitor for factual accuracy in critical applications
- Conduct safety testing for target deployment languages
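For the content-filtering recommendation, a minimal moderation sketch with Llama Guard 3 follows the usage pattern from its model card; the specific checkpoint (`meta-llama/Llama-Guard-3-8B`) and generation settings are assumptions to adapt for your deployment:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Llama Guard 3 checkpoint (gated repo; requires Hugging Face access approval).
guard_id = "meta-llama/Llama-Guard-3-8B"
guard_tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard_model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Return Llama Guard's verdict ("safe", or "unsafe" plus violated categories)."""
    input_ids = guard_tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard_model.device)
    output = guard_model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return guard_tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Screen a user/assistant exchange produced by the fine-tuned model.
print(moderate([
    {"role": "user", "content": "Analyze links in this text: ..."},
    {"role": "assistant", "content": "Here is the link analysis ..."},
]))
```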
## How to Get Started
```python
import torch
from transformers import pipeline

model_id = "suzall/llama-3.2-3b-linkbox-finetune"
pipe = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [{
    "role": "user",
    "content": "Analyze links in this text: [YOUR_TEXT]"
}]
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"])  # assistant reply
```
## Training Details
### Training Data
- FineTome-100k dataset (conversational format)
- Domain-specific link analysis corpus (10k samples)
- Synthetic data generated using Llama 3.1-8B
### Training Procedure
- **Architecture:** LoRA fine-tuning with r=32 (see the configuration sketch after this list)
- **Optimizer:** AdamW-8bit
- **Learning Rate:** 2e-4 with linear decay
- **Sequence Length:** 2048 tokens
- **Hardware:** NVIDIA A100 (40GB)
- **Training Time:** 8 GPU hours
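The card reports only the LoRA rank; a minimal sketch of such a setup with the `peft` library is shown below. The alpha, dropout, and target modules are illustrative assumptions, not reported values:
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base checkpoint per the card (gated repo; requires Hugging Face access approval).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

lora_config = LoraConfig(
    r=32,                     # rank reported in the card
    lora_alpha=32,            # assumed
    lora_dropout=0.05,        # assumed
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```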
#### Training Hyperparameters
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3.2-3b-linkbox-finetune",  # placeholder checkpoint directory
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    learning_rate=2e-4,
    bf16=True,
    lr_scheduler_type="linear",
    optim="adamw_bnb_8bit",  # the AdamW-8bit optimizer listed above
)
```
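These arguments plug into a standard Hugging Face trainer together with the tokenized corpus; a hedged sketch with the plain `transformers` Trainer follows (the toy dataset and collator are illustrative, not the card's actual data pipeline):
```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer)

model_name = "meta-llama/Llama-3.2-3B-Instruct"  # base checkpoint per the card
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

# Toy stand-in for the conversational/link-analysis corpora described above.
texts = ["User: Analyze the links in this text.\nAssistant: Here is the analysis ..."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=["text"],
)

model = AutoModelForCausalLM.from_pretrained(model_name)  # or the LoRA-wrapped model above
trainer = Trainer(
    model=model,
    args=training_args,  # the TrainingArguments shown above
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM labels
)
trainer.train()
```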
## Evaluation
### Benchmark Performance
| Benchmark | Score | Comparison |
|------------------|-------|-----------------|
| IFEval (Strict) | 78.2 | +1.3 vs base |
| LinkAnalysis-API | 89.4 | Custom metric |
| MMLU | 63.7 | -0.6 vs base |
## Environmental Impact
- **Carbon Emissions:** ~0.8 kgCO2eq (estimated)
- **Hardware:** 1×A100-40GB
- **Energy:** 2.5 kWh (renewable-powered)
## Technical Specifications
### Model Architecture
- Transformer-based with GQA
- 3.21B parameters
- 28-layer decoder
- 3072 hidden dimension
- 128k token context window
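The figures above can be checked directly against the base model's published configuration, for example:
```python
from transformers import AutoConfig

# Inspect the base model's architecture hyperparameters (gated repo; requires HF access).
config = AutoConfig.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
print(config.num_hidden_layers)        # decoder layers
print(config.hidden_size)              # hidden dimension
print(config.num_key_value_heads)      # GQA key/value heads
print(config.max_position_embeddings)  # context window in tokens
```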
### Quantization Options
| Precision | Memory | Recommended Use |
|-----------|--------|---------------------|
| BF16 | 6.5GB | Full precision |
| FP8 | 3.2GB | Balanced |
| INT4 | 1.75GB | Edge deployment |
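For the INT4 row, one common option is 4-bit loading with bitsandbytes; the sketch below uses typical NF4 defaults, which have not been validated specifically for this model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "suzall/llama-3.2-3b-linkbox-finetune"

# 4-bit NF4 quantization via bitsandbytes; settings are common defaults, not tuned here.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```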
## Model Card Contact
- **Maintainer:** Sujal Tamrakar
- **Email:** [email protected]