---
language:
  - en
  - ja
library_name: transformers
pipeline_tag: text-generation
license:
  - llama3.3
  - gemma
model_type: llama
base_model:
  - perplexity-ai/r1-1776-distill-llama-70b
  - tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4
  - meta-llama/Llama-3.3-70B-Instruct
base_model_relation: merge
---

# Llama-3-Swallow-Infused-R1776-70B

## Overview

**Llama-3-Swallow-Infused-R1776-70B** is a 70B-parameter merged model built on Meta's **Llama 3.3** architecture. It combines the distilled reasoning ability of `r1-1776-distill-llama-70b` with the instruction-following and Japanese-language strengths of the `Swallow` model, making it effective for instruction tasks in both English and Japanese.

The model is built on `perplexity-ai/r1-1776-distill-llama-70b`, a reasoning-focused distillation fine-tuned on top of Llama 3.3. To improve Japanese-language proficiency and overall instruction alignment, we added the ChatVector derived from `tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4`. This approach - **adding an instruction-tuned model's ChatVector to a reasoning-centric model** - is a strategy for strengthening the model's multilingual reasoning capabilities.

## Merge Methodology

This model was created using a weighted linear merge:
```
Llama-3-Swallow-Infused-R1776-70B =
  r1-1776-distill-llama-70b + 0.4 * (
    Swallow-70B-Instruct-v0.4 - Llama-3.3-70B-Instruct
  )
```
- **Base**: `perplexity-ai/r1-1776-distill-llama-70b`
  - A distilled reasoning-focused model built on Meta Llama 3.3.
- **Delta**: Difference between `tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4` and `meta-llama/Llama-3.3-70B-Instruct`.
- **Merge Tool**: [MergeKit](https://github.com/arcee-ai/mergekit)
- **Scaling Factor**: `α = 0.4`
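The weighted linear merge above can be sketched in a few lines of Python. This is a conceptual illustration, not the actual MergeKit implementation: each value stands in for a full weight tensor, and in practice the arithmetic is applied per-parameter across all three checkpoints.

```python
def chat_vector_merge(base, instruct, instruct_base, alpha=0.4):
    """merged = base + alpha * (instruct - instruct_base), per parameter.

    base:          state dict of r1-1776-distill-llama-70b
    instruct:      state dict of Swallow-70B-Instruct-v0.4
    instruct_base: state dict of Llama-3.3-70B-Instruct
    """
    merged = {}
    for name, weight in base.items():
        # The ChatVector is the delta between the instruction-tuned
        # model and the base it was tuned from.
        delta = instruct[name] - instruct_base[name]
        merged[name] = weight + alpha * delta
    return merged
```

In MergeKit this corresponds to a task-arithmetic-style merge with a single delta scaled by `α = 0.4`.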

Before merging, we performed vocabulary alignment to ensure consistency between the merged components. This step uses [yasu-oh/merge_tools](https://github.com/yasu-oh/merge_tools) to align the vocabulary of the added model with the tokenizer of the base model, preventing token mismatches that would otherwise degrade the merged model's quality.
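The idea behind vocabulary alignment can be sketched as follows. This is a conceptual illustration only (it is not the `merge_tools` API): embedding rows of the added model are reordered to match the base tokenizer's token-id order, with a zero row as an assumed fallback for tokens absent from the added vocabulary.

```python
def align_embeddings(added_emb, added_vocab, base_vocab):
    """Reorder rows of added_emb so that row i corresponds to base token id i.

    added_emb:   list of embedding rows, indexed by the added model's token ids
    added_vocab: dict mapping token string -> added model's token id
    base_vocab:  dict mapping token string -> base model's token id
    """
    dim = len(added_emb[0])
    # Fallback for tokens the added model does not know (assumption).
    aligned = [[0.0] * dim for _ in range(len(base_vocab))]
    for token, base_id in base_vocab.items():
        if token in added_vocab:
            aligned[base_id] = added_emb[added_vocab[token]]
    return aligned
```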

This methodology ensures that the reasoning backbone of R1776 is retained while integrating Swallow's enhancements in instruction tuning and Japanese language support.

## Languages

- English
- Japanese

## Key Features

- Bilingual support: robust performance for both English and Japanese tasks.
- Enhanced reasoning and instruction-following capabilities.
- Novel use of ChatVector addition from instruction-tuned models to a reasoning-centric base.

## Recommended Parameters

- `temperature`: 0.6
- `top_p`: 0.95
- `top_k`: 40
- `min_p`: 0.0
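A minimal sketch of passing these settings to Hugging Face `generate`. The `max_new_tokens` value and the repo-id placeholder are illustrative assumptions, not part of the card; `min_p` requires a recent `transformers` release.

```python
# Recommended sampling settings from this card, packed as generate() kwargs.
gen_kwargs = {
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 40,
    "min_p": 0.0,
    "max_new_tokens": 1024,  # illustrative choice, not from the card
}

# Usage (assumes transformers is installed and the hardware can host a 70B model):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("<model-repo-id>")      # placeholder repo id
# model = AutoModelForCausalLM.from_pretrained("<model-repo-id>", device_map="auto")
# inputs = tok("...", return_tensors="pt").to(model.device)
# out = model.generate(**inputs, **gen_kwargs)
```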

## License

This model is distributed under the **Meta Llama 3 Community License**.
Please review and comply with its terms:
[https://www.llama.com/llama3/license/](https://www.llama.com/llama3/license/)

**Key Restrictions Include**:

- Do not use this model to improve competing large language models (LLMs).
- When reusing this model, include the phrase: **"Built with Meta Llama 3."**
- Organizations with more than **700 million monthly active users (MAU)** require a separate license from Meta.
- Model names must include “Llama 3”.

## Citations

If you use this model, please cite the original works:

- [perplexity-ai/r1-1776-distill-llama-70b](https://huggingface.co/perplexity-ai/r1-1776-distill-llama-70b)
- [tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4)
- [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct)