Llama-3-SS-Infused-R1776-70B

Overview

Llama-3-SS-Infused-R1776-70B is a 70B-parameter merged model built on Meta's Llama 3 architecture. It combines strong reasoning capabilities with enhanced multilingual proficiency, particularly for English and Japanese tasks.

The model is based on yasu-oh/Llama-3-Swallow-Infused-R1776-70B, which itself was constructed by adding the ChatVector from tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4 (a Japanese-instruction-enhanced model) to the reasoning-focused perplexity-ai/r1-1776-distill-llama-70b (built on Llama 3.3).

Building on that foundation, we further infused the model with additional Japanese capability by adding the ChatVector from shisa-ai/shisa-v2-llama3.3-70b, another Llama 3.3-based instruction-tuned model, resulting in an even more powerful bilingual model.

This approach, adding the ChatVector from an instruction-tuned model to a reasoning-centric model, is a novel strategy for enhancing both reasoning and instruction-following capabilities in English and Japanese.
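Concretely, a ChatVector is the parameter-space difference between an instruction-tuned model and the base it was tuned from; because all models involved share the Llama 3.3 architecture, that delta can be transplanted onto a different fine-tune. A toy PyTorch illustration with stand-in tensors:

  import torch

  # Pretend these are corresponding weight tensors from three same-architecture models.
  w_base = torch.randn(4, 4)                       # pretrained base
  w_instruct = w_base + 0.1 * torch.randn(4, 4)    # instruction-tuned from that base
  w_reasoning = w_base + 0.1 * torch.randn(4, 4)   # a separate reasoning-focused fine-tune

  chat_vector = w_instruct - w_base                # what instruction tuning "added"
  alpha = 0.4                                      # scaling factor
  w_merged = w_reasoning + alpha * chat_vector     # transplant a scaled copy of the delta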

Merge Methodology

The final model was created using a weighted linear merge:

Llama-3-SS-Infused-R1776-70B =
  Llama-3-Swallow-Infused-R1776-70B + 0.4 * (
    shisa-v2-llama3.3-70b - Llama-3.3-70B-Instruct
  )
  • Base: yasu-oh/Llama-3-Swallow-Infused-R1776-70B
    • itself created by adding the ChatVector from tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4 to the distilled reasoning model perplexity-ai/r1-1776-distill-llama-70b.
  • Delta: Difference between shisa-ai/shisa-v2-llama3.3-70b and meta-llama/Llama-3.3-70B-Instruct.
  • Merge Tool: MergeKit (an equivalent plain-PyTorch sketch follows this list)
  • Scaling Factor: α = 0.4
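For reference, the same arithmetic written out with plain PyTorch and Transformers. This is only an illustrative sketch: the released model was produced with MergeKit, and loading three 70B checkpoints at once requires substantial memory.

  import torch
  from transformers import AutoModelForCausalLM

  # merged = base + alpha * (donor - reference); assumes vocabulary-aligned checkpoints (see below).
  base = AutoModelForCausalLM.from_pretrained(
      "yasu-oh/Llama-3-Swallow-Infused-R1776-70B", torch_dtype=torch.bfloat16)
  donor = AutoModelForCausalLM.from_pretrained(
      "shisa-ai/shisa-v2-llama3.3-70b", torch_dtype=torch.bfloat16)
  reference = AutoModelForCausalLM.from_pretrained(
      "meta-llama/Llama-3.3-70B-Instruct", torch_dtype=torch.bfloat16)

  alpha = 0.4
  donor_sd = donor.state_dict()
  reference_sd = reference.state_dict()
  with torch.no_grad():
      for name, param in base.named_parameters():
          # Add the scaled Shisa-v2 delta to every weight tensor.
          param.add_(alpha * (donor_sd[name] - reference_sd[name]))

  base.save_pretrained("Llama-3-SS-Infused-R1776-70B")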

Before merging, we performed vocabulary unification by aligning the vocabulary of the added model to match the tokenizer of the base model. This step was implemented using yasu-oh/merge_tools, which ensures consistent tokenization across merged components and prevents token mismatches that could degrade model performance.
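A minimal sketch of that alignment step, assuming hypothetical model paths (the actual procedure is implemented in yasu-oh/merge_tools): embedding rows of the added model are re-indexed into the base tokenizer's ID space, copying rows for tokens the two vocabularies share.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  base_tok = AutoTokenizer.from_pretrained("path/to/base-model")
  added = AutoModelForCausalLM.from_pretrained("path/to/added-model", torch_dtype=torch.bfloat16)
  added_tok = AutoTokenizer.from_pretrained("path/to/added-model")

  # Build a new embedding matrix in the base tokenizer's token-ID order.
  old_emb = added.get_input_embeddings().weight.detach()
  new_emb = old_emb.new_zeros((len(base_tok), old_emb.shape[1]))
  added_vocab = added_tok.get_vocab()
  for token, new_id in base_tok.get_vocab().items():
      old_id = added_vocab.get(token)
      if old_id is not None:
          new_emb[new_id] = old_emb[old_id]  # copy rows for shared tokens

  added.resize_token_embeddings(len(base_tok))
  with torch.no_grad():
      added.get_input_embeddings().weight.copy_(new_emb)
  # The output head (lm_head) needs the same row remapping if it is not tied.
  added.save_pretrained("path/to/added-model-aligned")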

This methodology preserves the core reasoning abilities of R1776 while integrating Swallow's and Shisa-v2's improvements in instruction-following and Japanese language performance.

Languages

  • English
  • Japanese

Key Features

  • Strong bilingual support for both English and Japanese tasks.
  • Enhanced reasoning and instruction-following capabilities.
  • Innovative ChatVector addition from instruction-tuned models to a reasoning-centric base.

Recommended Parameters

  • temperature: 0.6
  • top_p: 0.95
  • top_k: 40
  • min_p: 0.0
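A standard Transformers generation call using these settings (min_p requires a recent Transformers release):

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "yasu-oh/Llama-3-SS-Infused-R1776-70B"
  tok = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id, torch_dtype=torch.bfloat16, device_map="auto")

  messages = [{"role": "user", "content": "富士山について簡単に説明してください。"}]
  inputs = tok.apply_chat_template(
      messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

  out = model.generate(
      inputs,
      max_new_tokens=512,
      do_sample=True,
      temperature=0.6,
      top_p=0.95,
      top_k=40,
      min_p=0.0,
  )
  print(tok.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))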

License

This model is distributed under the Meta Llama 3 Community License. Please review and comply with its terms: https://www.llama.com/llama3/license/

Key Restrictions Include:

  • Do not use this model to improve competing large language models (LLMs).
  • When reusing this model, include the phrase: "Built with Meta Llama 3."
  • Organizations with more than 700 million monthly active users (MAU) require a separate license from Meta.
  • Model names must include "Llama 3".

Citations

If you use this model, please cite the original works it builds on:

  • perplexity-ai/r1-1776-distill-llama-70b
  • tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4
  • shisa-ai/shisa-v2-llama3.3-70b
  • meta-llama/Llama-3.3-70B-Instruct
  • yasu-oh/Llama-3-Swallow-Infused-R1776-70B
