|
|
|
|
|
This work is licensed under the **Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License**. |
|
|
|
To view a copy of this license, visit [https://creativecommons.org/licenses/by-nc-sa/4.0/](https://creativecommons.org/licenses/by-nc-sa/4.0/) or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. |
|
|
|
|
|
|
|
|
|
**You are free to:**
|
- **Share** — copy and redistribute the material in any medium or format. |
|
- **Adapt** — remix, transform, and build upon the material. |
|
|
|
**Under the following terms:** |
|
- **Attribution (BY):** You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. |
|
- **Non-Commercial (NC):** You may not use the material for commercial purposes. |
|
- **ShareAlike (SA):** If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. |
|
|
|
**No additional restrictions:** You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. |
|
|
|
|
|
|
|
|
|
When redistributing or adapting this work, you must include the following attribution in a clear and visible manner: |
|
|
|
```
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

Original works:
- Base Model: [https://huggingface.co/llm-jp/llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) (Apache License 2.0)
- Datasets:
  - [Aratako/HelpSteer2-Preferences-formatted](https://huggingface.co/datasets/Aratako/HelpSteer2-Preferences-formatted)
  - [Aratako/Magpie-Tanuki-Instruction-Selected-Evolved-26.5k](https://huggingface.co/datasets/Aratako/Magpie-Tanuki-Instruction-Selected-Evolved-26.5k)
  - [Aratako/Magpie-Tanuki-Qwen2.5-72B-Answered](https://huggingface.co/datasets/Aratako/Magpie-Tanuki-Qwen2.5-72B-Answered)
  - [Aratako/Open-Platypus-Japanese-masked-formatted](https://huggingface.co/datasets/Aratako/Open-Platypus-Japanese-masked-formatted)
  - [Aratako/Self-Instruct-Qwen2.5-72B-Instruct-60k](https://huggingface.co/datasets/Aratako/Self-Instruct-Qwen2.5-72B-Instruct-60k)
  - [Aratako/Synthetic-JP-EN-Coding-Dataset-801k-50k](https://huggingface.co/datasets/Aratako/Synthetic-JP-EN-Coding-Dataset-801k-50k)
  - [Aratako/aya-ja-evol-instruct-calm3-dpo-masked-formatted](https://huggingface.co/datasets/Aratako/aya-ja-evol-instruct-calm3-dpo-masked-formatted)
  - [Aratako/iterative-dpo-data-for-ORPO-iter3](https://huggingface.co/datasets/Aratako/iterative-dpo-data-for-ORPO-iter3)
  - [Aratako/iterative-dpo-data-for-SimPO-iter2](https://huggingface.co/datasets/Aratako/iterative-dpo-data-for-SimPO-iter2)
  - [Aratako/magpie-qwen2.5-32b-reasoning-100k-formatted](https://huggingface.co/datasets/Aratako/magpie-qwen2.5-32b-reasoning-100k-formatted)
  - [Aratako/magpie-reasoning-llama-nemotron-70b-100k-filtered](https://huggingface.co/datasets/Aratako/magpie-reasoning-llama-nemotron-70b-100k-filtered)
  - [Aratako/magpie-ultra-v0.1-formatted](https://huggingface.co/datasets/Aratako/magpie-ultra-v0.1-formatted)
  - [Aratako/orca-agentinstruct-1M-v1-selected](https://huggingface.co/datasets/Aratako/orca-agentinstruct-1M-v1-selected)
  - [DeL-TaiseiOzaki/Tengentoppa-sft-qwen2.5-32b-reasoning-100k](https://huggingface.co/datasets/DeL-TaiseiOzaki/Tengentoppa-sft-qwen2.5-32b-reasoning-100k)
  - [cl-nagoya/auto-wiki-qa](https://huggingface.co/datasets/cl-nagoya/auto-wiki-qa)
  - [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/)
  - [kanhatakeyama/ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)
  - [kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja](https://huggingface.co/datasets/kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja)
  - [llm-jp/magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)
  - [saillab/alpaca-japanese-cleaned](https://huggingface.co/datasets/saillab/alpaca-japanese-cleaned)
  - [tokutsu/japanese-tasks1000](https://huggingface.co/datasets/tokutsu/japanese-tasks1000)
- Models used for scoring:
  - [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) (Built with Qwen)
  - [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) (Built with Llama)
- Models used to build the datasets (including third-party datasets):
  - [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1)
  - [Aratako/Llama-Gemma-2-27b-CPO_SimPO-iter1](https://huggingface.co/Aratako/Llama-Gemma-2-27b-CPO_SimPO-iter1)
  - [Aratako/Llama-Gemma-2-27b-CPO_SimPO-iter2](https://huggingface.co/Aratako/Llama-Gemma-2-27b-CPO_SimPO-iter2)
  - Google Cloud Translation
  - [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) (Built with Qwen)
  - [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) (Built with Qwen)
  - [Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8) (Built with Qwen)
  - [WizardLM 8x22b](https://github.com/nlpxucan/WizardLM)
  - [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large)
  - [cyberagent/calm3-22b-chat](https://huggingface.co/cyberagent/calm3-22b-chat)
  - [meta-llama/Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) (Built with Llama)
  - [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) (Built with Llama)
  - [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) (Built with Llama)
  - [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) (Built with Llama)
  - [meta-llama/Llama-Guard-3-8B](https://huggingface.co/meta-llama/Llama-Guard-3-8B)
  - [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct)
  - [nvidia/Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct)
  - [team-hatakeyama-phase2/Tanuki-8x8B-dpo-v1.0-GPTQ-8bit](https://huggingface.co/team-hatakeyama-phase2/Tanuki-8x8B-dpo-v1.0-GPTQ-8bit)
  - [team-hatakeyama-phase2/tanuki-8B-exp007](https://huggingface.co/team-hatakeyama-phase2/tanuki-8B-exp007)
  - [tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1)
  - [weblab-GENIAC/Tanuki-8B-dpo-v1.0](https://huggingface.co/weblab-GENIAC/Tanuki-8B-dpo-v1.0)

This work:
- Model: CC BY-NC-SA 4.0
- Creator: tokutsu
```
|
|
|
--- |
|
|
|
**Disclaimer:** |
|
The materials are provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement.
|
|