---
language:
- en
license: other
library_name: transformers
datasets:
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
inference:
  parameters:
    do_sample: true
    temperature: 0.8
    top_p: 0.95
    top_k: 40
    max_new_tokens: 250
    repetition_penalty: 1.1
model-index:
- name: Orca-2.0-Tau-1.8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 37.12
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 61.13
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 45.27
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 39.1
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.59
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.96
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
      name: Open LLM Leaderboard
---

# Orca-2.0-Tau-1.8B
We fine-tuned tau-1.8B on a high-quality data mix for general-purpose assistants. A DPO version will be released soon. The model uses the ChatML prompt format.
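Since the model uses ChatML and the metadata above pins down the recommended sampling parameters, a minimal inference sketch with `transformers` might look like the following. This assumes the ChatML chat template ships with the tokenizer; the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/Orca-2.0-Tau-1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a ChatML prompt via the tokenizer's chat template
# (assumed to be bundled with the model repo).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Solve 12 * 17 step by step."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters taken from the `inference` block in the metadata above.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```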
## Model Details

### Model Description
The model is capable of math, coding, writing, and more. It was fine-tuned on a high-quality data mix aimed at general-purpose assistant use.
- Developed by: M4-ai
- Language(s) (NLP): English (and possibly Chinese, inherited from the base model)
- License: Tongyi Qianwen license
- Finetuned from model: tau-1.8B
## Uses

General-purpose assistance, question answering, chain-of-thought reasoning, etc.
### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## Evaluation

See the Evaluations section and the Open LLM Leaderboard results below.
## Training Details

### Training Data
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
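To give a concrete picture of how such a mix can be assembled, here is a hedged sketch using the `datasets` library. The interleaving probabilities are illustrative assumptions; the actual mixing ratios used for this model are not disclosed.

```python
from datasets import load_dataset, interleave_datasets

# Two of the datasets listed above, as an illustration; extend the list
# analogously for the remaining sources. Splits and columns differ per
# dataset, so a real pipeline would normalize them to a shared schema first.
slimorca = load_dataset("Open-Orca/SlimOrca", split="train")
capybara = load_dataset("LDJnr/Capybara", split="train")

# Illustrative mixing ratios -- the true proportions are not published.
mix = interleave_datasets(
    [slimorca, capybara],
    probabilities=[0.7, 0.3],
    seed=42,
)
print(mix)
```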
## Evaluations

| Tasks | Version | Filter | n-shot | Metric | Value | | Stderr |
|---|---|---|---|---|---|---|---|
| agieval_nous | N/A | none | 0 | acc | 0.2537 | ± | 0.0086 |
| | | none | 0 | acc_norm | 0.2474 | ± | 0.0085 |
| - agieval_aqua_rat | 1 | none | 0 | acc | 0.2283 | ± | 0.0264 |
| | | none | 0 | acc_norm | 0.2441 | ± | 0.0270 |
| - agieval_logiqa_en | 1 | none | 0 | acc | 0.2750 | ± | 0.0175 |
| | | none | 0 | acc_norm | 0.3164 | ± | 0.0182 |
| - agieval_lsat_ar | 1 | none | 0 | acc | 0.2087 | ± | 0.0269 |
| | | none | 0 | acc_norm | 0.1739 | ± | 0.0250 |
| - agieval_lsat_lr | 1 | none | 0 | acc | 0.1843 | ± | 0.0172 |
| | | none | 0 | acc_norm | 0.2353 | ± | 0.0188 |
| - agieval_lsat_rc | 1 | none | 0 | acc | 0.2602 | ± | 0.0268 |
| | | none | 0 | acc_norm | 0.1784 | ± | 0.0234 |
| - agieval_sat_en | 1 | none | 0 | acc | 0.3544 | ± | 0.0334 |
| | | none | 0 | acc_norm | 0.2961 | ± | 0.0319 |
| - agieval_sat_en_without_passage | 1 | none | 0 | acc | 0.3107 | ± | 0.0323 |
| | | none | 0 | acc_norm | 0.2282 | ± | 0.0293 |
| - agieval_sat_math | 1 | none | 0 | acc | 0.2727 | ± | 0.0301 |
| | | none | 0 | acc_norm | 0.2091 | ± | 0.0275 |
| truthfulqa_mc2 | 2 | none | 0 | acc | 0.3923 | ± | 0.0139 |
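The table above follows the output format of EleutherAI's lm-evaluation-harness. Assuming that harness was used, a run along these lines should produce comparable numbers; the task names and batch size here are illustrative and depend on the harness version.

```python
import lm_eval

# Evaluate the model on the task groups reported above (lm-eval >= 0.4).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=M4-ai/Orca-2.0-Tau-1.8B,dtype=bfloat16",
    tasks=["agieval_nous", "truthfulqa_mc2"],
    num_fewshot=0,
    batch_size=8,  # illustrative; tune to available memory
)
for task, metrics in results["results"].items():
    print(task, metrics)
```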
### Training Hyperparameters
- Training regime: bf16 non-mixed precision
## Technical Specifications

### Hardware
We trained on 8 Kaggle TPU cores at a global batch size of 128 and a sequence length of 2048.
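For concreteness, the global batch size factors across devices as follows. Only the product (128) is stated above; the per-device batch size and gradient-accumulation steps below are assumptions.

```python
# Global batch size = devices x per-device batch x gradient-accumulation steps.
num_devices = 8              # Kaggle TPU cores, per the card
per_device_batch_size = 16   # assumed
grad_accum_steps = 1         # assumed
seq_len = 2048               # per the card

global_batch = num_devices * per_device_batch_size * grad_accum_steps
tokens_per_step = global_batch * seq_len
assert global_batch == 128
print(f"{global_batch=}, {tokens_per_step=}")  # 128 sequences, 262144 tokens/step
```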
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B).
| Metric | Value |
|---|---|
| Avg. | 45.20 |
| AI2 Reasoning Challenge (25-Shot) | 37.12 |
| HellaSwag (10-Shot) | 61.13 |
| MMLU (5-Shot) | 45.27 |
| TruthfulQA (0-shot) | 39.10 |
| Winogrande (5-shot) | 59.59 |
| GSM8k (5-shot) | 28.96 |