hamishivi committed b4ef734 (verified) · 1 parent: d5580ef

Update README.md

Files changed (1): README.md (+4 -3)
README.md CHANGED
@@ -10,7 +10,7 @@ license: llama3.1
 # Llama 3.1 RDS+ Tulu 3 Arena Hard 326k
 
 This is a model trained on 939k samples selected by RDS+ using Arena Hard samples from the [Tulu 3 unfiltered dataset](https://huggingface.co/datasets/hamishivi/tulu-3-unfiltered).
-For more details, please see the paper [Practical Large-Scale Data Selection for Instruction Tuning](todo) and [associated codebase](https://github.com/hamishivi/automated-instruction-selection).
+For more details, please see the paper [Practical Large-Scale Data Selection for Instruction Tuning](https://arxiv.org/abs/2503.01807) and [associated codebase](https://github.com/hamishivi/automated-instruction-selection).
 
 <center>
 <img src="https://huggingface.co/hamishivi/tulu-2-multitask-rrmax-326k-sft/resolve/main/image.png" alt="Practical Large-Scale Data Selection for Instruction Tuning logo" width="200px"/>
@@ -31,7 +31,7 @@ For more details, please see the paper [Practical Large-Scale Data Selection for
 
 ## Results
 
-For more results and analysis, please see [our paper](todo).
+For more results and analysis, please see [our paper](https://arxiv.org/abs/2503.01807).
 
 | Method | MMLU | GSM8k | BBH | TydiQA | Codex | Squad | AlpacaEval | Average |
 |-----------------------|------:|------:|-----:|-------:|------:|------:|-----------:|--------:|
@@ -76,7 +76,8 @@ If you find this model or data is useful in your work, please cite it with:
 title={{Practical Large-Scale Data Selection for Instruction Tuning}},
 author={{Hamish Ivison and Muru Zhang and Faeze Brahman and Pang Wei Koh and Pradeep Dasigi}}
 year={2025},
-eprint={todo},
+url={https://arxiv.org/abs/2503.01807},
+eprint={2503.01807},
 archivePrefix={arXiv},
 primaryClass={cs.CL}
 }
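
Since the card above describes a Llama 3.1 SFT model hosted on the Hub, here is a minimal usage sketch with the standard Hugging Face `transformers` API. The repo id below is inferred from the card title and is an assumption; the generation settings are illustrative only, not the card's recommended configuration.

```python
# Minimal usage sketch; the repo id is a guess based on the card title — verify the actual name on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hamishivi/llama-3.1-tulu-3-arena-hard-326k"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Tulu-style SFT checkpoints are chat models, so build the prompt with the chat template.
messages = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```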