Update README.md

README.md CHANGED
@@ -6,13 +6,13 @@ language:
 # Ettin Decay Phase Data
 
 [](https://opensource.org/licenses/MIT)
-
+[](https://arxiv.org/abs/2507.11412)
 [](https://huggingface.co/jhu-clsp)
 [](https://github.com/jhu-clsp/ettin-encoder-vs-decoder)
 
 > **Phase 3 of 3**: Premium data sources for final training phase (100B tokens) following the ProLong recipe.
 
-This dataset contains the decay phase data used to train all [
+This dataset contains the decay phase data used to train all [Ettin encoder and decoder models](https://huggingface.co/jhu-clsp). This final phase uses **premium data sources** with emphasis on **long-form content** and **educational materials**. The data is provided in **MDS format** ready for use with [Composer](https://github.com/mosaicml/composer) and the [ModernBERT training repository](https://github.com/answerdotai/ModernBERT).
 
 ## 📊 Data Composition
 
@@ -89,17 +89,19 @@ This decay phase data is also used for **cross-objective training** experiments:
 - **Phase 1**: [Pre-training Data](https://huggingface.co/datasets/jhu-clsp/ettin-pretraining-data) (1.7T tokens)
 - **Phase 2**: [Mid-training Data](https://huggingface.co/datasets/jhu-clsp/ettin-extension-data) (250B tokens)
 - **Training Order**: [Batch-level Data Order](https://huggingface.co/datasets/jhu-clsp/ettin-data-order)
-- **Paper**:
+- **Paper**: [Arxiv link](https://arxiv.org/abs/2507.11412)
 - **Code**: [GitHub Repository](https://github.com/jhu-clsp/ettin-encoder-vs-decoder)
 
 ## Citation
 
 ```bibtex
-@misc{
+@misc{weller2025seqvsseqopen,
   title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
   author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
   year={2025},
-
-
+  eprint={2507.11412},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2507.11412},
 }
 ```
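The updated README states that the data ships in MDS format for use with Composer. As a rough illustration only: a minimal sketch of inspecting an MDS directory, assuming the standard `index.json` layout written by the `mosaicml-streaming` library (a version-2 index whose `shards` list records a `samples` count per shard). The directory and shard names here are hypothetical, not taken from this dataset.

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def count_mds_samples(mds_dir: str) -> int:
    """Sum the per-shard sample counts recorded in an MDS index.json."""
    index = json.loads((Path(mds_dir) / "index.json").read_text())
    return sum(shard["samples"] for shard in index["shards"])

# Build a toy index.json mimicking the assumed MDS layout, then read it back.
with TemporaryDirectory() as tmp:
    fake_index = {
        "version": 2,
        "shards": [
            {"raw_data": {"basename": "shard.00000.mds"}, "samples": 1024},
            {"raw_data": {"basename": "shard.00001.mds"}, "samples": 512},
        ],
    }
    (Path(tmp) / "index.json").write_text(json.dumps(fake_index))
    print(count_mds_samples(tmp))  # → 1536
```

For actual training, the repositories linked above consume these directories directly (e.g. via `streaming.StreamingDataset`), so counting shards by hand like this is purely diagnostic.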
|