orionweller and nielsr committed bac41ce (verified) · 1 parent: 33ba201

Add task categories, tags, library name, and abstract to dataset card (#1)


- Add task categories, tags, library name, and abstract to dataset card (c453169d5875561631b8f38cb39357e191a78444)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+13 −1)
README.md CHANGED
@@ -1,8 +1,16 @@
  ---
- license: mit
  language:
  - en
+ license: mit
+ task_categories:
+ - text-generation
+ library_name: streaming
+ tags:
+ - pretraining
+ - language-modeling
+ - encoder-decoder
  ---
+
  # Ettin Decay Phase Data

  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

@@ -14,6 +22,10 @@ language:

  This dataset contains the decay phase data used to train all [Ettin encoder and decoder models](https://huggingface.co/jhu-clsp). This final phase uses **premium data sources** with emphasis on **long-form content** and **educational materials**. The data is provided in **MDS format** ready for use with [Composer](https://github.com/mosaicml/composer) and the [ModernBERT training repository](https://github.com/answerdotai/ModernBERT).

+ ## Abstract
+
+ The large language model (LLM) community focuses almost exclusively on decoder-only language models, since they are easier to use for text generation. However, a large subset of the community still uses encoder-only models for tasks such as classification or retrieval. Previous work has attempted to compare these architectures, but is forced to make comparisons with models that have different numbers of parameters, training techniques, and datasets. We introduce the SOTA open-data Ettin suite of models: paired encoder-only and decoder-only models ranging from 17 million parameters to 1 billion, trained on up to 2 trillion tokens. Using the same recipe for both encoder-only and decoder-only models produces SOTA recipes in both categories for their respective sizes, beating ModernBERT as an encoder and Llama 3.2 and SmolLM2 as decoders. Like previous work, we find that encoder-only models excel at classification and retrieval tasks while decoders excel at generative tasks. However, we show that adapting a decoder model to encoder tasks (and vice versa) through continued training is subpar compared to using only the reverse objective (i.e. a 400M encoder outperforms a 1B decoder on MNLI, and vice versa for generative tasks). We open-source all artifacts of this study including training data, training order segmented by checkpoint, and 200+ checkpoints to allow future work to analyze or extend all aspects of training.
+
  ## 📊 Data Composition

  | Data Source | Tokens (B) | Percentage | Description |
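
The card says the data ships in MDS format for use with Composer and the ModernBERT training repository, and the new front matter declares `library_name: streaming`. As a minimal sketch (not taken from the card), MDS shards can be read locally with the `mosaicml-streaming` library; the directory path and the sample field names below are assumptions, not documented values.

```python
# Minimal sketch, assuming a local copy of the MDS shards.
# Install the library named by `library_name: streaming`:
#   pip install mosaicml-streaming
from streaming import StreamingDataset

# Path is hypothetical: point `local` at the directory holding the downloaded shards.
dataset = StreamingDataset(
    local="/path/to/ettin-decay-phase-data",
    shuffle=False,
)

for sample in dataset:
    # Field names depend on how the shards were written (e.g. raw text or token ids);
    # inspect the first sample to see what is actually stored.
    print(list(sample.keys()))
    break
```

In a real pretraining run this dataset object would typically be wrapped in a PyTorch `DataLoader` and consumed by a Composer trainer, as set up in the ModernBERT training repository linked above.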