Phase 2 of 3: Higher-quality filtered data with context extension (250B tokens) used for mid-training of Ettin models.
This dataset contains the mid-training phase data used to train all Ettin encoder and decoder models. This phase focuses on higher-quality filtered data and context length extension to 8K tokens. The data is provided in MDS format ready for use with Composer and the ModernBERT training repository.
| Data Source | Tokens (B) | Percentage | Description | 
|---|---|---|---|
| DCLM (Dolmino) | 175.5 | 70.4% | High-quality filtered web crawl | 
| Starcoder | 38.4 | 15.4% | Code repositories and files | 
| Math (Dolmino) | 10.4 | 4.2% | Mathematical content (filtered) | 
| PeS2o | 8.3 | 3.3% | Scientific papers | 
| Reddit | 6.2 | 2.5% | Social discussion threads | 
| Arxiv | 4.1 | 1.6% | Academic preprints | 
| StackExchange (Dolmino) | 2.7 | 1.1% | Q&A forums (filtered) | 
| Tulu Flan | 2.4 | 1.0% | Instruction-following data | 
| Books | 0.8 | 0.3% | Literature and reference books | 
| Wikipedia | 0.5 | 0.2% | Encyclopedia articles | 
| Total | 249.3 | 100.0% | Quality-focused mixture | 
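As a sanity check, the per-source token counts above sum to the stated 249.3B total, and the percentage column follows directly from them. The short script below verifies this; the numbers are copied from the table, nothing is read from the dataset itself:

```python
# Token counts (in billions) copied from the mixture table above.
mixture = {
    "DCLM (Dolmino)": 175.5,
    "Starcoder": 38.4,
    "Math (Dolmino)": 10.4,
    "PeS2o": 8.3,
    "Reddit": 6.2,
    "Arxiv": 4.1,
    "StackExchange (Dolmino)": 2.7,
    "Tulu Flan": 2.4,
    "Books": 0.8,
    "Wikipedia": 0.5,
}

total = sum(mixture.values())
print(f"Total: {total:.1f}B tokens")  # Total: 249.3B tokens

# Recompute each source's share of the mixture.
for name, tokens in mixture.items():
    print(f"{name}: {100 * tokens / total:.1f}%")
```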
For the pre-training phase, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT
```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-extension-data',
    local='/tmp/ettin-extension-data',
    shuffle=True
)

# Access samples (note: these will be longer sequences)
for sample in dataset:
    text = sample['text']  # Up to 8K tokens
    # Process your data...
```
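Since this phase extends context to 8K tokens, downstream code may want to cap sample length before batching. The sketch below clips a sample at 8,192 tokens; the whitespace split is only a self-contained stand-in for a real tokenizer (an assumption for illustration, not part of the dataset API):

```python
# Sketch: guard against over-length samples when consuming the stream.
# A real pipeline would count tokens with the model's tokenizer;
# whitespace splitting is used here only to keep the example runnable.
MAX_TOKENS = 8192

def truncate_to_max(text: str, max_tokens: int = MAX_TOKENS) -> str:
    """Return the text clipped to at most max_tokens whitespace tokens."""
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

# Hypothetical over-length sample for demonstration.
sample = {"text": "word " * 10000}
clipped = truncate_to_max(sample["text"])
print(len(clipped.split()))  # 8192
```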
Each folder contains filtered, higher-quality data sources in MDS format:
- arxiv/ - Academic papers from ArXiv
- books/ - Literature and reference books
- dclm_dolmino/ - Dolmino-filtered web crawl data (primary source)
- math_dolmino/ - Filtered mathematical content
- pes2o/ - Scientific papers
- reddit/ - Reddit discussion threads
- stackexchange_dolmino/ - Filtered StackExchange Q&A
- starcoder/ - Code from GitHub repositories
- tulu_flan/ - Instruction-following examples
- wikipedia/ - Wikipedia articles

Citation:

```bibtex
@misc{weller2025seqvsseqopen,
      title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
      author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2507.11412},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11412},
}
```