---
license: mit
language:
- en
pipeline_tag: fill-mask
---
# Ettin: An Open Suite of Paired Encoders and Decoders

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](https://arxiv.org/abs/2507.11412)
[![Models](https://img.shields.io/badge/🤗%20Hugging%20Face-12%20Models-blue)](https://huggingface.co/jhu-clsp)
[![Data](https://img.shields.io/badge/🤗%20Training%20Data-2T%20Tokens-green)](https://huggingface.co/datasets/jhu-clsp)
[![GitHub](https://img.shields.io/badge/GitHub-Code-black)](https://github.com/jhu-clsp/ettin-encoder-vs-decoder)

> 🎯 **TL;DR**: State-of-the-art paired encoder and decoder models (17M-1B params) trained identically, on open data, for fair comparison. The encoders beat ModernBERT; the decoders beat Llama 3.2 and SmolLM2.

📄 [Paper](https://arxiv.org/abs/2507.11412) | 🚀 [GitHub Repository](https://github.com/jhu-clsp/ettin-encoder-vs-decoder)

This model is part of the Ettin suite, the first collection of paired encoder-only and decoder-only models trained with identical data, architecture, and training recipe. Ettin enables fair comparisons between encoder and decoder architectures across multiple scales, providing state-of-the-art performance for open-data models in their respective size categories.

## Table of Contents
- [Performance Highlights](#performance-highlights)
- [Quick Start](#quick-start)
- [Model Description](#model-description)
- [Training Data](#training-data)
- [Model Family](#model-family)
  - [Encoder Models](#encoder-models)
  - [Decoder Models](#decoder-models)
  - [Cross-Objective Models](#cross-objective-models)
- [Accessing Training Checkpoints](#accessing-training-checkpoints)
- [Research Applications](#research-applications)
- [Training Details](#training-details)
- [Model Architecture](#model-architecture)
- [Usage Examples](#usage-examples)
- [Fine-tuning Examples](#fine-tuning-examples)
- [Citation](#citation)

## 📊 Performance Highlights

### Encoder Tasks (vs. ModernBERT)
- **GLUE Average**: 88.9 vs 88.4 (Base), 90.8 vs 90.4 (Large)
- **MTEB v2 English Retrieval**: 45.7 vs 43.9 (Base), 48.4 vs 47.0 (Large)
- **Code Search and Long Context**: Superior performance on CodeSearchNet and MLDR

### Decoder Tasks (vs. SmolLM2 & Llama 3.2)
- **Average Score**: 46.2 vs 45.2 (SmolLM2-135M)
- **1B Model**: 59.0 vs 56.6 (Llama 3.2-1B)
- **Generative Tasks**: Competitive across all model sizes

### Key Finding
**Architecture-specific advantages persist**: A 400M encoder outperforms a 1B decoder on classification tasks, while a 400M decoder outperforms a 1B encoder on generation tasks.

## 🚀 Quick Start

### Installation
```bash
pip install "torch>=1.9.0"
# Encoders work with transformers>=4.48.0.
# Decoders need transformers installed from main until the next release (transformers>=4.54.X will include them):
pip install git+https://github.com/huggingface/transformers.git
```

### 30-Second Examples

**Encoder for Classification/Embeddings:**
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-encoder-150m")
model = AutoModel.from_pretrained("jhu-clsp/ettin-encoder-150m")
```
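
If you want quick sentence embeddings from the raw encoder, a common recipe is mean pooling over the last hidden states. The sketch below continues from the snippet above; the pooling choice is our illustration, not a prescription from the Ettin authors (trained embedding models are covered under [Fine-tuning Examples](#fine-tuning-examples)).

```python
import torch

def embed(texts):
    # Tokenize a small batch and run the encoder
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # [batch, seq_len, hidden_size]
    # Mean-pool over non-padding tokens only
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

embeddings = embed(["Ettin pairs encoders and decoders.", "ModernBERT is an encoder."])
print(embeddings.shape)  # torch.Size([2, hidden_size])
```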

**Decoder for Text Generation:**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-150m")
model = AutoModelForCausalLM.from_pretrained("jhu-clsp/ettin-decoder-150m")
```

## Model Description

Ettin models are designed to provide a foundation for comparing encoder-only and decoder-only architectures. Unlike previous comparisons, which were confounded by differing training data, architectures, and recipes, Ettin models use:

1. **Identical training data** - The same high-quality mixture across all models
2. **Open training data** - The full data mixture is released, along with the batch-level training order for each of the 250+ checkpoints
3. **Matched architectures** - Differing only in attention pattern (bidirectional vs. causal) and training objective (MLM vs. CLM)
4. **Consistent training recipe** - Three-phase training with 2T tokens
5. **Multiple scales** - From 17M to 1B parameters

This approach allows true apples-to-apples comparisons between encoder and decoder models, revealing the inherent strengths of each architecture.

## Training Data

The training data is publicly available and split across the training phases (a loading sketch follows the list):

- **Pre-training Data**: [jhu-clsp/ettin-pretraining-data](https://huggingface.co/datasets/jhu-clsp/ettin-pretraining-data) - 1.7T tokens of a diverse data mixture
- **Mid-training/Extension Data**: [jhu-clsp/ettin-extension-data](https://huggingface.co/datasets/jhu-clsp/ettin-extension-data) - 250B tokens of higher-quality filtered data
- **Decay Phase Data**: [jhu-clsp/ettin-decay-data](https://huggingface.co/datasets/jhu-clsp/ettin-decay-data) - 100B tokens of premium data sources
- **Training Data Order**: [jhu-clsp/ettin-data-order](https://huggingface.co/datasets/jhu-clsp/ettin-data-order) - Batch-level training order (columns: input_ids, step)

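A minimal sketch of streaming these datasets with 🤗 `datasets`. It assumes the default configuration and a `train` split for each repository and the column names listed above (`input_ids`, `step`); check the individual dataset cards for the exact configurations and schema.

```python
from datasets import load_dataset

# Stream the pre-training mixture instead of downloading it in full
pretrain = load_dataset("jhu-clsp/ettin-pretraining-data", split="train", streaming=True)
print(next(iter(pretrain)))

# Stream the batch-level training order; each row ties token ids to the step they were seen at
order = load_dataset("jhu-clsp/ettin-data-order", split="train", streaming=True)
row = next(iter(order))
print(row["step"], len(row["input_ids"]))
```
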
## Model Family

### Encoder Models

| Size | Model | Parameters | Best For | Download |
|:-----|:------|:-----------|:---------|:---------|
| XXS | [ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) | 17M | Mobile/Edge devices | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-17m) |
| XS | [ettin-encoder-32m](https://huggingface.co/jhu-clsp/ettin-encoder-32m) | 32M | Fast inference | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-32m) |
| Small | [ettin-encoder-68m](https://huggingface.co/jhu-clsp/ettin-encoder-68m) | 68M | Balanced performance | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-68m) |
| Base | [ettin-encoder-150m](https://huggingface.co/jhu-clsp/ettin-encoder-150m) | 150M | Standard use cases | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-150m) |
| Large | [ettin-encoder-400m](https://huggingface.co/jhu-clsp/ettin-encoder-400m) | 400M | High accuracy needs | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-400m) |
| XL | [ettin-encoder-1b](https://huggingface.co/jhu-clsp/ettin-encoder-1b) | 1B | Best performance | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-1b) |

### Decoder Models

| Size | Model | Parameters | Best For | Download |
|:-----|:------|:-----------|:---------|:---------|
| XXS | [ettin-decoder-17m](https://huggingface.co/jhu-clsp/ettin-decoder-17m) | 17M | Lightweight generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-17m) |
| XS | [ettin-decoder-32m](https://huggingface.co/jhu-clsp/ettin-decoder-32m) | 32M | Quick prototyping | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-32m) |
| Small | [ettin-decoder-68m](https://huggingface.co/jhu-clsp/ettin-decoder-68m) | 68M | Efficient generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-68m) |
| Base | [ettin-decoder-150m](https://huggingface.co/jhu-clsp/ettin-decoder-150m) | 150M | Standard generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-150m) |
| Large | [ettin-decoder-400m](https://huggingface.co/jhu-clsp/ettin-decoder-400m) | 400M | Quality generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-400m) |
| XL | [ettin-decoder-1b](https://huggingface.co/jhu-clsp/ettin-decoder-1b) | 1B | Best generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-1b) |

### Cross-Objective Models

These models demonstrate what happens when you continue training encoders as decoders (and vice versa). **Important**: Load these models using the architecture they were *converted to*, not their original architecture.

#### Encoders Trained from Decoders (Decoder → MLM)
**Load as encoders** using `AutoModel` or `AutoModelForMaskedLM`:

| Size | Model | Parameters | Description | Download |
|:-----|:------|:-----------|:------------|:---------|
| XXS | [ettin-encoder-from-decoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-17m) | 17M | Decoder → MLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-17m) |
| XS | [ettin-encoder-from-decoder-32m](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-32m) | 32M | Decoder → MLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-32m) |
| Small | [ettin-encoder-from-decoder-68m](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-68m) | 68M | Decoder → MLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-68m) |
| Base | [ettin-encoder-from-decoder-150m](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-150m) | 150M | Decoder → MLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-150m) |
| Large | [ettin-encoder-from-decoder-400m](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-400m) | 400M | Decoder → MLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-400m) |
| XL | [ettin-encoder-from-decoder-1b](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-1b) | 1B | Decoder → MLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-encoder-from-decoder-1b) |

#### Decoders Trained from Encoders (Encoder → CLM)
**Load as decoders** using `AutoModelForCausalLM`:

| Size | Model | Parameters | Description | Download |
|:-----|:------|:-----------|:------------|:---------|
| XXS | [ettin-decoder-from-encoder-17m](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-17m) | 17M | Encoder → CLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-17m) |
| XS | [ettin-decoder-from-encoder-32m](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-32m) | 32M | Encoder → CLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-32m) |
| Small | [ettin-decoder-from-encoder-68m](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-68m) | 68M | Encoder → CLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-68m) |
| Base | [ettin-decoder-from-encoder-150m](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-150m) | 150M | Encoder → CLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-150m) |
| Large | [ettin-decoder-from-encoder-400m](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-400m) | 400M | Encoder → CLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-400m) |
| XL | [ettin-decoder-from-encoder-1b](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-1b) | 1B | Encoder → CLM continued training | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-from-encoder-1b) |

**Example Usage for Cross-Objective Models:**
```python
# Encoder-from-decoder: Load as encoder
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-encoder-from-decoder-150m")
model = AutoModel.from_pretrained("jhu-clsp/ettin-encoder-from-decoder-150m")

# Decoder-from-encoder: Load as decoder
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-from-encoder-150m")
model = AutoModelForCausalLM.from_pretrained("jhu-clsp/ettin-decoder-from-encoder-150m")
```

## Accessing Training Checkpoints

Beyond the final models listed above, we provide access to intermediate training checkpoints for research and analysis. These checkpoints allow you to study model behavior and performance throughout the training process. Checkpoints are available both in HuggingFace format and in raw form for continued pre-training (e.g., Composer format).

#### Raw Checkpoints
All raw training checkpoints are available in the [jhu-clsp/ettin-checkpoints](https://huggingface.co/datasets/jhu-clsp/ettin-checkpoints) dataset.

#### HuggingFace Format Checkpoints
Each model repository contains multiple tagged versions representing different training stages:

- **`step{number}`** - Pretraining phase checkpoints (e.g., `step599525`, `step596528`)
- **`ext{number}`** - Extension/mid-training phase checkpoints (e.g., `ext1000`, `ext2000`)
- **`decay{number}`** - Decay phase checkpoints (e.g., `decay100`, `decay500`)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a specific pretraining checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "jhu-clsp/ettin-decoder-400m",
    revision="step590532"  # Specific checkpoint tag
)

# Load an extension phase checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "jhu-clsp/ettin-decoder-400m",
    revision="ext1000"
)

# Load a decay phase checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "jhu-clsp/ettin-decoder-400m",
    revision="decay100"
)
```

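To see which checkpoint tags a given repository actually exposes, you can list its Git refs with `huggingface_hub`. A small sketch using the standard `HfApi.list_repo_refs` call; the prefix filtering below is just illustrative.

```python
from huggingface_hub import HfApi

api = HfApi()
refs = api.list_repo_refs("jhu-clsp/ettin-decoder-400m")

# Tags follow the step{N} / ext{N} / decay{N} naming scheme described above
tags = sorted(ref.name for ref in refs.tags)
print(len(tags), "tagged checkpoints")
print([t for t in tags if t.startswith("decay")][:5])
```
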
This checkpoint availability enables detailed analysis of training dynamics, loss curves, and capability emergence across the complete 2T-token training process.

## 🔬 Research Applications

### What Makes Ettin Unique

Ettin provides the first **controlled comparison** of encoder vs. decoder architectures:

- **Identical Training Data**: Same 2T token mixture across all models
- **Matched Architectures**: Only attention patterns and objectives differ
- **Open Everything**: Training data, model weights, and batch-level training order
- **Multiple Scales**: Fair comparison from 17M to 1B parameters
- **250+ Checkpoints**: Complete training trajectory analysis

### Use Cases for Researchers

- **Architecture Studies**: Compare encoder vs. decoder capabilities fairly
- **Training Dynamics**: Analyze 250+ checkpoints with batch-level data ordering
- **Scaling Laws**: Study how architectural advantages change with scale
- **Transfer Learning**: Investigate cross-objective training effectiveness
- **Replication Studies**: First open replication of the ModernBERT training recipe

### Reproducibility

All training artifacts are publicly available:
- Training data with exact batch ordering
- Model checkpoints every 8.5B tokens
- Complete hyperparameter configurations
- Training code and evaluation scripts

## Training Details

**Data:** High-quality mixture including DCLM, Dolma v1.7, scientific papers, code, and curated sources, totaling 2T+ tokens

**Architecture:** Transformer with RoPE, GLU activations, and prenorm layers

**Training Phases:**
- **Pre-training**: 1.7T tokens with a diverse data mixture
- **Mid-training**: 250B tokens of higher-quality filtered data, with context extension to 8K
- **Decay phase**: 100B tokens of premium data sources

**Key Features:**
- Context length: up to 8K tokens
- Vocabulary: 50,368 tokens (ModernBERT tokenizer)
- Deep but efficient architectures following MobileLLM principles

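These numbers can be double-checked against the released configs. A quick sketch; the field names follow the usual `transformers` (ModernBERT-style) config and are an assumption here, not part of the official card:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("jhu-clsp/ettin-encoder-150m")
print(config.vocab_size)                                  # 50,368 per the card
print(getattr(config, "max_position_embeddings", None))   # context length (8K per the card)
print(config.num_hidden_layers, config.hidden_size)       # 22 / 768 for the 150M size (see table below)
```
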
## Model Architecture

| Parameter | 17M | 32M | 68M | 150M | 400M | 1B |
|:----------|:----|:----|:----|:-----|:-----|:---|
| Layers | 7 | 10 | 19 | 22 | 28 | 28 |
| Hidden Size | 256 | 384 | 512 | 768 | 1024 | 1792 |
| Intermediate Size | 384 | 576 | 768 | 1152 | 2624 | 3840 |
| Attention Heads | 4 | 6 | 8 | 12 | 16 | 28 |

## Usage Examples

### Encoder: Masked Language Modeling
<details>
<summary>Click to expand <strong>encoder</strong> usage examples</summary>

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

# Load MLM model
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-encoder-150m")
model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/ettin-encoder-150m")

def predict_masked_token(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Get predictions for [MASK] tokens
    mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
    predictions = outputs.logits[mask_indices]

    # Get top 5 predictions
    top_tokens = torch.topk(predictions, 5, dim=-1)
    return [tokenizer.decode(token) for token in top_tokens.indices[0]]

# Example
masked_text = "The capital of France is [MASK]."
predictions = predict_masked_token(masked_text)
print(f"Predictions: {predictions}")
```
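
Since this card is tagged `fill-mask`, the higher-level `pipeline` API gives the same kind of top-k predictions with less code; a short sketch (the `top_k` value is arbitrary):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jhu-clsp/ettin-encoder-150m")
for pred in fill_mask("The capital of France is [MASK].", top_k=5):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```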

</details>

### Decoder: Text Generation

<details>
<summary>Click to expand <strong>decoder text generation</strong></summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-150m")
model = AutoModelForCausalLM.from_pretrained("jhu-clsp/ettin-decoder-150m")

# Set pad token if needed
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

def generate_text(prompt, max_length=100, temperature=0.7):
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            max_length=max_length,
            temperature=temperature,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
            num_return_sequences=1
        )

    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage
prompt = "The future of artificial intelligence is"
generated = generate_text(prompt)
print(generated)
```

</details>

## Fine-tuning Examples

### Encoders
<details><summary>Click to see how to finetune this into a dense embedding model using Sentence Transformers</summary>

```python
import argparse

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import TripletEvaluator
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

def main():
    # Parse the lr & model name
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=8e-5)
    parser.add_argument("--model_name", type=str, default="jhu-clsp/ettin-encoder-150m")
    args = parser.parse_args()
    lr = args.lr
    model_name = args.model_name
    model_shortname = model_name.split("/")[-1]

    # 1. Load a model to finetune
    model = SentenceTransformer(model_name)

    # 2. Load a dataset to finetune on
    dataset = load_dataset(
        "sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1",
        "triplet-hard",
        split="train",
    )
    dataset_dict = dataset.train_test_split(test_size=1_000, seed=12)
    train_dataset = dataset_dict["train"].select(range(1_250_000))
    eval_dataset = dataset_dict["test"]

    # 3. Define a loss function
    loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=16)  # Increase mini_batch_size if you have enough VRAM

    run_name = f"{model_shortname}-DPR-{lr}"
    # 4. (Optional) Specify training arguments
    args = SentenceTransformerTrainingArguments(
        # Required parameter:
        output_dir=f"output/{model_shortname}/{run_name}",
        # Optional training parameters:
        num_train_epochs=1,
        per_device_train_batch_size=512,
        per_device_eval_batch_size=512,
        warmup_ratio=0.05,
        fp16=False,  # Set to False if your GPU can't handle FP16
        bf16=True,  # Set to True if your GPU supports BF16
        batch_sampler=BatchSamplers.NO_DUPLICATES,  # (Cached)MultipleNegativesRankingLoss benefits from no duplicates
        learning_rate=lr,
        # Optional tracking/debugging parameters:
        save_strategy="steps",
        save_steps=500,
        save_total_limit=2,
        logging_steps=500,
        run_name=run_name,  # Used in `wandb`, `tensorboard`, `neptune`, etc. if installed
    )

    # 5. (Optional) Create an evaluator & evaluate the base model
    dev_evaluator = TripletEvaluator(
        anchors=eval_dataset["query"],
        positives=eval_dataset["positive"],
        negatives=eval_dataset["negative"],
        name="msmarco-co-condenser-dev",
    )
    dev_evaluator(model)

    # 6. Create a trainer & train
    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        loss=loss,
        evaluator=dev_evaluator,
    )
    trainer.train()

    # 7. (Optional) Evaluate the trained model on the evaluator after training
    dev_evaluator(model)

    # 8. Save the model
    model.save_pretrained(f"output/{model_shortname}/{run_name}/final")

    # 9. (Optional) Push it to the Hugging Face Hub
    model.push_to_hub(run_name, private=False)

if __name__ == "__main__":
    main()
```
</details>

<details><summary>Click to see how to finetune this into a multi-vector embedding model with PyLate</summary>

```python
from datasets import load_dataset
from pylate import losses, models, utils
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

def main():
    # Load the datasets required for knowledge distillation (train, queries, documents)
    train = load_dataset(
        path="lightonai/ms-marco-en-bge",
        name="train",
    )

    queries = load_dataset(
        path="lightonai/ms-marco-en-bge",
        name="queries",
    )

    documents = load_dataset(
        path="lightonai/ms-marco-en-bge",
        name="documents",
    )

    # Set the transformation to load the documents/queries texts using the corresponding ids on the fly
    train.set_transform(
        utils.KDProcessing(queries=queries, documents=documents).transform,
    )

    # Define the base model, training parameters, and output directory
    num_train_epochs = 1
    lr = 8e-5
    batch_size = 16
    accum_steps = 1
    model_name = "jhu-clsp/ettin-encoder-150m"
    model_shortname = model_name.split("/")[-1]

    # Set the run name for logging and output directory
    run_name = f"{model_shortname}-colbert-KD-{lr}"
    output_dir = f"output/{model_shortname}/{run_name}"

    # Initialize the ColBERT model from the base model
    model = models.ColBERT(model_name_or_path=model_name)

    # Configure the training arguments (e.g., epochs, batch size, learning rate)
    args = SentenceTransformerTrainingArguments(
        output_dir=output_dir,
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=batch_size,
        fp16=False,  # Set to False if you get an error that your GPU can't run on FP16
        bf16=True,  # Set to True if you have a GPU that supports BF16
        run_name=run_name,
        logging_steps=10,
        learning_rate=lr,
        gradient_accumulation_steps=accum_steps,
        warmup_ratio=0.05,
    )

    # Use the Distillation loss function for training
    train_loss = losses.Distillation(model=model)

    # Initialize the trainer
    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train,
        loss=train_loss,
        data_collator=utils.ColBERTCollator(tokenize_fn=model.tokenize),
    )

    # Start the training process
    trainer.train()

    model.save_pretrained(f"{output_dir}/final")

if __name__ == "__main__":
    main()
```
</details>

<details><summary>Click to see how to finetune this into a sparse retrieval model using Sentence Transformers</summary>

```python
import logging

from datasets import load_dataset

from sentence_transformers import (
    SparseEncoder,
    SparseEncoderModelCardData,
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator
from sentence_transformers.sparse_encoder.losses import SparseMultipleNegativesRankingLoss, SpladeLoss
from sentence_transformers.training_args import BatchSamplers

logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)

# 1. Load a model to finetune with 2. (Optional) model card data
model = SparseEncoder(
    "jhu-clsp/ettin-encoder-150m",
    model_card_data=SparseEncoderModelCardData(
        language="en",
        license="apache-2.0",
    )
)

# 3. Load a dataset to finetune on
full_dataset = load_dataset("sentence-transformers/natural-questions", split="train").select(range(100_000))
dataset_dict = full_dataset.train_test_split(test_size=1_000, seed=12)
train_dataset = dataset_dict["train"]
eval_dataset = dataset_dict["test"]

# 4. Define a loss function
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model),
    query_regularizer_weight=5e-5,
    document_regularizer_weight=3e-5,
)

# 5. (Optional) Specify training arguments
run_name = "splade-ettin-encoder-150m-nq"
args = SparseEncoderTrainingArguments(
    # Required parameter:
    output_dir=f"models/{run_name}",
    # Optional training parameters:
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,  # Set to False if you get an error that your GPU can't run on FP16
    bf16=False,  # Set to True if you have a GPU that supports BF16
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # MultipleNegativesRankingLoss benefits from no duplicate samples in a batch
    # Optional tracking/debugging parameters:
    eval_strategy="steps",
    eval_steps=1000,
    save_strategy="steps",
    save_steps=1000,
    save_total_limit=2,
    logging_steps=200,
    run_name=run_name,  # Will be used in W&B if `wandb` is installed
)

# 6. (Optional) Create an evaluator & evaluate the base model
dev_evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"], batch_size=16)

# 7. Create a trainer & train
trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
    evaluator=dev_evaluator,
)
trainer.train()

# 8. Evaluate the model performance again after training
dev_evaluator(model)

# 9. Save the trained model
model.save_pretrained(f"models/{run_name}/final")

# 10. (Optional) Push it to the Hugging Face Hub
model.push_to_hub(run_name)
```
</details>

<details><summary>Click to see how to finetune this into a reranker model using Sentence Transformers</summary>

```python
import logging
import traceback

import torch
from datasets import load_dataset

from sentence_transformers import SentenceTransformer
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderModelCardData,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.evaluation import (
    CrossEncoderNanoBEIREvaluator,
    CrossEncoderRerankingEvaluator,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss
from sentence_transformers.evaluation import SequentialEvaluator
from sentence_transformers.util import mine_hard_negatives

# Set the log level to INFO to get more information
logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)


def main():
    model_name = "jhu-clsp/ettin-encoder-150m"

    train_batch_size = 64
    num_epochs = 1
    num_hard_negatives = 5  # How many hard negatives should be mined for each question-answer pair

    # 1a. Load a model to finetune with 1b. (Optional) model card data
    model = CrossEncoder(
        model_name,
        model_card_data=CrossEncoderModelCardData(
            language="en",
            license="apache-2.0",
        ),
    )
    print("Model max length:", model.max_length)
    print("Model num labels:", model.num_labels)

    # 2a. Load the GooAQ dataset: https://huggingface.co/datasets/sentence-transformers/gooaq
    logging.info("Read the gooaq training dataset")
    full_dataset = load_dataset("sentence-transformers/gooaq", split="train").select(range(100_000))
    dataset_dict = full_dataset.train_test_split(test_size=1_000, seed=12)
    train_dataset = dataset_dict["train"]
    eval_dataset = dataset_dict["test"]
    logging.info(train_dataset)
    logging.info(eval_dataset)

    # 2b. Modify our training dataset to include hard negatives using a very efficient embedding model
    embedding_model = SentenceTransformer("sentence-transformers/static-retrieval-mrl-en-v1", device="cpu")
    hard_train_dataset = mine_hard_negatives(
        train_dataset,
        embedding_model,
        num_negatives=num_hard_negatives,  # How many negatives per question-answer pair
        margin=0,  # Similarity between query and negative samples should be x lower than query-positive similarity
        range_min=0,  # Skip the x most similar samples
        range_max=100,  # Consider only the x most similar samples
        sampling_strategy="top",  # Sample the top negatives from the range
        batch_size=4096,  # Use a batch size of 4096 for the embedding model
        output_format="labeled-pair",  # The output format is (query, passage, label), as required by BinaryCrossEntropyLoss
        use_faiss=True,
    )
    logging.info(hard_train_dataset)

    # 2c. (Optionally) Save the hard training dataset to disk
    # hard_train_dataset.save_to_disk("gooaq-hard-train")
    # Load again with:
    # hard_train_dataset = load_from_disk("gooaq-hard-train")

    # 3. Define our training loss.
    # pos_weight is recommended to be set as the ratio between positives to negatives, a.k.a. `num_hard_negatives`
    loss = BinaryCrossEntropyLoss(model=model, pos_weight=torch.tensor(num_hard_negatives))

    # 4a. Define evaluators. We use the CrossEncoderNanoBEIREvaluator, which is a light-weight evaluator for English reranking
    nano_beir_evaluator = CrossEncoderNanoBEIREvaluator(
        dataset_names=["msmarco", "nfcorpus", "nq"],
        batch_size=train_batch_size,
    )

    # 4b. Define a reranking evaluator by mining hard negatives given query-answer pairs
    # We include the positive answer in the list of negatives, so the evaluator can use the performance of the
    # embedding model as a baseline.
    hard_eval_dataset = mine_hard_negatives(
        eval_dataset,
        embedding_model,
        corpus=full_dataset["answer"],  # Use the full dataset as the corpus
        num_negatives=30,  # How many documents to rerank
        batch_size=4096,
        include_positives=True,
        output_format="n-tuple",
        use_faiss=True,
    )
    logging.info(hard_eval_dataset)
    reranking_evaluator = CrossEncoderRerankingEvaluator(
        samples=[
            {
                "query": sample["question"],
                "positive": [sample["answer"]],
                "documents": [sample[column_name] for column_name in hard_eval_dataset.column_names[2:]],
            }
            for sample in hard_eval_dataset
        ],
        batch_size=train_batch_size,
        name="gooaq-dev",
        # Realistic setting: only rerank the positives that the retriever found
        # Set to True to rerank *all* positives
        always_rerank_positives=False,
    )

    # 4c. Combine the evaluators & run the base model on them
    evaluator = SequentialEvaluator([reranking_evaluator, nano_beir_evaluator])
    evaluator(model)

    # 5. Define the training arguments
    short_model_name = model_name if "/" not in model_name else model_name.split("/")[-1]
    run_name = f"reranker-{short_model_name}-gooaq-bce"
    args = CrossEncoderTrainingArguments(
        # Required parameter:
        output_dir=f"models/{run_name}",
        # Optional training parameters:
        num_train_epochs=num_epochs,
        per_device_train_batch_size=train_batch_size,
        per_device_eval_batch_size=train_batch_size,
        learning_rate=2e-5,
        warmup_ratio=0.1,
        fp16=False,  # Set to False if you get an error that your GPU can't run on FP16
        bf16=True,  # Set to True if you have a GPU that supports BF16
        dataloader_num_workers=4,
        load_best_model_at_end=True,
        metric_for_best_model="eval_gooaq-dev_ndcg@10",
        # Optional tracking/debugging parameters:
        eval_strategy="steps",
        eval_steps=1000,
        save_strategy="steps",
        save_steps=1000,
        save_total_limit=2,
        logging_steps=200,
        logging_first_step=True,
        run_name=run_name,  # Will be used in W&B if `wandb` is installed
        seed=12,
    )

    # 6. Create the trainer & start training
    trainer = CrossEncoderTrainer(
        model=model,
        args=args,
        train_dataset=hard_train_dataset,
        loss=loss,
        evaluator=evaluator,
    )
    trainer.train()

    # 7. Evaluate the final model, useful to include these in the model card
    evaluator(model)

    # 8. Save the final model
    final_output_dir = f"models/{run_name}/final"
    model.save_pretrained(final_output_dir)

    # 9. (Optional) save the model to the Hugging Face Hub
    # It is recommended to run `huggingface-cli login` to log into your Hugging Face account first
    try:
        model.push_to_hub(run_name)
    except Exception:
        logging.error(
            f"Error uploading model to the Hugging Face Hub:\n{traceback.format_exc()}To upload it manually, you can run "
            f"`huggingface-cli login`, followed by loading the model using `model = CrossEncoder({final_output_dir!r})` "
            f"and saving it using `model.push_to_hub('{run_name}')`."
        )


if __name__ == "__main__":
    main()
```
</details>

### Decoders

<details>
<summary>Click to expand decoder training code</summary>

**Full training:**
```bash
python trl/scripts/sft.py \
    --model_name_or_path jhu-clsp/ettin-decoder-17m \
    --dataset_name trl-lib/Capybara \
    --learning_rate 2.0e-5 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing \
    --eos_token '<|im_end|>' \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir ettin-decoder-17m \
    --push_to_hub
```

**LoRA:**
```bash
python trl/scripts/sft.py \
    --model_name_or_path jhu-clsp/ettin-decoder-17m \
    --dataset_name trl-lib/Capybara \
    --learning_rate 2.0e-4 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing \
    --eos_token '<|im_end|>' \
    --eval_strategy steps \
    --eval_steps 100 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16 \
    --output_dir ettin-decoder-17m \
    --push_to_hub
```

with `sft.py`:
```python
import argparse

from datasets import load_dataset
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from transformers.models.auto.modeling_auto import MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES

from trl import (
    ModelConfig,
    ScriptArguments,
    SFTConfig,
    SFTTrainer,
    TrlParser,
    clone_chat_template,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
)


def main(script_args, training_args, model_args):
    ################
    # Model init kwargs & Tokenizer
    ################
    quantization_config = get_quantization_config(model_args)
    model_kwargs = dict(
        revision=model_args.model_revision,
        trust_remote_code=model_args.trust_remote_code,
        attn_implementation=model_args.attn_implementation,
        torch_dtype=model_args.torch_dtype,
        use_cache=False if training_args.gradient_checkpointing else True,
        device_map=get_kbit_device_map() if quantization_config is not None else None,
        quantization_config=quantization_config,
    )

    # Create model
    config = AutoConfig.from_pretrained(model_args.model_name_or_path)
    valid_image_text_architectures = MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES.values()

    if config.architectures and any(arch in valid_image_text_architectures for arch in config.architectures):
        from transformers import AutoModelForImageTextToText

        model_kwargs.pop("use_cache", None)  # Image models do not support cache
        model = AutoModelForImageTextToText.from_pretrained(model_args.model_name_or_path, **model_kwargs)
    else:
        model = AutoModelForCausalLM.from_pretrained(model_args.model_name_or_path, **model_kwargs)

    # Create tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.model_name_or_path, trust_remote_code=model_args.trust_remote_code, use_fast=True
    )

    # Set default chat template if needed
    if tokenizer.chat_template is None:
        # TODO: source should be passed as an argument
        model, tokenizer = clone_chat_template(model, tokenizer, "Qwen/Qwen3-0.6B")

    ################
    # Dataset
    ################
    dataset = load_dataset(script_args.dataset_name, name=script_args.dataset_config)

    ################
    # Training
    ################
    trainer = SFTTrainer(
        model=model,
        args=training_args,
        train_dataset=dataset[script_args.dataset_train_split],
        eval_dataset=dataset[script_args.dataset_test_split] if training_args.eval_strategy != "no" else None,
        processing_class=tokenizer,
        peft_config=get_peft_config(model_args),
    )

    trainer.train()

    # Save and push to hub
    trainer.save_model(training_args.output_dir)
    if training_args.push_to_hub:
        trainer.push_to_hub(dataset_name=script_args.dataset_name)


def make_parser(subparsers: argparse._SubParsersAction = None):
    dataclass_types = (ScriptArguments, SFTConfig, ModelConfig)
    if subparsers is not None:
        parser = subparsers.add_parser("sft", help="Run the SFT training script", dataclass_types=dataclass_types)
    else:
        parser = TrlParser(dataclass_types)
    return parser


if __name__ == "__main__":
    parser = make_parser()
    # When using the trl cli, this script may be run with additional arguments, corresponding to accelerate arguments.
    # To ensure that their parsing does not interfere with the script arguments, parse the arguments with
    # `return_remaining_strings=True`, then ignore the remaining strings.
    script_args, training_args, model_args, _ = parser.parse_args_and_config(return_remaining_strings=True)
    main(script_args, training_args, model_args)
```
</details>

## Citation

If you use Ettin models in your research, please cite our work:

```bibtex
@misc{weller2025seqvsseqopen,
  title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
  author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
  year={2025},
  eprint={2507.11412},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.11412},
}
```