MLMvsCLM

Should We Still Pretrain Encoders with Masked Language Modeling?

This page gathers all the artefacts and references related to the paper Should We Still Pretrain Encoders with Masked Language Modeling? (Gisserot-Boukhlef et al.).

Abstract

Learning high-quality text representations is fundamental to a wide range of NLP tasks. While encoder pretraining has traditionally relied on Masked Language Modeling (MLM), recent evidence suggests that decoder models pretrained with Causal Language Modeling (CLM) can be effectively repurposed as encoders, often surpassing traditional encoders on text representation benchmarks. However, it remains unclear whether these gains reflect an inherent advantage of the CLM objective or arise from confounding factors such as model and data scale. In this paper, we address this question through a series of large-scale, carefully controlled pretraining ablations, training a total of 30 models ranging from 210 million to 1 billion parameters, and conducting over 15,000 fine-tuning and evaluation runs. We find that while training with MLM generally yields better performance across text representation tasks, CLM-trained models are more data-efficient and demonstrate improved fine-tuning stability. Building on these findings, we experimentally show that a biphasic training strategy that sequentially applies CLM and then MLM achieves optimal performance under a fixed computational training budget. Moreover, we demonstrate that this strategy becomes more appealing when initializing from readily available pretrained CLM models (from the existing LLM ecosystem), reducing the computational burden needed to train best-in-class encoder models.
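
For a quick intuition about the two objectives (this is an illustrative sketch, not the paper's training code; see the Optimus codebase below for that), the snippet shows how training labels are typically built for CLM and for MLM with a 40% masking ratio, assuming the Hugging Face convention that label -100 is ignored by the loss.

import torch

def clm_inputs(input_ids):
    # Causal LM: every position predicts the next token, and causal attention
    # keeps the model from looking ahead. With Hugging Face models, passing
    # labels == input_ids is enough, since the shift happens internally.
    return input_ids, input_ids.clone()

def mlm_inputs(input_ids, mask_token_id, mask_ratio=0.4):
    # Masked LM: corrupt a random fraction of positions (40% here) and compute
    # the loss only there (-100 marks ignored positions). Real recipes add
    # details (special-token handling, 80/10/10 corruption) omitted for brevity.
    labels = input_ids.clone()
    masked = torch.rand(input_ids.shape) < mask_ratio
    labels[~masked] = -100
    corrupted = input_ids.clone()
    corrupted[masked] = mask_token_id
    return corrupted, labels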

Resources

  • Preprint: For the full details of our work
  • Blog post: A quick overview if you only have 5 minutes
  • EuroBERT: The encoder model architecture used in our experiments
  • Training codebase: Optimus, our distributed framework for training encoders at scale
  • Evaluation codebase: EncodEval, our framework for evaluating encoder models across a wide range of representation tasks

Models

We release all the models trained and evaluated in the paper.

  • Model names follow the format [model size]-[objective]-[number of steps]: e.g., 610m-clm-42k refers to a 610M-parameter model trained with CLM for 42k steps.
  • For models trained in two stages, names follow the extended format [model size]-[objective #1]-[number of steps #1]-[objective #2]-[number of steps #2], where [number of steps #2] indicates the total number of training steps: e.g., 610m-clm-10k-mlm40-42k is a 610M model first trained with CLM for 10k steps, then further trained with MLM (using a 40% masking ratio) for an additional 32k steps, totaling 42k.
  • Models that were continued from a decayed checkpoint use the "dec" prefix for the first step count: e.g., 610m-clm-dec42k-mlm40-64k represents a 610M model first trained and decayed with CLM for 42k steps, then continued with MLM (40% masking ratio) for 22k more steps, totaling 64k.
  • By default, model names refer to the final checkpoint. Intermediate checkpoints are indicated by appending the step number at the end: e.g., 610m-mlm40-42k-1000 corresponds to checkpoint 1,000 of a 610M model trained with MLM (40% masking) for 42k steps.
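
The checkpoints can be loaded directly from this organization with Transformers. Below is a minimal sketch: the repository id is an example following the naming scheme above, and trust_remote_code may be required if the checkpoint uses the custom EuroBERT-style architecture.

from transformers import AutoModel, AutoTokenizer

# Example repository id following the naming scheme above: a 610M-parameter
# model trained with MLM (40% masking ratio) for 42k steps.
repo_id = "MLMvsCLM/610m-mlm40-42k"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

# Encode a sentence; the hidden state of the first token can serve as a
# simple sentence-level representation.
inputs = tokenizer("Should we still pretrain encoders with masked language modeling?", return_tensors="pt")
embedding = model(**inputs).last_hidden_state[:, 0]
print(embedding.shape)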

First authors' contact information

Citation

If you find our work useful, please consider citing our paper:

@misc{gisserotboukhlef2025pretrainencodersmaskedlanguage,
      title={Should We Still Pretrain Encoders with Masked Language Modeling?}, 
      author={Hippolyte Gisserot-Boukhlef and Nicolas Boizard and Manuel Faysse and Duarte M. Alves and Emmanuel Malherbe and André F. T. Martins and Céline Hudelot and Pierre Colombo},
      year={2025},
      eprint={2507.00994},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.00994}, 
}
